
Anthropic unveils Claude Gov models for US defence and intelligence agencies

By News Room · 6 June 2025

Anthropic has launched Claude Gov, a specialised artificial intelligence service designed exclusively for US defence and intelligence agencies, marking the company’s latest push into the government sector.

The AI firm announced on Thursday that its new models have been custom-built to handle classified information and support national security operations, with “looser guardrails” compared to consumer-facing versions of Claude.

“What makes Claude Gov models special is that they were custom-built for our national security customers,” said Thiyagu Ramasamy, head of public sector at Anthropic. “By understanding their operational needs and incorporating real-world feedback, we’ve created a set of safe, reliable, and capable models that can excel within the unique constraints and requirements of classified environments.”

The models are already deployed by agencies “at the highest level of US national security,” according to Anthropic, though the company declined to specify how long they have been in use or provide usage statistics.

Claude Gov models are designed to handle government-specific tasks including threat assessment, intelligence analysis, and strategic planning. Key features include improved handling of classified materials: the models are programmed to “refuse less when engaging with classified information” that consumer versions would typically flag and avoid.

The models also offer enhanced understanding of defence and intelligence documents, better proficiency in languages and dialects relevant to national security, and improved interpretation of complex cybersecurity data.

Access to Claude Gov will be limited to government agencies handling classified information, with the models capable of operating in top secret environments.

The launch follows contractual exceptions to Anthropic’s usage policy, created at least eleven months ago, which are “carefully calibrated to enable beneficial uses by carefully selected government agencies.” Whilst certain uses remain prohibited, including disinformation campaigns, weapons design, and malicious cyber operations, Anthropic can “tailor use restrictions to the mission and legal authorities of a government entity.”

Claude Gov represents Anthropic’s response to OpenAI’s ChatGPT Gov, launched in January for US government agencies. OpenAI reported that over 90,000 federal, state, and local government employees had used its technology within the past year for tasks including document translation, policy memo drafting, and application building.

The move reflects a broader trend of AI companies seeking government contracts in an uncertain regulatory landscape. Scale AI recently signed deals with the Department of Defense and Qatar’s government, whilst Anthropic participates in Palantir’s FedStart programme for companies deploying software to the federal government.

The use of AI by government agencies has faced scrutiny due to documented cases of bias in facial recognition systems, predictive policing, and welfare assessment algorithms, alongside concerns about impacts on minorities and vulnerable communities.

