Tech News Vision

What's On

Anthropic refuses Pentagon demand to strip AI safeguards from Claude

By News Room · 27 February 2026

Anthropic has rejected the United States Department of Defense’s ultimatum to remove safety restrictions from its artificial intelligence model Claude, putting a $200 million contract at risk after chief executive Dario Amodei declared the company “cannot in good conscience” comply with the government’s demands.

The standoff centres on Anthropic’s refusal to allow Claude to be used for mass domestic surveillance of Americans or in fully autonomous weapons systems that operate without human oversight. The Pentagon issued a deadline of 5:01 pm ET on Friday, threatening to cancel its contract with the company and designate it a supply chain risk if it did not agree to permit all lawful uses of the model without restriction.

Amodei wrote in a statement on Thursday that the threats would not alter the company’s position. “Our strong preference is to continue to serve the Department and our warfighters – with our two requested safeguards in place,” he said, adding that the company would “work to enable a smooth transition to another provider” should the Pentagon proceed with terminating the contract.

Pentagon spokesman Sean Parnell said on Thursday that the department has “no interest” in using AI for mass surveillance of Americans or to develop autonomous weapons without human involvement, and framed the dispute as straightforward.

“This is a simple, common-sense request that will prevent Anthropic from jeopardising critical military operations,” Parnell wrote on X. Undersecretary of Defense Emil Michael went further, calling Amodei a “liar” with a “God-complex” and insisting the Pentagon would “ALWAYS adhere to the law but not bend to whims of any one for-profit tech company.”

A source close to Anthropic told Reuters that the company was not accusing the Pentagon of planning to use AI for those purposes, but was instead making a product safety judgement. The source said AI systems were insufficiently reliable for “life-or-death targeting” due to unpredictable behaviour in novel situations, which could cause “friendly fire, mission failure or unintended escalation.”

The Pentagon has also threatened to invoke the Defence Production Act, a Cold War-era statute that would allow the government to compel use of Anthropic’s tools without a contractual agreement. Katie Sweeten, a tech lawyer and former Department of Justice official, told Politico the approach was contradictory. “I don’t know how you can both use the DPA to take over this product and also at the same time say this product is a massive national security risk,” she said.

Alan Rozenshtein, associate professor of law at the University of Minnesota Law School, told the Financial Times that a supply chain risk designation would be “pretty far outside what the statute possibly constitutes,” adding that Anthropic likely has strong legal defences. The designation has historically been reserved for foreign adversaries such as China’s Huawei.

More than 200 employees from Google and OpenAI signed an open letter backing Anthropic’s position. Rivals including OpenAI, Google and Elon Musk’s xAI have all agreed to allow the Pentagon to use their models for all lawful purposes, with xAI also clearing access to classified systems this week.



© 2026 Tech News Vision. All Rights Reserved.
