Anthropic has rejected the United States Department of Defense’s ultimatum to remove safety restrictions from its artificial intelligence model Claude, putting a $200 million contract at risk after chief executive Dario Amodei declared the company “cannot in good conscience” comply with the government’s demands.
The standoff centres on Anthropic’s refusal to allow Claude to be used for mass domestic surveillance of Americans or in fully autonomous weapons systems that operate without human oversight. The Pentagon issued a deadline of 5:01 pm ET on Friday, threatening to cancel its contract with the company and designate it a supply chain risk if it did not agree to permit all lawful uses of the model without restriction.
Amodei wrote in a statement on Thursday that the threats would not alter the company’s position. “Our strong preference is to continue to serve the Department and our warfighters – with our two requested safeguards in place,” he said, adding that the company would “work to enable a smooth transition to another provider” should the Pentagon proceed with terminating the contract.
Pentagon spokesman Sean Parnell said on Thursday that the department has “no interest” in using AI for mass surveillance of Americans or to develop autonomous weapons without human involvement, and framed the dispute as straightforward.
“This is a simple, common-sense request that will prevent Anthropic from jeopardising critical military operations,” Parnell wrote on X. Undersecretary of Defense Emil Michael went further, calling Amodei a “liar” with a “God-complex” and insisting the Pentagon would “ALWAYS adhere to the law but not bend to whims of any one for-profit tech company.”
A source close to Anthropic told Reuters that the company was not accusing the Pentagon of planning to use AI for those purposes, but was instead making a product safety judgement. The source said AI systems were insufficiently reliable for “life-or-death targeting” due to unpredictable behaviour in novel situations, which could cause “friendly fire, mission failure or unintended escalation.”
The Pentagon has also threatened to invoke the Defence Production Act, a Cold War-era statute that would allow the government to compel use of Anthropic’s tools without a contractual agreement. Katie Sweeten, a tech lawyer and former Department of Justice official, told Politico the approach was contradictory. “I don’t know how you can both use the DPA to take over this product and also at the same time say this product is a massive national security risk,” she said.
Alan Rozenshtein, associate professor of law at the University of Minnesota Law School, told the Financial Times that a supply chain risk designation would be “pretty far outside what the statute possibly constitutes,” adding that Anthropic likely has strong legal defences. The designation has historically been reserved for foreign adversaries such as China’s Huawei.
More than 200 employees from Google and OpenAI signed an open letter backing Anthropic’s position. Rivals including OpenAI, Google and Elon Musk’s xAI have all agreed to allow the Pentagon to use their models for all lawful purposes, with xAI’s models also cleared this week for use on classified systems.
