Microsoft has filed a court brief supporting Anthropic’s lawsuit against the Pentagon, urging a federal judge to grant a temporary restraining order blocking the Defence Department’s designation of the AI company as a supply chain risk.
The software group’s filing, submitted Tuesday in the US District Court in San Francisco, marks the first intervention by a major technology company in what has become one of Silicon Valley’s most consequential legal battles with the Trump administration. Microsoft warned that the Pentagon’s “drastic” and “unprecedented” action against Anthropic would have “broad negative ramifications” for the US technology industry.
The dispute stems from Anthropic’s refusal to grant the military unfettered access to its Claude AI models. The company had insisted on contractual guarantees that Claude would not be used for fully autonomous weapons or domestic mass surveillance, a position Defence Secretary Pete Hegseth publicly rejected.
Talks between Anthropic and the Pentagon collapsed late last month, after which Hegseth designated Anthropic a national security supply chain risk on 3 March, a label historically reserved for companies linked to foreign adversaries such as China and Russia.
Microsoft argued that enforcing the designation without pause would force technology contractors to “act immediately to alter existing product and contract configurations” used by the Defence Department.
A Microsoft spokesperson said the company “believes that American AI should not be used to conduct domestic mass surveillance or start a war without human control,” adding that such a position “is consistent with the law and broadly supported by American society.”
Microsoft’s financial exposure to Anthropic’s fate is significant. In November, Microsoft agreed to invest up to $5 billion in Anthropic and signed a $30 billion cloud-services deal with the company. Despite also holding a $135 billion stake in OpenAI, Microsoft has integrated Claude models across its Microsoft 365 Copilot, GitHub Copilot, and AI Foundry platforms, and has told customers that Anthropic’s products will remain available through those services for all non-Defence Department work.
Five national security law experts told Reuters that Anthropic appears to have a strong legal case. The Pentagon invoked Section 3252, an obscure statute that allows the defence secretary to exclude companies from contracts to guard against adversary sabotage of military information systems. This statute has never previously been tested in court against a US company. “It’s not at all clear that the statute can even apply to an American company where there’s no foreign entanglement,” Alan Rozenshtein, a professor at the University of Minnesota Law School, told Reuters.
Joel Dodge, a law expert at Vanderbilt University, told Reuters that public statements from Trump and Hegseth – including Trump’s description of Anthropic as a “RADICAL LEFT WOKE COMPANY” – could strengthen Anthropic’s First Amendment claims. “A lot of things Hegseth has said and the Pentagon has done undermine their case and suggest there was personal animus and bad blood between the parties,” Dodge said.
More than 30 researchers from Google and OpenAI, including Google DeepMind chief scientist Jeff Dean, filed a separate friend-of-the-court brief in their personal capacities supporting Anthropic. The Pentagon said it does not comment on pending litigation.