OpenAI signed an agreement with the Pentagon on Friday to deploy its artificial intelligence models in classified military networks, hours after President Donald Trump ordered all federal agencies to cease using technology from rival Anthropic following a breakdown in negotiations over ethical restrictions.
Sam Altman, OpenAI’s chief executive, announced the deal late on Friday, saying the agreement enshrines three core restrictions: no use of OpenAI technology for domestic mass surveillance, no use to direct autonomous weapons systems, and no use for high-stakes automated decisions such as social credit-style systems. The CEO wrote on X that the Department of War – a recent rebrand of the Department of Defence by executive order – “agrees with these principles, reflects them in law and policy, and we put them into our agreement.”
The agreement comes after a months-long standoff between Anthropic and Pentagon officials, who had pushed for unrestricted access to the company’s Claude system. Anthropic held firm on two of the same restrictions, prohibiting mass surveillance and fully autonomous weapons, and Trump, writing on Truth Social, accused the company of making a “DISASTROUS MISTAKE trying to STRONG-ARM the Pentagon.”
Anthropic said on Friday it would legally challenge a “supply chain risk” designation imposed by Defence Secretary Pete Hegseth, a classification normally reserved for companies with ties to foreign adversaries; the designation would require all military contractors to prove their work does not involve Anthropic’s products. “No amount of intimidation or punishment from the Pentagon will change our position on mass domestic surveillance or fully autonomous weapons,” the company said in a statement.
OpenAI said its deal offers stronger protections than earlier classified AI agreements because deployment is restricted to cloud infrastructure only, preventing edge-device use that could power autonomous lethal weapons, and because cleared OpenAI engineers will remain embedded with the Pentagon. “Other AI labs have reduced or removed their safety guardrails and relied primarily on usage policies as their primary safeguards,” the company said, arguing its layered approach better prevents misuse.
Altman said he would ask the Pentagon to offer the same contract terms to all AI companies. Emil Michael, the Pentagon’s under secretary for technology, wrote on X that “having a reliable and steady partner that engages in good faith makes all the difference as we enter into the AI Age.”
The deal landed as employee tensions ran high across the industry. Nearly 500 OpenAI and Google staff had signed an open letter titled “We Will Not Be Divided,” expressing solidarity with Anthropic. Altman told CNBC in an interview on Friday that “for all the differences I have with Anthropic, I mostly trust them as a company, and I think they really do care about safety.”
OpenAI separately announced on Friday it had closed a $110 billion funding round that values the company at $840 billion.