Tech News Vision
OpenAI seals Pentagon AI deal with safety guardrails hours after Anthropic ban

By News Room | 2 March 2026

OpenAI signed an agreement with the Pentagon on Friday to deploy its artificial intelligence models in classified military networks, hours after President Donald Trump ordered all federal agencies to cease using technology from rival Anthropic following a breakdown in negotiations over ethical restrictions.

Sam Altman, OpenAI’s chief executive, announced the deal late on Friday, saying the agreement enshrines three core restrictions: no use of OpenAI technology for domestic mass surveillance, no use to direct autonomous weapons systems, and no use for high-stakes automated decisions such as social credit-style systems. Altman wrote on X that the Department of War, a recent rebrand of the Department of Defense by executive order, “agrees with these principles, reflects them in law and policy, and we put them into our agreement.”

The agreement comes after months of standoff between Anthropic and Pentagon officials, who had pushed for unrestricted access to the company’s Claude system. Anthropic held firm on two of the same restrictions, prohibiting mass surveillance and fully autonomous weapons, and Trump, writing on Truth Social, accused the company of making a “DISASTROUS MISTAKE trying to STRONG-ARM the Pentagon.”

Anthropic said on Friday it would mount a legal challenge to a “supply chain risk” designation imposed by Defense Secretary Pete Hegseth. The classification, normally reserved for companies with ties to foreign adversaries, would require all military contractors to prove their work does not involve Anthropic’s products. “No amount of intimidation or punishment from the Pentagon will change our position on mass domestic surveillance or fully autonomous weapons,” the company said in a statement.

OpenAI said its deal offers stronger protections than earlier classified AI agreements because deployment is restricted to cloud infrastructure, preventing the edge-device use that could power lethal autonomous weapons, and because cleared OpenAI engineers will remain embedded with the Pentagon. “Other AI labs have reduced or removed their safety guardrails and relied primarily on usage policies as their primary safeguards,” the company said, arguing its layered approach better prevents misuse.

Altman said he would ask the Pentagon to offer the same contract terms to all AI companies. Emil Michael, the Pentagon’s under secretary for technology, wrote on X that “having a reliable and steady partner that engages in good faith makes all the difference as we enter into the AI Age.”

The deal landed as employee tensions ran high across the industry. Nearly 500 OpenAI and Google staff had signed an open letter titled “We Will Not Be Divided,” expressing solidarity with Anthropic. Altman told CNBC in an interview on Friday that “for all the differences I have with Anthropic, I mostly trust them as a company, and I think they really do care about safety.”

OpenAI separately announced on Friday it had closed a $110 billion funding round that values the company at $840 billion.


© 2026 Tech News Vision. All Rights Reserved.