OpenAI Partners with Pentagon Hours After Trump Orders Halt to Anthropic AI Use

In a dramatic turn of events in the U.S. AI sector, OpenAI has struck a deal with the U.S. Department of Defense (DoD) just hours after President Donald Trump ordered federal agencies to immediately stop using AI tools developed by Anthropic.

OpenAI CEO Sam Altman announced the agreement on X (formerly Twitter): “Tonight, we reached an agreement with the Department of War to deploy our models in their classified network.” He added that the DoD showed “a deep respect for safety and a desire to partner to achieve the best possible outcome.”

The move comes after Defense Secretary Pete Hegseth labeled Anthropic a “supply chain risk,” a designation usually reserved for foreign adversaries. The classification bars DoD contractors and vendors from utilizing Anthropic’s models, effectively forcing the startup out of federal AI projects.

Anthropic, which was the first lab to deploy its AI models across the DoD’s classified networks, expressed disappointment over the decision and said it intends to pursue legal action against the Pentagon. Company representatives noted that they had been negotiating safety and operational terms with the agency, but that discussions collapsed amid criticism from government officials who accused the company of being overly cautious.

In a Thursday internal memo, Altman assured OpenAI employees that the company shared the same “red lines” as Anthropic. He emphasized that OpenAI’s deployment will adhere to two core safety principles: a prohibition on domestic mass surveillance, and human responsibility for any use of force, including in autonomous weapon systems.

OpenAI plans to implement technical safeguards to ensure responsible AI behavior and will deploy personnel to assist the DoD in managing its AI models safely.

The exact reasons why the Pentagon selected OpenAI over Anthropic remain unclear, but industry observers suggest that Anthropic’s strict safety measures and caution regarding AI use may have influenced the decision.

This development marks a significant escalation in tensions between U.S. authorities and AI startups, highlighting the challenges of balancing national security concerns with AI safety and ethics.

Sabica Tahira

Experienced Content Writer & Creative Strategist

I am an experienced writer passionate about creating engaging, research-driven content across technology, AI, fintech, and cryptocurrency. My goal is to inform, inspire, and connect audiences through impactful storytelling while helping brands build trust and a strong digital presence.