A $200 million contract, two red lines, a presidential broadside, and a threat of bombs over Tehran: how one safety dispute with Anthropic became the defining confrontation over who controls the future of military AI.
As reported by The Guardian, the United States military used an artificial intelligence model developed by Anthropic to support operations during strikes on Iran, despite a federal order issued hours earlier halting use of the company’s technology, according to people familiar with the matter.
U.S. Central Command used Anthropic’s Claude for intelligence assessments, target identification, and simulation of combat scenarios during the operation. The model had been deployed across classified military and intelligence systems through partnerships with Palantir Technologies and Amazon Web Services, under contracts worth up to $200 million.
Big tech companies in the United States face a persistent difficulty: removing an AI system once it has been embedded deep inside live military infrastructure. Experts note that the true cost of replacing an embedded model runs far beyond any phase-out timeline, encompassing integration work, retraining, security re-certifications, and parallel testing across classified systems.
The fallout stems from a contractual dispute over the terms governing military use of Anthropic’s technology. The company had refused to remove safeguards preventing its model from being used for mass domestic surveillance or fully autonomous weapons systems, two restrictions it said it would not waive under any circumstances. Negotiations between Anthropic and the Pentagon collapsed last week without resolution.
The Defense Department designated Anthropic a supply-chain risk to national security following the breakdown. Anthropic called the designation unprecedented for an American company and said it intends to challenge it in court.
On Friday, just hours before the Iran attack began, Trump ordered all federal agencies to stop using Claude immediately. He denounced Anthropic on Truth Social as a “Radical Left AI company run by people who have no idea what the real World is all about.”
The Pentagon said Anthropic would continue supplying services for up to six months during a transition to alternative providers. OpenAI has since announced a separate deal with the Defense Department covering classified military networks.