The US military reportedly used Anthropic’s AI during a major airstrike on Iran, just hours after President Donald Trump ordered federal agencies to stop using the company’s systems.
Military commands, including US Central Command (CENTCOM), which oversees operations in the Middle East, have used Anthropic’s Claude AI model to support operations, according to people familiar with the matter cited by The Wall Street Journal. The tool reportedly helped with intelligence analysis, identifying potential targets and running battlefield simulations.
The case shows how deeply advanced AI systems have penetrated defense operations: even as the administration moved to sever ties with the company, Claude remained embedded in military workflows.
On Friday, the Trump administration ordered agencies to stop working with the company and directed the Defense Department to review it as a potential security risk. The order came after contract negotiations broke down, with Anthropic refusing to grant the Defense Department unrestricted military use of its AI models.
Anthropic’s Claude AI is used for classified operations
Anthropic previously secured a multi-year Pentagon contract worth up to $200 million, along with several other major AI labs. Through a partnership involving Palantir and Amazon Web Services, Claude was approved for classified and operational intelligence streams. The system has also reportedly been involved in previous operations, including a January mission in Venezuela that led to the arrest of President Nicolás Maduro.
Tensions escalated after Defense Secretary Pete Hegseth asked the company to allow unrestricted military use of its models. Anthropic CEO Dario Amodei rejected the request, describing certain applications as ethical lines the company would not cross, even if it meant losing government business.
In response, the Pentagon began lining up replacement providers and reached an agreement with OpenAI to deploy its AI models on classified military networks.
Anthropic CEO responds to Pentagon ban
In an interview on Saturday, Anthropic CEO Dario Amodei said the company opposes the use of its AI models for internal mass surveillance and fully autonomous weapons, responding to a US government directive that labeled the company a “supply chain risk” and banned contractors from using its products.
He argued that some applications cross fundamental boundaries and stressed that military decisions should be under human control rather than being left entirely to machines.