The War Department is pushing for the repeal of previously agreed limits on surveillance and autonomous weapons
Anthropic, one of Silicon Valley’s leading artificial intelligence firms, is locked in a standoff with the Pentagon over how powerful AI systems can be used for warfare and surveillance.
The controversy centers on Anthropic’s Claude chatbot, which runs on US military classified networks and was reportedly used to plan an operation to capture Venezuelan President Nicolas Maduro.
The War Department (formerly the Defense Department) designated the company a “Supply Chain Risk” at 5:01 pm Eastern Time (22:01 GMT) on Friday after it ignored an ultimatum to lift key safeguards.
Read more:
Pentagon Designates Major AI Contractors as ‘National Security Risk’
President Donald Trump simultaneously ordered all federal agencies to stop using Anthropic’s technology, threatening severe legal consequences if the company refused to cooperate during the six-month phase-out period.
Why the Claude chatbot matters to the Pentagon

Claude is deeply embedded in US defense workflows. The company says its models are already used by national security agencies for intelligence analysis, simulations, operational planning, cyber operations and other “mission-critical” functions.
Anthropic was the first AI firm to deploy systems on the Pentagon’s classified networks, signing a $200 million contract with the War Department last summer.
Other major AI providers have so far only reached agreements to run their models on the military’s unclassified systems, placing Claude in a privileged position within the US defense establishment.
What are the Pentagon’s demands?
Anthropic’s acceptable-use policy for the War Department contains clear prohibitions on using Claude for mass domestic surveillance and fully autonomous weapons. Those contractual safeguards reflect the company’s internal rules.

The Pentagon has called for those limits to be lifted. Officials say the system should be usable “for all legal purposes,” and according to US media, the department has pressed Anthropic to provide a “pure” version of the model stripped of moral and ethical constraints.
“You cannot conduct tactical operations with exceptions,” an unnamed Pentagon official told CNN, insisting that “legality is the Pentagon’s responsibility as the end user.” The military argues that it cannot find itself in a crisis having to ask a private contractor for permission to remove the guardrails.
US Secretary of War Pete Hegseth, who met with Anthropic CEO Dario Amodei this week, has publicly complained that the Pentagon doesn’t need a neural network that “can’t fight,” and threatened to designate Anthropic a “Supply Chain Risk” – a label usually reserved for organizations seen as an extension of foreign adversaries.
Read more:
Top AIs deploy nukes in 95% of war game simulations – study
Pentagon spokesman Sean Parnell defended the military’s position: “We have no interest in using AI to conduct mass surveillance of Americans (which is illegal), nor do we want to use AI to develop autonomous weapons that operate without human involvement,” but emphasized: “We don’t allow any company to dictate the rules on how we make operational decisions.”
What are Anthropic’s red lines?
Anthropic says it is willing to work with US national security agencies, but maintains two key restrictions on how its systems can be used.

“Threats do not change our position: we cannot in good conscience accede to their request,” Amodei said in a statement Thursday, adding that the Pentagon’s demands “never were included in our contracts … and we believe they should not be included now.”
The company has set two clear red lines for its AI, declaring that it will not support mass domestic surveillance or fully autonomous weapons. It argues that mass surveillance of Americans is “incompatible with democratic values” and that today’s models are “not reliable enough” to make lethal decisions without human control.
Amodei insisted that these carve-outs do not prevent the US military from using Claude for other “mission-critical” tasks, and the company says it still wants to support US national security – but not at the expense of enabling mass domestic surveillance or outright autonomous killing.
Can Anthropic survive the blacklist?
Amodei says the War Department warned that if Anthropic kept its safeguards, it would be removed from military systems and hit with the aforementioned “Supply Chain Risk” label – a designation never before applied to an American company.
For Anthropic, which is valued at nearly $400 billion, losing a $200 million deal is negligible, but the label itself could prove far more damaging. Any company doing business with the Pentagon must prove its own systems don’t rely on Anthropic’s technology, which could complicate or chill big business deals with firms that supply the US military.
For the War Department, cutting ties is also costly. Officials will have to replace internal tools built around Claude. Elon Musk’s Grok AI system is being prepared for “use in a classified setting,” a Pentagon source told US media, while acknowledging that Grok is not considered as advanced as Anthropic’s model.
Will Silicon Valley push back?
Anthropic’s stance has sparked an unusual wave of public support in Silicon Valley. Late Thursday, hundreds of current employees at Google and OpenAI – two of Anthropic’s main rivals, which also supply AI models to the US military – signed an open letter supporting the company’s refusal to comply with the Pentagon’s demands.

The letter, titled ‘We shall not be divided’, had been publicly signed by 421 Google and 76 OpenAI employees as of Friday. Citing US media reports, it alleged that the War Department had targeted Anthropic for “sticking to their red lines of not allowing their models to be used for domestic mass surveillance or to autonomously kill people without human supervision.”
“The Pentagon is in talks with Google and OpenAI following Anthropic’s refusal,” the signatories wrote, accusing the authorities of trying to divide the companies by playing on “each company’s fear that the other will make the offer.” The letter calls on the leadership of the two organizations to “put aside their differences and stand together in continuing to refuse” the War Department’s demands.
What the showdown means for AI
The clash between Anthropic and the Pentagon has drawn interest from technology and defense analysts, who warn it could set a precedent for how powerful AI will be wielded in future conflicts. Adam Conner, vice president of technology policy at the Center for American Progress, told US media that the dispute is likely to be read industry-wide as a sign that defense officials do not want contractual limits on how military users can deploy advanced models.
The Pentagon’s move marks a historic shift, observers say, turning one of America’s most advanced commercial AI products into an outcast within its own defense ecosystem. Gregory Allen, a senior adviser at the Center for Strategic and International Studies, argued that treating Anthropic this way would burn one of the US tech sector’s “crown jewels” at a time when Washington is comparing the AI race with China to the space race with the Soviet Union. He suggested that there are better ways to resolve the dispute than the “totalitarian” stance taken by the Trump administration.






