March 6, 2026
How does the Pentagon evict Claude?
Replacing one AI model on a classified network with another takes minutes. Retraining people who have learned to trust it will take much longer

The Department of Defense is phasing Anthropic's Claude out of its classified networks within six months, triggering a complex transition for military personnel.
AFP/Stringer/Getty Images
The Pentagon has put Anthropic on the clock. On Thursday the Department of Defense formally notified the company that it has been deemed a “supply chain risk,” a label that turns its artificial intelligence systems, including its flagship model, Claude, into a liability.
The move escalates a dispute that has raged for weeks over Anthropic’s safety-first ethos — its commitment to limiting how its technology is deployed — and DOD’s demands for unfettered control.
The Pentagon is phasing Claude, one of the world’s most advanced AI models, out of its classified networks within six months. On paper, replacing one model with another is quick. “It’s easy to swap out the models and install new ones,” says a source close to Palantir, the defense technology giant that has partnered with Anthropic to host Claude on secure military networks.
The hardest part begins after the model is gone: rewiring everything that was built around it.
Claude is what is known as a frontier model, an artificial intelligence capable of performing complex multi-step tasks on its own. That is not how DOD currently uses it. Lauren Kahn, a researcher at Georgetown University’s Center for Security and Emerging Technology and a former Pentagon official, describes the deployment as more like a chatbot than a free-roaming agent. Claude sits “on top” of existing software, she says, appearing only in certain places — tightly controlled corners of a classified environment. And it’s not connected to “effectors,” she says, meaning it can’t “launch an effect” — a weapon command, for example — “in the real world.”
In late 2024 Anthropic became the first AI company to clear the Pentagon’s hurdles for classified use. Until recently, Claude was the only major language model known to operate in that environment. Accessed via tools such as Claude Gov — which became a preferred option for some defense personnel, according to Bloomberg — the system draws on vast data pipelines to turn a flood of unstructured information into readable intelligence. In other words, Claude summarizes information for the Department of Defense, but it cannot pull a trigger.
Once people trust a tool, it can be hard to let it go. Each integration has to be rebuilt bit by bit. And whatever replaces Claude must clear rigorous security reviews and clearances before it touches a classified system. Software changes inside the Pentagon can be “excruciating,” Kahn says. Even something as simple as installing Microsoft Office “takes months and months and months.”
At press time Anthropic had not responded to multiple requests for comment from Scientific American. The Department of Defense declined to discuss the details of the transition.
Unlearning Claude
Each AI model fails in its own characteristic ways. Operators who have spent months using Claude have learned those idiosyncrasies through trial and error: which prompts lead it astray, which outputs deserve a second look.
Kahn studies automation bias, the tendency of human operators to over-delegate to machines. “I worry about a slightly increased risk of automation bias in the early stages as they work out the kinks,” she says. People will be watching for Claude’s familiar mistakes while the replacement model makes new ones. The personnel most exposed to the transition will be the power users who built the most customized workflows and learned the model’s limitations well enough to leverage its strengths.
As Pentagon personnel prepare for the operational transition, the messy details of the political conflict have become visible to the public. Late Thursday, Anthropic CEO Dario Amodei published a blog post vowing to challenge the government’s “supply chain risk” designation in court, arguing that the statute is usually reserved for foreign adversaries. Behind the scenes, the conflict seems to have developed into a game of chicken. Emil Michael, the Pentagon official who has led the department’s negotiations with Anthropic, wrote on X that talks with the company have stalled. And Amodei is reportedly fighting to revive them.
Meanwhile, the DOD is already moving on. Within hours of Anthropic’s official blacklisting, OpenAI announced that it had signed an agreement to deploy its models on the military’s classified network, securing the contract its rival had just lost.
Anthropic was willing to risk eviction from the US government rather than compromise its safety-first principles. Its replacement initially accepted the Pentagon’s demands for unfettered operational flexibility, only to quickly add the very oversight guardrails Anthropic had advocated once OpenAI CEO Sam Altman faced massive internal and public backlash. The swap may not be so simple after all.






