Anthropic said Thursday that the Defense Department has designated it a national security threat, a surprising move that bans the company from doing business with the U.S. military and could send shockwaves through the U.S. artificial intelligence industry.
The designation, which the company said it received on Wednesday and which specifically labels Anthropic as a “national security supply chain risk,” requires the Pentagon and its contractors to stop using Anthropic’s artificial intelligence services for all defense business.
Defense Secretary Pete Hegseth first telegraphed the move last Friday afternoon in a post on X.
The move comes after months of increasingly tense negotiations over how the military should be able to use Anthropic’s Claude AI systems. Although a relatively new technology, generative AI models like Claude have been quickly adopted by the Trump administration, including for military use.
Over the past few months, the Pentagon has been negotiating new contract terms with Anthropic, along with other leading US AI companies, to enable broader military use of AI. While the Pentagon has sought to leverage powerful AI systems for “any lawful use,” Anthropic CEO Dario Amodei wanted stronger assurances that the Pentagon would not use its AI technology for deadly autonomous weapons or mass domestic surveillance.
Amodei confirmed the supply chain risk label in a statement late Thursday and said the company disagreed with it, writing “we do not believe this action is legally sound and we see no option but to challenge it in court.”
“Anthropic has much more in common with the War Department than it does differences,” he wrote in the statement. “We are both committed to advancing America’s national security and defending the American people, and we agree on the urgency of applying AI across government. All of our future decisions will arise from that shared premise.”
Until last week, Anthropic was the only artificial intelligence company whose services were authorized for use on the Department of Defense’s classified networks. Hours after Hegseth announced that he would seek to label Anthropic as a supply chain risk last week, OpenAI CEO Sam Altman announced that his company had reached a new agreement with the Pentagon to use OpenAI’s services in classified environments, which could allow OpenAI to replace much of Anthropic’s current business with the Pentagon.
Elon Musk’s xAI and its Grok AI systems also reached a deal with the Pentagon last week to be cleared for use on classified networks.
In the statement posted on Anthropic’s website Thursday evening, Amodei emphasized that the ban on the company’s business with the military did not apply to contracts with military suppliers for non-defense purposes. Anthropic has extensive commercial agreements with many of the top US technology companies, including Amazon and Microsoft, many of which also have major contracts with the Pentagon.
A senior Defense Department official confirmed that the supply chain risk determination took effect immediately. “From the beginning,” the official told NBC News on Thursday, “this has been a fundamental principle: that the military can use technology for all legal purposes. The military will not allow a supplier to insert itself into the chain of command by restricting the legal use of a critical capability and putting our warfighters at risk.”
In his post announcing the move last Friday, Hegseth wrote that Anthropic would “continue to provide its services to the War Department for a period of no more than six months to allow for a seamless transition to better, more patriotic service.”
“Anthropic provided a master class in arrogance and betrayal, as well as a textbook case on how not to do business with the United States government or the Pentagon,” Hegseth said in the post. “Our position has never wavered and never will waver: the War Department must have full and unrestricted access to Anthropic models for every LEGAL purpose in defense of the Republic.”
In a statement last week during tense contract negotiations and before Hegseth announced the move, Amodei noted that the supply chain risk label, typically reserved for foreign adversaries and their partner companies, had “never before been applied to a U.S. company.”
Several legal observers have said the designation likely won’t hold up legally and is instead intended to warn other companies to follow the Pentagon’s line.
The mere threat of such a designation has already upset Washington and the technology industry. Fearing the fallout from the potential decision on supply chain risk, defense experts, Anthropic’s rival OpenAI, and members of Congress had sought to cool tensions between Anthropic and the Pentagon throughout this week.
An influential technology advocacy group, whose members include Nvidia and Apple, sent a letter to Hegseth on Wednesday urging him to refrain from officially applying the supply chain risk label.
Many industry investors fear that by targeting one of the largest and most successful AI companies in the United States, the Department of Defense is setting a dangerous precedent that will scare away investment and chill the US AI industry.
Last Friday, just over an hour before the 5 p.m. ET deadline to reach a deal set by Hegseth earlier in the week, President Donald Trump said he would take steps to exclude Anthropic from other federal agencies.
“The leftist nuts at Anthropic have made a DISASTROUS MISTAKE by trying to STRONG ARM the War Department and force them to obey their Terms of Service instead of our Constitution,” Trump wrote.
The Pentagon already uses Anthropic’s Claude systems as part of an agreement with data analytics company Palantir. According to recent reports from The Washington Post and The Wall Street Journal, Anthropic’s AI systems have been used to help forces assess intelligence and identify targets in the ongoing war in Iran. NBC News has not confirmed those reports.
Anthropic closed its first deal with Palantir in 2024, allowing the Department of Defense to use Anthropic’s services on classified networks, and in July it was awarded another $200 million contract to advance “prototypes of frontier AI capabilities that advance U.S. national security.”
In previous negotiating rounds, Anthropic had agreed to allow the Pentagon to use its artificial intelligence systems for cyber and missile defense purposes.
Some experts noted an apparent disconnect between labeling one of America’s largest AI companies as a national security supply chain risk and refraining from applying the same label to DeepSeek, a leading Chinese AI company that has been accused of unfair practices. DeepSeek did not respond to requests for comment from several news organizations at the time.
“We are treating an American AI company worse than an AI company controlled by the Chinese Communist Party,” said Michael Sobolik, an expert on AI and China issues and senior fellow at the Hudson Institute. “We cannot hobble the most innovative and successful American companies for asking quintessentially American questions about military use and privacy.”
“The US government risks cutting the legs off one of our best AI companies in the early years of this AI race,” Sobolik continued. “If we do that at a moment when US frontier models are qualitatively and quantitatively better than China’s, it seems like cutting off our nose to spite our face.”
Tim Fist, director of emerging technology at the Washington, D.C.-based think tank Institute for Progress, said the new designation would be counterproductive to America’s AI aspirations.
“The supply chain risk designation, typically used on foreign adversaries, is hurting one of America’s leading artificial intelligence companies and making other companies much more reluctant to work with the federal government,” Fist said in written comments. “The designation harms the AI industry and therefore US national security with virtually no benefit.”