Anthropic is suing the U.S. Department of Defense and other federal agencies after the Trump administration designated it as a “supply chain threat” and restricted its business with defense contractors.
The designation comes after negotiations broke down over Anthropic’s refusal to allow its AI systems to be used for mass surveillance of Americans or for autonomous weapons, prompting the government to stop accepting its systems and jeopardizing a Pentagon deal worth up to $200 million.
The Pentagon insists that it must be able to use Anthropic’s AI models for all legitimate purposes.
The Financial Times reported last week that Anthropic CEO Dario Amodei had held last-minute talks with defense chiefs to defuse tensions, but the effort failed to prevent the formal blacklisting.
The San Francisco-based company argues that the classification lacks legal merit, saying the lawsuit is necessary to protect its business and partnerships as its dispute with the government continues.
“Seeking judicial review does not change our longstanding commitment to using AI to protect our national security, but it is a necessary step to protect our business, our customers and our partners,” an Anthropic spokesperson told CNN.
Anthropic’s consumer business has shown resilience despite the government controversy.
The company’s Claude app surpassed OpenAI’s ChatGPT in the Apple App Store rankings for the first time, immediately after news broke that the Pentagon was terminating the contract.
In early March, Anthropic reported that more than one million users were signing up for Claude every day.
After the Pentagon classified the company as a supply chain threat, Google confirmed that it will continue to provide Anthropic’s AI technology to its cloud customers for non-defense purposes.
Microsoft issued a similar statement, while Amazon also said it would continue to use Anthropic services outside of defense.