Why the Pentagon just called Anthropic a ‘national security risk’


Anthropic said it was seeking to prohibit only two high-level uses of its technology: mass surveillance of Americans and fully autonomous weapons. (AP Photo)

Anthropic is suing the Trump administration, asking federal courts to overturn the Pentagon’s decision designating the AI company as a “supply chain risk” for its refusal to allow unrestricted military use of its technology.

Anthropic filed two separate lawsuits on Monday, one in federal court in California and another in the federal appeals court in Washington, D.C., each challenging different aspects of the Pentagon’s actions against the company.

Last week, the Pentagon formally designated the San Francisco tech company as a supply chain risk after an unusually public dispute over how its AI chatbot Claude could be used in war.

“These actions are unprecedented and unlawful,” Anthropic’s lawsuit says. “The Constitution does not allow the government to exercise its enormous power to punish a company for its protected speech. No federal statute authorizes the actions taken here. Anthropic turns to the judiciary as a last resort to vindicate its rights and stop the Executive’s illegal campaign of retaliation.”

The Defense Department declined to comment Monday.

Anthropic said it was seeking to prohibit only two high-level uses of its technology: mass surveillance of Americans and fully autonomous weapons. Defense Secretary Pete Hegseth and other officials insisted that the company must accept “all legal uses” of Claude and threatened punishment if the company did not comply.


Designating the company as a supply chain risk disrupts Anthropic’s defense work using authority that was designed to prevent foreign adversaries from damaging national security systems. It was the first time the federal government is known to have used the designation against a U.S. company.

President Donald Trump also said he would order federal agencies to stop using Claude, although he gave the Pentagon six months to phase out a product that is deeply embedded in classified military systems, including those used in the Iran war.

Even as it fights the Pentagon’s actions, Anthropic has tried to convince companies and other government agencies that the Trump administration’s penalty is limited and only affects military contractors when they use Claude to work for the Department of Defense.

Making that distinction clear is crucial for privately held Anthropic because most of its projected $14 billion in revenue this year comes from companies and government agencies that use Claude for computer coding and other tasks.


More than 500 clients are paying Anthropic at least $1 million a year for Claude, and a recent investment had valued the company at $380 billion.
