Artificial intelligence company Anthropic sues the US Department of Defense for blacklisting it | Technology


Anthropic filed two lawsuits against the Department of Defense on Monday, alleging that the government’s decision to label the artificial intelligence company a “supply chain risk” was illegal and violated its First Amendment rights. The two sides have been locked in a months-long heated dispute over the company’s attempt to implement safeguards against the military’s potential use of its artificial intelligence models for mass domestic surveillance or fully autonomous lethal weapons.

The lawsuits, which Anthropic filed in the US District Court for the Northern District of California and the US Court of Appeals for the District of Columbia Circuit, come after the Pentagon formally issued the supply chain risk designation last Thursday, the first time the blacklisting tool has been used against a US company. The AI firm previously vowed to challenge the designation and its demand that any company doing business with the government cut all ties with Anthropic, a serious threat to its business model.

Anthropic’s lawsuit contends that the Trump administration is punishing the company for refusing to comply with the government’s ideological demands, in violation of its protected speech.

“These actions are unprecedented and illegal. The Constitution does not allow the government to exercise its enormous power to punish a company for its protected speech,” Anthropic said in its California lawsuit.

Anthropic’s AI model, called Claude, has been deeply integrated into the Department of Defense over the past year. Until recently, Claude was also the only AI model approved for use in classified systems. The Department of Defense has reportedly used it extensively in its military operations, including deciding where to target missile strikes in its war against Iran.

Anthropic emphasized in its lawsuit that it remains committed to providing AI for national security purposes. The company stated in its California lawsuit that it had previously collaborated with the Department of Defense to modify its systems for unique use cases, and, according to a statement, it wants to continue its negotiations with the government.

“Seeking judicial review does not change our long-standing commitment to harnessing AI to protect our national security, but this is a necessary step to protect our business, our customers and our partners,” an Anthropic spokesperson said in a statement to The Guardian. “We will continue to pursue all avenues toward resolution, including dialogue with the government.”

The AI firm alleged in the lawsuit that punitive actions by the Trump administration and the Pentagon are “irreparably harming Anthropic,” an allegation that somewhat contradicts CEO Dario Amodei, who told CBS News last week that “the impact of this designation is quite small” and that the company “was going to be fine.”

“Defendants seek to destroy the economic value created by one of the world’s fastest-growing private companies, which is a leader in the responsible development of an emerging technology of vital importance to our nation,” Anthropic alleged in its lawsuit.

The Defense Department did not immediately respond to a request for comment.
