
Defense Department Chief Technology Officer Emil Michael said Thursday that Anthropic’s Claude AI models would “pollute” the agency’s supply chain because they have “a different policy preference” built in.
“We cannot allow a company that has a different political preference, built into the model through its constitution, its soul, its political preferences, to contaminate the supply chain so that our warfighters get ineffective weapons, ineffective body armor, ineffective protection,” Michael told CNBC’s “Squawk Box.” “That’s really where the supply chain risk designation came from.”
Anthropic is the first American company to be publicly labeled a supply chain risk, an extraordinary measure that has historically been reserved for foreign adversaries. The designation will require defense contractors and suppliers to certify that they do not use Claude in their work with the Pentagon.
The startup sued the Trump administration on Monday, calling the government’s actions “unprecedented and illegal.” Anthropic said in a filing that the company was suffering “irreparable” damage and that hundreds of millions of dollars in contracts were at risk.
“This is not intended to be a punishment,” Michael said Thursday.
He added that Anthropic has a “huge commercial business” and that a “small fraction” comes from the US government. Michael also dismissed Anthropic’s claim that the government had actively reached out to companies and told them not to use Anthropic, calling the notion “rumors.”
“The War Department doesn’t approach companies to tell them what to do as long as it’s not in our supply chain,” he said.
Anthropic was founded in 2021 by a group of researchers and executives who defected from OpenAI. The company is best known for its Claude family of models and found early success selling to large enterprises, including the DOD.