OpenAI changes deal with Pentagon as critics warn of surveillance


OpenAI CEO Sam Altman unveiled a restructured deal with the Pentagon on Monday night to govern the Defense Department’s use of the company’s AI services, which he says provides strong guarantees that the military won’t use OpenAI’s systems for domestic surveillance.

According to a post on OpenAI’s website, the new agreement states that “AI systems cannot be used for intentional domestic surveillance of US persons and nationals.” OpenAI faced backlash when news of an initial deal between a major AI company and the Pentagon emerged on Friday. Many observers claimed that the contract language shared on OpenAI’s website left enough loopholes for the government to monitor Americans.

The move comes after weeks of heated discussions between rival AI company Anthropic and the Pentagon over how the military could use advanced AI systems. Although the Defense Department wanted Anthropic to agree that its systems could be used for “any lawful purpose,” Anthropic maintained that its systems could not be used for domestic surveillance or to control lethal autonomous weapons. Until last week, Anthropic was the only major AI company whose services were in active use on classified networks.

Without guardrails, AI could allow authorities to sift through mountains of digital data to track people’s movements and behavior with unprecedented speed and accuracy, researchers argue.

“This is critical to protecting Americans’ civil liberties,” Altman wrote in a Monday night post announcing the new deal’s language, which he said would effectively limit domestic surveillance. “The Department has confirmed that our services are not used by military intelligence agencies (e.g., the NSA).”

Katrina Mulligan, head of national security partnerships for OpenAI, added in another post on Tuesday morning that “defense intelligence units are excluded from this deal,” while noting that the company is open to future work with the NSA “if the proper safeguards are in place.”

OpenAI did not respond to a request for comment.

Many observers were unsettled on Tuesday, concerned that pieces of OpenAI’s deal with the Pentagon published by the company remained deliberately vague and provided carveouts for domestic surveillance by various intelligence agencies within the Defense Department. The full text of the agreement has not been publicly released.

“OpenAI said the War Department has contractually agreed not to use ChatGPT in agencies that monitor the American people,” said Brad Carson, a former congressman and Army general counsel who now leads the Washington DC policy group Americans for Responsible Innovation. “They are happy to point out contract language when it benefits them, but they refuse to release this contract provision to the public.”

“I’ve reluctantly come to the conclusion that this provision doesn’t really exist and they’re trying to fake it,” Carson told NBC News. Carson recently founded an AI-focused super PAC that received $20 million from OpenAI competitor Anthropic.

Several legal experts agreed that more transparency about the entire contract and any other important clauses is necessary to properly evaluate the company’s claims.

“We’ve yet to see the full agreement, so it’s hard to say anything with a reasonable level of confidence,” said Brian McGrail, senior adviser at the Center for AI Safety, a nonprofit research and advocacy group. “It’s definitely a step in the right direction, and I want to give OpenAI some credit.”

The OpenAI deal with the Pentagon was announced shortly after Defense Secretary Pete Hegseth said he would label rival AI company Anthropic, which was in contract negotiations with the Pentagon, a supply chain risk to national security. The designation, which would force the Pentagon and its contractors to stop using Anthropic’s services for defense purposes, has never before been publicly applied to an American company, Anthropic said.

At an event in Sausalito, California, on Monday, retired General Paul Nakasone, former director of the National Security Agency and US Cyber Command, said the Pentagon should work to integrate technology from all major American AI companies into national defense.

“We need Anthropic, we need OpenAI, we need all our big language modeling companies to partner with our government,” Nakasone, who is a member of OpenAI’s board of directors, said at a conference sponsored by the Aspen Institute. “I don’t think the supply chain piece is good. The discussions over the weekend and the length of those discussions were tough for me to listen to. As an American citizen, someone who has served in government, I think this is not right, right? It’s not a supply chain risk.”

Anthropic has long maintained that its AI systems cannot be used for domestic mass surveillance or autonomous weapons, although in December it made concessions allowing the military to use its systems for cyber and missile defense purposes. After a meeting between Anthropic CEO Dario Amodei and Hegseth last Tuesday, the Defense Department gave Anthropic an ultimatum to reach a deal by 5 p.m. Friday.

However, on Thursday, an Anthropic spokesperson told NBC News that the Defense Department’s latest “compromise language is laced with legalese that allows those safeguards to be overridden at will.”

But as Anthropic’s relationship with the Defense Department frayed, OpenAI’s deepened with Friday’s deal announcement, adding a new round of intrigue to a story that had already captivated much of the tech and defense community. In his post Monday night, Altman acknowledged that the rush to ink the deal made the negotiations seem “opportunistic and sloppy,” even though OpenAI is “sincerely trying to de-escalate things and avoid a much worse outcome.”

Over the weekend and earlier this week, an army of legal experts reviewed the latest public contract language from OpenAI, trying to determine whether the company’s terms actually included any substantive protections beyond the Defense Department’s “any lawful use” standard.

“I’m confused as to why the Pentagon would accept this language when it tried to nuke Anthropic for asking for something similar,” Charlie Bullock, a senior research fellow at the Institute for Law and AI think tank, wrote on X after the updated language emerged.

Many legal experts argue that every word in a contract carries significant weight, because, they say, the government reads contract terms as broadly as possible.

“The pattern we see playing out over and over again in these surveillance debates is that the intelligence and national security community ends up interpreting the exemptions in a much broader fashion than any normal reasonable person would,” McGrail said. “And because so much of it is secret, the public has limited visibility to push back.”

“So could there be some new loophole to exploit here that we’re not seeing? It’s entirely possible,” McGrail added.

Experts have focused on whether the agreement is permanently anchored in today’s notions of legality, as they worry that the government could change the boundaries of “any lawful use” by issuing new executive orders or legal opinions.

Recent debate over the military’s use of AI for domestic surveillance has focused specifically on the government’s ability to use commercially available data in its operations, as other methods of spying on Americans require legal approval that is more difficult to obtain.

For years, companies that serve or display ads on phones or laptops have been able to compile targeted data about users, including precise location data, and sell that information to various government agencies to identify individuals’ travel and behavior patterns.

Mulligan, OpenAI’s national security lead, said in an X post on Monday night that the agreement’s “new language reinforces that domestic surveillance, including commercially acquired information, is not permitted under this agreement.”

Sen. Ron Wyden, D-Ore., who has repeatedly warned in recent years that the federal government buys commercially available data on Americans for surveillance purposes, criticized the Pentagon for not acquiescing to Anthropic’s privacy concerns.

“The Department of Defense is throwing a fit at Anthropic asking for minimal ethical protections about how the DOD uses its product,” Wyden said in an emailed statement. “Given AI’s ability to turn disparate pieces of public or commercial data into increasingly revealing profiles of Americans, that’s serious cause for alarm. Location data, web browsing records, and information about mental health, political activities, and religious affiliations are available for pennies on the open market and can be used to target Americans for doing things that are perfectly legal.”

“Creating AI profiles of Americans based on that data represents an expansion of mass surveillance that should not be allowed regardless of what the current, outdated laws on the books say.”

Anthropic CEO Amodei has repeatedly stressed the need for firm commitments from the Defense Department not to use AI to monitor Americans, because the law does not account for AI’s more powerful ability to analyze and parse vast amounts of data. Recent research has shown that individuals can be identified by today’s AI systems even when the underlying data is intentionally anonymized.

Protesters of OpenAI’s initial deal with the Pentagon gathered outside OpenAI’s San Francisco headquarters this weekend, leaving chalk messages encouraging employees to question the company’s choices, as uninstalls of OpenAI’s ChatGPT app surged after news of the deal.

Michael Horowitz, former deputy assistant secretary of defense for emerging capabilities and current professor of political science at the University of Pennsylvania, told NBC News that the dispute between the Pentagon and Anthropic goes beyond simple contract terms.

“This dispute reflects a breakdown in trust between Anthropic and the Pentagon, where Anthropic does not trust that the Pentagon will use its technology responsibly, and the Pentagon does not trust that Anthropic will allow it to use the technology in important national security use cases,” Horowitz said. “Part of it is cultural differences, part of it is politics, part of it is personalities.”
