As the US military expands its use of AI tools to identify targets for airstrikes in Iran, members of Congress are calling for watchdogs and greater oversight of the technology’s use in warfare.
Two people with knowledge of the matter, who requested anonymity to discuss sensitive information, confirmed that the military is using AI systems from data analytics company Palantir to identify potential targets in the ongoing strikes. Palantir’s software relies in part on Claude, Anthropic’s AI model, and Anthropic has clashed with Pentagon leadership over limits on AI use as Defense Secretary Pete Hegseth aims to put artificial intelligence at the heart of America’s military operations.
Still, as AI plays a wider role on the battlefield, lawmakers are seeking greater focus on safeguards governing its use and greater transparency about how much control is given to the technology.
“We need a thorough, unbiased review to determine whether AI has already caused harm or danger in the war with Iran,” Rep. Jill Tokuda, D-Hawaii, a member of the House Armed Services Committee, told NBC News in response to questions about the use and reliability of AI in military contexts. “Human judgment must remain at the center of life or death decisions.”
The Department of Defense and major AI companies such as OpenAI and Anthropic have publicly stated that current AI systems cannot kill without human sign-off. But concerns remain that relying on AI for parts of analysis or decision-making could lead to mistakes in military operations.
The Pentagon’s chief spokesman, Sean Parnell, said in a Feb. 26 post on X that the military “does not want to use AI to develop autonomous weapons that operate without human involvement.”
The Department of Defense did not respond to questions about how it balances using AI to reduce human workloads against the need for humans to review AI-generated analysis and targeting suggestions for accuracy.
Lawmakers and independent experts who spoke to NBC News urged caution about the military’s use of such tools, calling for clear safeguards to ensure that humans remain involved in life-or-death decisions on the battlefield.
“AI tools are not 100% reliable — they can fail in subtle ways and yet operators continue to over-trust them,” said Rep. Sarah Jacobs, D-Calif., a member of the House Armed Services Committee.
“We have a responsibility to enforce strict safeguards on the military’s use of AI and ensure that a human is in the loop on every decision to use lethal force, because the cost of getting it wrong can be devastating to civilians and the service members who carry out these operations,” she said.
Claude, Anthropic’s AI model, was a critical component of Palantir’s Maven intelligence analysis program used in the US operation to capture Venezuelan President Nicolás Maduro. News of Claude’s role in recent military actions was first reported by The Wall Street Journal and The Washington Post.
But that role has been complicated by Anthropic’s clash with Hegseth after the company sought to prevent the military from using its AI for domestic surveillance and autonomous lethal weapons. Last week, the Department of Defense labeled Anthropic a national security threat, threatening to remove its technology from military use in the coming months. Anthropic filed suit to fight that designation.
Anthropic declined to comment. Palantir did not respond to a request for comment.
In a video posted to X on Wednesday, US Central Command leader Adm. Brad Cooper acknowledged that AI is an important tool in helping the US select targets in Iran.
“Our warfighters are using a variety of advanced AI tools. These systems help us sift through vast amounts of data in seconds so our leaders can cut through the noise and make smarter decisions faster than the enemy can react,” he said.
“Humans always make the final decisions about what to shoot and what not to shoot and when to shoot, but advanced AI tools can turn processes that take hours and sometimes days into seconds.”
The Trump administration has publicly embraced the use of technology across the military and government.
Rep. Pat Harrigan, R-N.C., said AI is already critical to quickly processing military intelligence, including on Iran.
“AI is a tool that helps our warfighters process vast amounts of data faster than any human can, and Operation Epic Fury, where we struck more than 2,000 targets with remarkable accuracy, is a testament to how these capabilities can be used responsibly and effectively,” Harrigan said in a statement to NBC News.
“But no AI system can replace the judgment, training and experience of the American warfighter. A human in the loop is not a formality, it’s a necessity, and nothing in how our military operates suggests otherwise,” he said.
None of the lawmakers contacted by NBC News said AI should be removed from military use entirely, though some said more oversight is needed.
Sen. Elissa Slotkin, D-Mich., a member of the Senate Armed Services Committee, said the Defense Department has not done enough to clarify how thoroughly humans are vetting AI-assisted or AI-generated military intelligence.
“It’s really up to the humans, and in this case the secretary of defense, to make sure humans aren’t made redundant for the foreseeable future, and we’re not confident of that,” she said.
Sen. Mark Warner, D-Va., the top Democrat on the Senate Intelligence Committee, has expressed concern about the military’s use of AI to help identify targets and unanswered questions about how the new technology is being used. “It needs to be addressed,” he told NBC News.
OpenAI and Anthropic, both of which have worked with the US military, say even their most sophisticated systems are flawed, and the world’s top AI researchers admit they don’t fully understand how major AI systems work.
In an interview with NBC News last month, Anthropic CEO Dario Amodei said: “I can’t tell you with 100% certainty that even the systems we build are completely reliable.”
A major OpenAI study published in September found that all major AI chatbots, which rely on systems known as large language models, “hallucinate,” or periodically fabricate answers.
Sen. Kirsten Gillibrand, D-N.Y., called for clearer rules on how the military can use AI.
“The Trump administration has already proven it is willing to undermine American law to prosecute an unpopular war,” she told NBC News. “There is little reason to believe that DOD would use AI responsibly without clear safeguards.”
Mark Beall, head of government affairs at the AI Policy Network, a Washington, D.C., think tank, and director of AI strategy and policy at the Pentagon from 2018 to 2020, said that while AI can streamline the process of deciding where to strike, humans still need to fully vet targets.
“There are a lot of steps before pulling the trigger. AI systems are being deployed very effectively to speed up existing workflows and enable commanders and analysts and planners to make better and faster decisions,” he added. “But when it comes to actually deploying weapons systems, this technology is not ready yet.”
“As these systems get really good and adversaries start using them, there will be more pressure to pare back human review of AI products so operations can move at useful and effective speeds,” Beall said. “We have to figure out how to solve this reliability problem before we get there. Whatever you think about lethal autonomous weapons, it’s in the whole world’s interest to make them safe and effective.”
Heidy Khlaaf, chief scientist at the AI Now Institute, a nonprofit that advocates for the ethical use of technology, said she worries that relying on AI to quickly process information for life-or-death decisions is a way for militaries to avoid accountability for mistakes.
“When you consider how imprecise these models are, it’s very dangerous that ‘speed’ is being strategically marketed here, when it really provides cover for indiscriminate targeting,” Khlaaf said.