AI May Be Giving US a Deadly Edge in Iran War – But There Are Risks


Forget science fiction. The era of AI in combat is here.

Israel has used AI systems in Gaza to help flag potential targets and prioritise operations.

The United States military has reportedly used Anthropic's Claude model during its operation to capture Nicolás Maduro in Venezuela.

And even after Anthropic ran into difficulties with the US administration over exactly how AI should be used in war, the US military has apparently continued to use Claude in its strikes on Iran.


Experts say the missiles flying over Tehran today are being targeted by AI-powered systems.

“AI is changing the nature of modern warfare in the 21st century. It’s hard to overstate the impact it is having and will have,” says Craig Jones, senior lecturer in political geography at Newcastle University.

“It’s a potentially terrifying scenario.”

Scary or not, there seems to be no going back. If you want a sense of the importance the US military places on AI, a good place to start is the memo Defense Secretary Pete Hegseth – now styled the Secretary of War – sent to all senior military leaders earlier this year.

“I direct the Department of War to accelerate America’s military AI dominance by becoming an ‘AI-first’ warfighting force in all units from the front to the rear,” Mr. Hegseth wrote.

It’s not an experiment, it’s a mandate – to adopt AI quickly and at scale.

Or as Hegseth says: “Speed wins”.

It is possible that the US is already using AI to inform its missile strikes. Image: AP/CentCom

Yet the way AI is being used is probably not what first comes to mind.

Yes, autonomy is increasing in some areas. In Ukraine, for example, there are drones capable of continuing operations even after losing contact with their human operator.

But we’re not at the point where autonomous killer robots stalk the battlefield.

“We’re not in the Terminator age yet,” says David Leslie, a professor of ethics, technology and society at Queen Mary University of London.

The systems into which AI is being embedded are advisers, known as “decision support systems” in military parlance, that identify targets, flag threats and suggest priorities.

AI systems can sift through satellite imagery, intercepted communications, logistics data and social media streams — thousands, even hundreds of thousands of inputs — and surface patterns much faster than any human team could.

They help cut through the fog of war, allowing commanders to focus resources on what matters most, while being more precise than exhausted, overstressed human soldiers.

This means they are not just a tool, says Dr Jones, but a new way of making decisions.

“AI, as we see it in our lives, is like infrastructure,” he says. “It’s built into the system.”

“We have the ability to collect that surveillance, which we’ve been doing for a few years.

“But now AI has the ability to act on it and kill Iran’s leader and take out serious opponents and serious enemies and find them in a way they’re unlikely to have been found before.”

‘A very persuasive tool’

Professor Leslie agrees that the new systems are extremely efficient from a military perspective.

“Speed is driving this race,” he says. “Speeding up decision-making cycles brings the military advantage of lethality.”

A key feature of decision support systems is that the AI does not push the button. A human does. That is the central promise in discussions about military AI: there is always a “human in the loop”.

As OpenAI, the company that makes ChatGPT, put it after announcing a partnership to supply the Pentagon with AI: “We’ve cleared forward-assigned OpenAI engineers who will assist the government with cleared safety and alignment researchers in the loop.”

OpenAI has emphasized that it has secured an agreement with the Pentagon that its technology cannot be used in ways that cross three “red lines”: mass domestic surveillance, direct autonomous weapons systems and high-stakes automated decision-making.

But even with a human in the loop, one question remains.




When you’re fighting a war, can a person really review every decision from an AI? When time is compressed and information is incomplete, what does “human supervision” mean?

“Humans are technically in the loop,” says Dr Jones.

“In my opinion, that doesn’t mean they’re in the loop enough to have effective decision-making power and oversight of exactly what happened. AI … is a very persuasive tool for people making decisions.”

Or as Professor Leslie puts it: “We’re really running a risk of potential scale… because of the rubber stamping, the speed, you don’t have active human, critical human engagement to assess the recommendations that are coming out of these systems.”

And then there’s the question of AI’s own error.


Testing by Sky News found that neither Claude nor ChatGPT could say how many legs a chicken has if the image doesn’t look the way they expect.

What’s more, the AIs insisted they were right even when they were clearly wrong.

The example comes from a paper that describes dozens of examples of similar failures. “This is not a one-off example of animal legs,” said lead author Anh Vo.



“The problem is common across different types of data and tasks,” Vo added.

The reason is that AI systems don’t really see the world the way humans do – they just guess what’s most likely based on past data.

Most of the time, that kind of statistical reasoning is surprisingly effective. The world is predictable enough for the probabilities to work.

But some environments are by their very nature unpredictable and high stakes.

We are about to test the limits of this technology in the most unforgiving of circumstances.
