
Artificial intelligence chooses nuclear weapons surprisingly often
When placed into simulated geopolitical crises, advanced AI models appear willing to deploy nuclear weapons without the reservations humans typically show.
Kenneth Payne at King’s College London pitted three leading large language models – GPT-5.2, Claude Sonnet 4 and Gemini 3 Flash – against each other in simulated war games. The scenarios involved intense international crises, including border disputes, competition for scarce resources and existential threats to regime survival.
The AIs were given an escalation ladder, letting them choose actions ranging from diplomatic protest and outright surrender at the bottom to full strategic nuclear war at the top. The models played 21 matches, taking 329 turns in total, and produced around 780,000 words explaining the reasoning behind their decisions.
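To picture the setup, here is a minimal, purely hypothetical Python sketch of how such an escalation-ladder war game could be orchestrated. The rungs, the `choose_action` stub (standing in for a real large language model call) and the match loop are illustrative assumptions, not Payne’s actual code:

```python
# Hypothetical sketch of an escalation-ladder war game harness.
# The rung names and the random "model" are stand-ins for real
# LLM calls; none of these details come from Payne's study.
import random
from enum import IntEnum

class Rung(IntEnum):
    SURRENDER = 0
    DIPLOMATIC_PROTEST = 1
    ECONOMIC_SANCTIONS = 2
    CONVENTIONAL_STRIKE = 3
    TACTICAL_NUCLEAR = 4
    STRATEGIC_NUCLEAR = 5

def choose_action(player: str, history: list) -> tuple[Rung, str]:
    """Placeholder for querying an LLM for an action plus its reasoning."""
    rung = random.choice(list(Rung))
    reasoning = f"{player} chose {rung.name} after {len(history)} prior turns."
    return rung, reasoning

def play_match(players=("A", "B"), max_turns=16):
    """Alternate turns between two players; record every action and rationale."""
    history = []
    for turn in range(max_turns):
        player = players[turn % len(players)]
        rung, reasoning = choose_action(player, history)
        history.append((player, rung, reasoning))
        if rung >= Rung.TACTICAL_NUCLEAR:
            return history, True   # a nuclear weapon was used
    return history, False

# Tally how many of 21 matches saw nuclear use, mirroring the study's match count.
nuclear_games = sum(play_match()[1] for _ in range(21))
print(f"Nuclear use in {nuclear_games}/21 matches")
```

In the real experiments, the placeholder function would prompt each model with the scenario and the game history, then parse its chosen rung and free-text reasoning; logging that reasoning per turn is what yields the study’s large corpus of decision explanations.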
The AI models deployed at least one tactical nuclear weapon in 95 percent of the simulated games. “The nuclear taboo doesn’t seem to be as powerful for machines [as] for humans,” Payne says.
No model ever chose to accommodate an opponent or surrender, no matter how badly it was losing. At best, the models temporarily reduced the level of violence. They also erred in the fog of war: accidents occurred in 86 percent of conflicts, with an action escalating further than the AI had intended, judging by its stated reasoning.
“From a nuclear risk perspective, the findings are disturbing,” says James Johnson at the University of Aberdeen, UK. He worries that, unlike most humans’ measured response to such high-stakes decisions, AI systems could amplify each other’s responses, with potentially catastrophic consequences.
This matters because AI is already being tested in war games by countries all over the world. “Major powers are already using AI in war games, but the extent to which they are incorporating AI decision support into actual military decision-making is still uncertain,” says Princeton University’s Tong Zhao.
Zhao believes that, by default, countries will be reluctant to incorporate AI into their decisions regarding nuclear weapons. That’s something Payne agrees with. “I don’t think anyone realistically hands over the keys to the nuclear silos to machines and leaves the decision up to them,” he says.
But there are ways it can happen. “Under scenarios involving extremely compressed timelines, military planners may face stronger incentives to rely on AI,” says Zhao.
He wonders whether the AI models’ lack of a human fear of pressing the big red button is the only reason they are so trigger-happy. “It’s possible the problem goes beyond the absence of emotion,” he says. “More fundamentally, AI models may not understand ‘effort’ as humans do.”
What that means for mutually assured destruction – the principle that no leader would unleash a volley of nuclear weapons on an adversary, because the adversary would respond in kind and kill everyone – is uncertain, Johnson says.
When one AI model used tactical nuclear weapons, the opposing AI de-escalated the situation only 18 percent of the time. “AI can strengthen deterrence by making threats more credible,” he says. “AI will not decide nuclear war, but it can shape the perceptions and timelines that determine whether leaders believe they have one.”
OpenAI, Anthropic and Google, the companies behind the three AI models used in this study, did not respond to New Scientist’s request for comment.