The impact of artificial intelligence on nuclear decision making



In a data center (above), servers are high-performance computers that process and store data. Will AI fuel a new era of nuclear energy? Credit: Unsplash/Taylor Vick

The United Nations has taken a firm stance that decisions on the use of nuclear weapons should rest with humans, not machines, warning that the integration of Artificial Intelligence (AI) into nuclear command, control and communications (NC3) presents an unacceptable risk to global security.

UNITED NATIONS, Mar 6, 2026 (IPS) – As artificial intelligence (AI) threatens to dominate all aspects of human life, including the political, economic, social and cultural, there is also the danger of its militarization.

The integration of AI into nuclear command, control and communications (NC3) systems, as well as its use in military decision-making, introduces serious and unprecedented risks to global security, according to a report.

Key negative effects include the acceleration of decision-making to “machine speed” (leaving little time for human judgment), increased vulnerability to cyberattacks, and the erosion of strategic stability.

According to the Bulletin of the Atomic Scientists, the command and control of nuclear weapons is a delicate and complicated system, designed to avoid errors while ensuring reliability under high-pressure conditions.

In environments where large amounts of data shape high-stakes outcomes, artificial intelligence has become a natural consideration.

“Integrating a rapidly evolving technology raises fundamental questions about accountability, data quality, and system reliability. When a single error could have irreversible consequences, how can trust be built around the integration of machine learning into systems that have long relied on human judgment and oversight?”

“What barriers should be maintained? Where are the opportunities for international collaboration and consensus?”

Tariq Rauf, former head of Security Policy and Verification at the Vienna-based International Atomic Energy Agency (IAEA), told IPS that the role and integration of Artificial General Intelligence (AGI) raises some of the most important questions of our technological era.

Integrating AGI into nuclear command, control, and communications (NC3) systems is not simply an engineering challenge: it is a civilizational challenge.

The problem of machine speed

Perhaps the most alarming aspect of AGI integration into NC3 systems, he noted, is the compression of decision-making timelines to “machine speed.” Nuclear strategy has historically depended on deliberate human judgment: the ability of decision-makers to pause, evaluate ambiguous data, consult advisors, and opt for restraint even under pressure or attack.

AGI systems, on the other hand, are designed to process and respond at speeds that no human can match. In a crisis, this creates a dangerous paradox: the same speed that makes AGI attractive also makes meaningful human oversight nearly impossible.

“If an AGI system mistakenly identifies a sensor anomaly as an incoming missile – something that has happened before with human-operated systems, as illustrated by the 1983 Soviet false alarm incident – the window for correction could be reduced from minutes to seconds.”

The margin of error in nuclear decision-making has always been uncomfortably narrow; AGI risks eliminating it entirely, Rauf said.

Data quality and system reliability

Data quality and integrity are key concerns regarding AGI. Machine learning systems are only as reliable as the data they are trained on, he argued.

“Nuclear environments present unique ultra-complex challenges: they involve rare, high-risk events with limited historical data, adversary actors that can deliberately feed misinformation into sensor networks, and geopolitical contexts that change faster than training data sets can capture.”

An AGI system that confidently acts on corrupted or misrepresented data in a nuclear context could trigger an escalation based on a fiction. Worse, the opacity of many machine learning models (the so-called “black box” problem) means that even system designers may not be able to explain why a particular result was generated, much less correct it in real time, Rauf stated.

Vladislav Chernavskikh, a researcher at the Weapons of Mass Destruction Program at the Stockholm International Peace Research Institute (SIPRI), told IPS that current state approaches to the AI-nuclear nexus already broadly converge on the principle of retaining human control in nuclear decision-making, but there is no consensus on how it should be defined or operationalized.

A formal recognition of this principle by nuclear weapon states, together with an elaboration of what constitutes human control in this context and how it can manifest itself in the area of nuclear weapons, could be one of the first steps towards minimizing risks, he stated.

At the AI Impact Summit in New Delhi last month, UN Secretary-General António Guterres said the future of AI cannot be decided by a handful of countries and the whims of a few billionaires.

Last year, the General Assembly took two decisive actions, he said.

First, by creating an Independent International Scientific Panel on Artificial Intelligence and, second, by launching a Global Dialogue on AI Governance within the UN, where all countries, together with the private sector, academia and civil society, can have a voice.

He told summit participants that real impact means technology that improves lives and protects the planet. And he asked them to build an AI for everyone, with dignity as the default setting.

UN spokesman Stéphane Dujarric told reporters last month that the Secretary-General is not calling for the United Nations to govern AI. He is calling for (and has put in place) an architecture, with the help of Member States, to try to ensure that everyone gets a seat at the table.

And as he said: “AI will impact us all and already has. It is vital that those countries that may not have the technology also have a voice and that science and justice are placed at the center of AI.”

Responsibility and accountability

In further analysis, Rauf said that when AGI recommendations or autonomous actions contribute to catastrophic results, the question of accountability becomes deeply problematic.

Traditional chains of command assign clear human responsibility at each decision point. The integration of AGI fractures this clarity. Is the software developer, the military commander, the government that deployed the system, or the algorithm itself responsible for a calculation error? he asked.

The absence of clear accountability frameworks is not just a legal or ethical problem: it is strategic, because both adversaries and allies need to understand who is in control and what decision logic is being applied.

Cyberattack vulnerability

AGI-enhanced or AGI-dependent NC3 systems also expand the attack surface for adversaries. Sophisticated cyberattacks, including adversarial attacks designed to manipulate AGI results, could potentially spoof or blind these systems in ways that are difficult to detect until it is too late. The integration of AGI thus creates new vectors of destabilization that did not exist in previous nuclear architectures, Rauf said.

The case for international collaboration

Despite these alarming challenges, international collaboration could be a potential avenue to manage risk. Confidence-building measures, shared technical standards, and “enforceable” bilateral or multilateral agreements on the limits of AGI autonomy in nuclear systems could help preserve strategic stability.

The history of arms control, Rauf said, shows that even adversaries can agree on rules that serve mutual survival interests. Extending that tradition to AGI-enabled NC3 systems is urgently needed before technology completely overtakes diplomacy.

“The integration of AGI into nuclear systems may be technically inevitable. Whether it is managed wisely is a political and moral choice that remains wide open, and one that seems beyond the intellectual, moral and ethical processing capabilities of today’s civilian and military ‘leaders’,” Rauf stated.

This article was presented by IPS NORAM, in collaboration with INPS Japan and Soka Gakkai International, with consultative status with the United Nations Economic and Social Council (ECOSOC).

IPS UN Office Report


