Nuclear weapons fundamentally reshaped the international political system. Any crisis between states possessing such weapons constitutes a nuclear crisis, as the mere possession of these potentially world-ending weapons influences the strategic calculations of political decision-makers. Recently, technological advancements such as generative artificial intelligence (AI), particularly Large Language Models (LLMs), have emerged as a transformative force in global security, akin to the paradigm shift prompted by nuclear weapons.
Artificial intelligence has the potential to transform the existing global security order and fundamentally influence decision-making processes, particularly in nuclear deterrence. This article examines how advances in AI are reshaping traditional security concerns related to nuclear decision-making, specifically how emerging technologies such as Deepfakes, media generated or manipulated using AI, may influence processes fundamental to nuclear deterrence.
The concept of nuclear deterrence rests on a state’s ability to persuade an adversary that the retaliatory costs of a preemptive nuclear strike would outweigh any benefits of launching one. These cost-benefit calculations are not static; the efficacy of nuclear deterrence therefore varies according to factors such as a state’s capabilities and the given conflict environment.
Since deterrence relies on credibility, rationality, effective communication, and accurate perception, the increasing prevalence of AI-generated disinformation poses a unique and serious risk. AI’s potential for misinformation, hallucinations, and opacity in decision-making could significantly alter the stability of nuclear deterrence and the effectiveness of nuclear command, control, and communications (NC3) systems.
The Role of NC3 in Nuclear Deterrence
Nuclear command, control, and communications (NC3) refers to the system through which a state’s nuclear decision-making processes are overseen and implemented. These systems play a critical role in threat detection and warning, adaptive response planning, and the execution of nuclear launch orders.
Successful nuclear deterrence hinges on an adversary’s confidence that a nuclear response to aggression is both credible and unavoidable. This calculation depends on rational decision-making, timely intelligence, and reliable information flow. Deterrence also relies on certain key assumptions about the actors involved, namely: rationality, credibility, effective communication, and accurate perception.
Because nuclear deterrence rests on these abstract assumptions, the integration of AI into NC3 systems could significantly disrupt each of them, and the assumption of rationality is especially vulnerable. For rational cost-benefit calculation to function in a nuclear crisis, decision-makers require adequate time and information, awareness of the operational environment, knowledge of adversarial intent and red lines, and trust in the information they receive.
Deepfakes and the Erosion of Rationality
One consequence of the increased usage and availability of generative AI has been the explosion of synthetic media such as Deepfake images and videos, which are generated or manipulated by this technology. Deepfakes have increasingly become a weapon of disinformation actors: in 2023, for example, a Deepfake photograph purporting to depict an explosion near the Pentagon created a viral online panic, with the real-world consequence of briefly depressing U.S. stock prices.
More drastically, Deepfake technologies have increasingly moved from the civilian realm to the battlefield, given their potential advantages in disrupting an adversary’s intelligence collection or impersonating commanders’ orders. One notable example of this development was a Deepfake video of Ukrainian President Volodymyr Zelensky apparently calling on Ukrainian troops to immediately lay down their arms and surrender to occupying Russian forces.
Another example was the broadcast, on hacked Russian media channels, of a Deepfake of Russian President Vladimir Putin declaring martial law, which resulted in some Russian citizens following its fictitious evacuation orders.
Deepfakes thus represent potential ‘Weapons of Mass Distortion’ that could undermine confidence in the informational reliability and communication vital to the NC3 systems upholding nuclear deterrence. Beyond the obvious capability of Deepfake technology to produce a falsified video of a world leader announcing a nuclear launch, it could also be used to corrupt NC3 algorithms’ classifications of situational information.
In addition to the concerns surrounding Deepfakes, AI systems themselves threaten the rationality assumption through their tendency to “hallucinate”, confidently presenting incorrect outputs as factual, which could produce false positives in nuclear threat detection and surveillance. Moreover, the “black box” opacity of AI-generated conclusions may leave nuclear decision-makers unable to assess how a system arrived at its outputs, and thus whether those outputs can be trusted.
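To make this failure mode concrete, the sketch below shows one way an alert pipeline might refuse to treat a single confident model output as ground truth, requiring corroboration from independent sensors before anything is escalated for human review. The interface, sensor names, and thresholds are hypothetical illustrations, not any real NC3 design.

```python
# Minimal sketch, assuming hypothetical sensor sources and thresholds:
# a single high-confidence AI detection is treated as a possible
# false positive, never as ground truth.
from dataclasses import dataclass

@dataclass
class SensorReport:
    source: str        # e.g. "satellite_ir" or "ground_radar" (invented names)
    threat_detected: bool
    confidence: float  # model-reported confidence in [0, 1]

def corroborated_alert(reports: list[SensorReport],
                       min_sources: int = 2,
                       min_confidence: float = 0.9) -> str:
    """Escalate only when independent sensors agree on a detection."""
    confirming = {r.source for r in reports
                  if r.threat_detected and r.confidence >= min_confidence}
    if len(confirming) >= min_sources:
        return "ESCALATE_TO_HUMAN_REVIEW"   # human judgment, never automatic response
    if confirming:
        return "FLAG_POSSIBLE_FALSE_POSITIVE"
    return "NO_ACTION"

# One confident detection from a single source is flagged, not escalated:
print(corroborated_alert([SensorReport("satellite_ir", True, 0.97)]))
# -> FLAG_POSSIBLE_FALSE_POSITIVE
```

The design choice this toy captures is simply that a confident output and a trustworthy output are not the same thing; corroboration requirements are one way to keep a hallucination from propagating into a decision.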
The Debate Surrounding AI in Nuclear Arms Control and Disarmament
Proponents of AI integration into NC3 systems argue that not all AI applications pose a risk; some may actually strengthen nuclear security and disarmament efforts. Such potential benefits are most often cited in relation to intelligence collection, monitoring, and the evaluation of data on states’ nuclear disarmament measures.
Nuclear disarmament, the effort to decrease nuclear weapons stockpiles, is enacted through international legal treaties such as the Comprehensive Nuclear-Test-Ban Treaty (CTBT) and the Treaty on the Non-Proliferation of Nuclear Weapons (NPT). Proponents argue that AI provides an opportunity to speed up and enhance these efforts, particularly the monitoring and detection of nuclear weapons testing, for example through mass analysis of seismographic data.
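A minimal sketch of what such analysis might involve appears below, assuming only a synthetic background of natural seismicity and a simple statistical threshold; real CTBT monitoring relies on far richer waveform analysis across the International Monitoring System, and the data here are invented for illustration.

```python
# Toy sketch of automated test-ban monitoring: flag seismic events whose
# magnitude is anomalous relative to a regional background. The background
# distribution and threshold are invented, not real monitoring parameters.
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical background seismicity: small natural events around magnitude 3.
background = rng.normal(loc=3.0, scale=0.4, size=1000)

def flag_anomalies(events: np.ndarray, background: np.ndarray,
                   z_threshold: float = 4.0) -> np.ndarray:
    """Return indices of events far outside the background distribution."""
    mu, sigma = background.mean(), background.std()
    z_scores = (events - mu) / sigma
    return np.where(z_scores > z_threshold)[0]

# A magnitude-5.1 event stands out against the background and is flagged
# for analyst review; routine events are not.
new_events = np.array([2.9, 3.2, 5.1, 3.0])
print(flag_anomalies(new_events, background))  # -> [2]
```

The point of the sketch is scale rather than sophistication: automating even simple screening like this frees human analysts to concentrate on the small fraction of events that warrant scrutiny.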
Additionally, there is historical precedent for AI integration within Cold War weapons systems in both the US and USSR, for example in early warning systems and the Soviet Union’s “Dead Hand” system, designed to act as a failsafe in the event that Soviet leadership was destroyed by a preemptive U.S. strike.
However, these systems were fundamentally different from today’s generative AI. Their designers clearly understood the limitations of automation and its propensity toward false alarms, and human oversight was firmly established as an indispensable component of NC3 systems.
Opponents of increased AI integration often point to the numerous “near misses” in which human judgment, diverging from technical protocols, prevented accidental nuclear weapons use. In 1983, Soviet officer Stanislav Petrov judged reports from a satellite early warning system to be a malfunction and declined to report an incoming American attack up the chain of command. In 1995, Russian systems initially misread a joint US-Norwegian scientific rocket launch as a possible preemptive US strike on the Russian Federation, with Russian President Boris Yeltsin ultimately refraining from what was presented as a retaliatory option.
As previously highlighted, AI’s “black box” nature and the resulting lack of transparency in internal decision-making processes may further raise the risk of such accidental escalation. Today’s nuclear decision-makers, predominantly older individuals with Cold War-era backgrounds, are likely to understand the functionality of these systems less well than their predecessors understood earlier, less advanced systems, making it harder to identify errors in a crisis.
Another growing concern raised by opponents is that increased AI integration may introduce new opportunities for cyber threats, such as hacking, into NC3 systems. Experts warn that AI-integrated systems are more prone to cyber vulnerabilities than traditional NC3 structures, creating new opportunities for data and operational manipulation, with cyber defenses lagging behind evolving threats.
Currently, the largest cyber threat to these systems comes from so-called “integrity attacks”, which aim to lead AI systems to faulty decisions by poisoning the data used to train them. The potential for other forms of attack, such as Denial of Service (DoS) and ransomware, additionally raises concerns about disruptions to the availability of NC3 systems, crucially undermining the time-sensitive component of nuclear deterrence.
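A toy illustration of why poisoned training data is so dangerous follows, using a deliberately minimal classifier and invented one-dimensional sensor readings: flipping a modest fraction of training labels is enough to shift the decision boundary so that a benign reading is classified as a threat.

```python
# Toy sketch of an "integrity attack": label-flipping a slice of training
# data drags a simple classifier's decision boundary. Data, classes, and
# classifier are all invented for illustration.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-D sensor feature: benign readings cluster near 0, threats near 5.
benign = rng.normal(0.0, 1.0, 100)
threat = rng.normal(5.0, 1.0, 100)
X = np.concatenate([benign, threat])
y = np.concatenate([np.zeros(100), np.ones(100)])  # 0 = benign, 1 = threat

def nearest_centroid_predict(X_train, y_train, x):
    """Classify x by whichever class centroid it lies closest to."""
    c0 = X_train[y_train == 0].mean()
    c1 = X_train[y_train == 1].mean()
    return 0 if abs(x - c0) <= abs(x - c1) else 1

x = 2.0  # a mildly elevated but benign reading
print(nearest_centroid_predict(X, y, x))  # -> 0 (benign)

# Poison: relabel the 30 highest benign readings as "threat",
# pulling the threat centroid toward benign territory.
y_poisoned = y.copy()
y_poisoned[np.argsort(X[:100])[-30:]] = 1
print(nearest_centroid_predict(X, y_poisoned, x))  # -> 1 (false threat)
```

Nothing about the sensor or the deployed model was touched; corrupting the training pipeline alone was sufficient to manufacture a false positive, which is precisely what makes integrity attacks hard to detect after the fact.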
Future Considerations and Policy Safeguards
Recognizing these risks, nuclear-armed states have thus far maintained the policy of keeping humans “in the loop” in nuclear decision-making. The U.S. 2022 Nuclear Posture Review, along with 2024 policy statements from France, the UK, China, and other nuclear states, reaffirms the necessity of human oversight in NC3 systems.
However, as AI capabilities advance, continued vigilance will be required to ensure that AI integration does not erode essential safeguards. It is also essential to remember that the ways nuclear weapons states integrate AI within NC3 systems will differ according to their respective nuclear doctrines, military cultures, and ethical considerations.
Ensuring strict oversight, regulatory measures, and enhanced cybersecurity protections will be essential to maintaining stability. Governments must establish clear protocols for AI use in nuclear decision-making, ensuring that AI remains an advisory tool rather than an autonomous decision-maker, as sketched below. Additionally, international cooperation will be necessary to implement AI-specific nuclear risk-reduction measures, similar to past bilateral and multilateral arms control agreements.
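What an “advisory only” protocol might look like in the abstract is sketched here; the recommendation categories and interface are hypothetical illustrations, not any state’s doctrine, but they capture the structural point that the system can recommend while only a named human can authorize.

```python
# Toy sketch of a "human in the loop" safeguard: the AI component can only
# recommend; any escalatory action requires an explicit, attributed human
# decision. All names and categories below are invented for illustration.
from enum import Enum
from typing import Optional

class Recommendation(Enum):
    NO_ACTION = "no_action"
    RAISE_READINESS = "raise_readiness"
    ESCALATE = "escalate"

def execute(recommendation: Recommendation,
            human_approval: bool,
            approver: Optional[str] = None) -> str:
    """Advisory-only gating: the system never acts on its own recommendation."""
    if recommendation is Recommendation.NO_ACTION:
        return "logged: no action"
    if not human_approval or approver is None:
        return f"blocked: '{recommendation.value}' awaiting human decision"
    return f"executed: '{recommendation.value}' approved by {approver}"

print(execute(Recommendation.ESCALATE, human_approval=False))
# -> blocked: 'escalate' awaiting human decision
```

The essential property is that no code path runs from model output to escalatory action without passing through an explicit, logged human approval.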