The End of Mutual Assured Destruction?
What AI Will Mean for Nuclear Deterrence
Sam Winter-Levy and Nikita Lalwani
August 7, 2025

A Russian intercontinental ballistic missile system in Moscow, May 2025 (Yulia Morozova / Reuters)

SAM WINTER-LEVY is a Fellow in Technology and International Affairs at the Carnegie Endowment for International Peace.
NIKITA LALWANI is a Nonresident Scholar at the Carnegie Endowment for International Peace. She served as Director for Technology and National Security at the National Security Council and as Senior Adviser to the Director of the CHIPS Program Office at the U.S. Department of Commerce during the Biden administration.
The rapid development of artificial intelligence in recent years has led many analysts to suggest that it will upend international politics and the military balance of power. Some have gone so far as to claim, in the words of the technologists Dan Hendrycks, Eric Schmidt, and Alexandr Wang, that advanced AI systems could “establish one state’s complete dominance and control, leaving the fate of rivals subject to its will.”
AI is no doubt a transformative technology, one that will strengthen the economic, political, and military foundations of state power. But the winner of the AI race will not necessarily enjoy unchallenged dominance over its major competitors. The power of nuclear weapons, the most significant invention of the last century, remains a major impediment to the bulldozing change brought by AI. So long as systems of nuclear deterrence remain in place, the economic and military advantages produced by AI will not allow states to fully impose their political preferences on one another. Consider that the U.S. economy is almost 15 times larger than that of Russia, and almost 1,000 times larger than that of North Korea, yet Washington struggles to get Moscow or Pyongyang to do what it wants, in large part because of their nuclear arsenals.
Some analysts have suggested that AI advances could challenge this dynamic. To undermine nuclear deterrence, AI would need to knock down its central pillar: a state’s capacity to respond to a nuclear attack with a devastating nuclear strike of its own, what is known as second-strike capability. AI technology could plausibly make it easier for a state to destroy a rival’s entire nuclear arsenal in one “splendid first strike” by pinpointing the locations of nuclear submarines and mobile launchers. It could also prevent a rival from launching a retaliatory strike by disabling command-and-control networks. And it could strengthen missile defenses such that a rival could no longer credibly threaten retaliation. If AI could in this way help a state escape the prospect of mutual assured destruction, the technology would make that state unrivaled in its capacity to threaten and coerce adversaries—an outcome in line with increasingly popular visions of AI-enabled dominance.
But undermining the nuclear balance of power will not be easy. Emerging technologies still face very real constraints in the nuclear domain. Even the most sophisticated AI-powered targeting and sensor systems may struggle to locate a mobile nuclear launcher hidden under a bridge, isolate the signatures of a nuclear-armed submarine from the background noise of the ocean, and orchestrate the simultaneous destruction of hundreds of targets on land, in the air, and at sea—with zero room for error. And competitors will respond to their adversaries’ use of new technology with moves of their own to defend their systems, as they have at every turn since the dawn of the atomic age.
Yet even if it does not challenge nuclear deterrence, AI may encourage mistrust and dangerous actions among nuclear-armed states. Many of the steps that governments could take to protect and toughen their second-strike capabilities risk alarming rivals, potentially spurring expensive and dangerous arms races. It also remains possible that AI systems could cross a crucial threshold and exhibit extremely rapid improvements in capabilities. Were that to happen, the advantages such systems confer on the country that possesses them could become more pronounced and harder for rivals to contend with. Policymakers should monitor for such a scenario and facilitate regular communication between AI and nuclear experts. At the same time, they should take steps to reduce the probability of accidents and escalation, including assessing nuclear systems for AI-related vulnerabilities and maintaining channels of communication between nuclear powers. Such steps will help ensure that nuclear stability—and not just nuclear deterrence—endures in the age of AI.
FIRST STRIKE

Nuclear deterrence depends, most fundamentally, on states’ possessing the ability to retaliate after absorbing a nuclear attack: so long as two nuclear powers credibly maintain a second-strike capability that can inflict unacceptable damage on their adversary, a first strike is suicidal. This understanding has for decades sustained a relatively stable equilibrium. But second-strike capabilities are not invulnerable. States can eliminate delivery platforms, such as road-mobile missile launchers and nuclear submarines, provided that they can find them. The difficulty of finding and disabling these platforms is one of the central obstacles to launching a splendid first strike. The sheer size of China, Russia, the United States, the Atlantic Ocean, and the Pacific Ocean—the most important domains for nuclear competition today—makes such a strike hard to accomplish.
The emergence of powerful AI systems, however, could solve that problem. By processing and analyzing vast amounts of data, such systems could help a military better target the nuclear assets of its rivals. Consider ground-launched mobile missiles, one of the platforms that underpin Russian and Chinese second-strike capabilities. These missiles, which are carried on vehicles that can hide under camouflage netting, bridges, or tunnels and drive from one concealed location to another, are probably the most difficult element of Russian and Chinese nuclear forces to eliminate. (Silo-based ballistic missiles, by contrast, are much more vulnerable to attack.) The improved speed and scale of AI-empowered intelligence processing may make it easier to conduct operations against these vehicles. AI systems can scour and integrate huge amounts of data from satellites, reconnaissance aircraft, signals intelligence intercepts, stealth drones, ground-based sensors, and human intelligence to more effectively find and track mobile nuclear forces.
When it comes to the sea, the potential convergence of AI with sensing technologies might make the oceans “transparent,” allowing governments to track ballistic missile submarines in real time. That is a particular concern for the United States, which keeps a much higher percentage of its warheads on submarines than Russia or China does. AI could make it easier to track submarines by automating pattern recognition from multiple types of sensors across massive ocean areas and over long durations. It could also help a state hack into the systems its adversaries use to track their own weapons.
Yet even with the assistance of AI, states will not be absolutely sure that a splendid first strike can knock out a rival’s capacity to retaliate. On land, for instance, China and Russia could respond to improvements in U.S. tracking systems with their own countermeasures. They could invest in antisatellite weapons and jamming capabilities. They could adopt old-fashioned low-tech solutions, such as covering roads with netting or constructing decoys, to increase the number of targets an attacker would need to strike. They could order their launchers to emit fewer signals, making it harder for the United States to track them. They could modify the launchers to move faster, widening the target area U.S. strikes would have to hit. They could even use their own AI systems to inject false information into channels monitored by the U.S. intelligence community.
In the maritime domain, too, AI is unlikely to make the sea fully transparent. Any system will struggle to continuously identify, track, and monitor multiple targets over long ranges and amid ocean background noise, especially as submarines get quieter and oceans noisier. Submarines remain extraordinarily difficult to detect when submerged at depth and operating at low speeds, due to how sound moves underwater, shifting ocean conditions, and the inherent noisiness of the marine environment. In the seas, false alarms are frequent; reliable contact is rare. And at sea, as on land, major powers can tip the scales in their favor through various countermeasures: they can jam signals, manipulate sensor data, use undersea sensors and uncrewed vehicles to detect adversary assets, and operate their own submarines in protected bastions close to their home shores. Detection will thus remain a matter of probability, even with the introduction of AI—and states are unlikely to want to risk a splendid first strike on anything less than a safe bet.
COMMAND AND CONTROL

Beyond making it easier to find and destroy an adversary’s nuclear weapons, AI could plausibly threaten the nuclear command-and-control systems that would be needed to launch a retaliatory strike. Command-and-control systems are responsible for detecting attacks, reporting them to the relevant authority, and transmitting retaliation orders to nuclear forces. These systems must be able to identify a wide range of missiles; assess d