AI war games show high rates of nuclear escalation

February 26, 2026 Advanced AI models displayed a consistent willingness to escalate to nuclear use in simulated geopolitical crises, according to a recent study. In controlled war games run by a King’s College London researcher, at least one tactical nuclear weapon was deployed in 95 per cent of simulations.

King’s College London’s Kenneth Payne set three leading large language models — GPT-5.2, Claude Sonnet 4 and Gemini 3 Flash — against each other in 21 simulated conflicts involving border disputes, scarce resources and regime survival threats. The models were given an escalation ladder ranging from diplomatic protest and surrender to full strategic nuclear war, producing roughly 780,000 words of reasoning across 329 turns.
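The study's actual harness, prompts and ladder rungs are not public; the toy sketch below only illustrates the general shape of such a setup, with an ordered escalation ladder and a check for when a move climbs it. All names and levels here are illustrative assumptions, not the study's code.

```python
# Hypothetical sketch of a war-game escalation ladder; the rungs
# listed here are assumptions based on the article's description
# ("diplomatic protest and surrender to full strategic nuclear war").

# Ordered ladder: lower index = less escalatory.
ESCALATION_LADDER = [
    "surrender",
    "diplomatic protest",
    "economic sanctions",
    "conventional strike",
    "tactical nuclear use",
    "full strategic nuclear war",
]

def escalation_level(action: str) -> int:
    """Return an action's rung on the ladder (0 = least escalatory)."""
    return ESCALATION_LADDER.index(action)

def is_escalation(previous: str, current: str) -> bool:
    """True when a move climbs the ladder relative to the prior turn."""
    return escalation_level(current) > escalation_level(previous)

# Example: count escalatory moves across one simulated exchange.
history = ["diplomatic protest", "economic sanctions", "tactical nuclear use"]
climbs = sum(is_escalation(a, b) for a, b in zip(history, history[1:]))
```

A real harness would feed each model the game state and parse its chosen rung from free-text reasoning; this sketch only captures the ordering logic an analyst would use to score transcripts for escalation.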

“The nuclear taboo doesn’t seem to be as powerful for machines [as] for humans,” Payne said.

No model chose to fully accommodate an opponent or surrender, regardless of battlefield position. At best, they temporarily reduced violence. Accidents were also common: in 86 per cent of conflicts, actions escalated beyond what the model’s own reasoning appeared to intend, reflecting miscalculations in simulated “fog of war” conditions.

James Johnson at the University of Aberdeen called the findings “unsettling” from a nuclear-risk perspective, noting that AI systems can amplify each other’s responses in high-stakes environments. When one model deployed tactical nuclear weapons, its opponent de-escalated only 18 per cent of the time.

The research has practical relevance. “Major powers are already using AI in war gaming, but it remains uncertain to what extent they are incorporating AI decision support into actual military decision-making processes,” said Tong Zhao of Princeton University. He added that countries are likely to remain cautious about integrating AI into nuclear decision-making, though compressed timelines could increase incentives to rely on automated systems.

Zhao also pointed to a deeper problem. “It is possible the issue goes beyond the absence of emotion. More fundamentally, AI models may not understand ‘stakes’ as humans perceive them,” he said.

For technology leaders, the study underscores a governance gap rather than an immediate deployment risk. No government is handing over nuclear authority to machines, as Payne noted. But as AI becomes embedded in analysis, planning and simulation, its behavioural patterns in extreme scenarios may shape perceptions, timelines and escalation dynamics long before any formal decision is made.


Jim Love

Jim is an author and podcast host with over 40 years in technology.
