According to Payne, the models escalated to the point of tactical nuclear war in 95 percent of scenarios; he noted that nuclear threats between the models were far more likely to escalate than de-escalate ...
AI like ChatGPT uses nuclear escalation in 95% of war game simulations, study finds - ChatGPT, Claude and Gemini threaten ...
Imagine handing the nuclear launch codes to the world’s most advanced artificial intelligence. You’d hope the machine would ...
Leading AIs from OpenAI, Anthropic and Google opted to use nuclear weapons in simulated war games in 95 per cent of cases ...
AI chatbots like ChatGPT and Claude chose nuclear war in 95% of crisis simulations, raising concerns about military AI ...
As the Department of Defense pushes for greater AI integration, researchers said the top models chose the nuclear option in nearly all war simulations.
Top AI models placed in simulated war games consistently chose to launch. GPT-5.2, Claude Sonnet 4, and Gemini 3 Flash treated escalation as a rational strategy in 21 high-stakes crisis games ...
At least one AI model in every war game escalated the conflict by threatening to use nuclear weapons, the study found.
An artificial intelligence researcher conducting a war games experiment with three of the world’s most used AI models found ...
War games study finds top AI models (OpenAI, Google, Anthropic) chose nuclear escalation 95% of the time—what it means for AI safety and defense policy.
A recent study put AI models such as Anthropic's Claude, Google Gemini and ChatGPT into war-like scenarios, and almost without exception the models were willing to choose the nuclear option ...