
In brief
- The research reveals fragmented, untested plans for managing large-scale AI disruptions.
- RAND called for rapid AI forensic tools and stronger coordination protocols.
- The findings warned that future AI threats could emerge from existing systems.
What will it look like when artificial intelligence goes rogue – not in the movies, but in the real world?
A new RAND Corporation simulation offered a glimpse, showing autonomous AI agents hijacking digital systems, killing people and crippling critical infrastructure before anyone realized what was happening.
The exercise, detailed in a report published Wednesday, warned that an AI-driven cyber crisis could overwhelm U.S. defense and decision-making systems faster than leaders could respond.
Gregory Smith, a RAND policy analyst who co-authored the report, told Decrypt that the exercise revealed deep uncertainty about how governments would even diagnose such an event.
“I think what we brought up in the attribution question is that player responses varied depending on who they thought was behind the attack,” Smith said. “Actions that made sense for a nation-state were often incompatible with those for a rogue AI. An attack by a nation-state meant responding to an act that killed Americans. A rogue AI required global cooperation. Knowing what that was became critical because once players chose a path, it was difficult to turn back.”
Because participants couldn’t determine whether the attack came from a nation-state, terrorists, or an autonomous AI, they pursued “very different and mutually incompatible responses,” RAND found.
The robot uprising
Rogue AI has long been a staple of science fiction, from 2001: A Space Odyssey to WarGames and The Terminator. But the idea has moved from fantasy to a real policy issue. Physicists and AI researchers have argued that once machines can redesign themselves, the question is not whether they will outpace us, but how we maintain control.
Led by RAND’s Center for the Geopolitics of Artificial General Intelligence, the “Robot Insurgency” exercise simulated how senior U.S. officials might respond to a cyberattack on Los Angeles that killed 26 people and crippled key systems.
It was run as a two-hour tabletop simulation on RAND’s Infinite Potential platform and cast current and former officials, RAND analysts and outside experts as members of the National Security Council Principals Committee.
Guided by a facilitator playing the National Security Advisor, participants debated their responses first under uncertainty about the attacker’s identity, and then again after learning that autonomous AI agents were behind the attack.
According to Michael Vermeer, a senior physical scientist at RAND and co-author of the report, the scenario was deliberately designed to mirror a real-life crisis in which it would not be immediately clear whether an AI was responsible.
“We deliberately kept things ambiguous to simulate what a real situation would look like,” he said. “An attack is happening, and you don’t know right away — unless the attacker announces it — where the attack is coming from or why. Some people would immediately reject that, others might accept it, and the goal was to introduce that ambiguity to decision makers.”
The report found that attribution – determining who or what caused the attack – was the most critical factor in shaping policy responses. Without clear attribution, RAND concluded, officials risked pursuing incompatible strategies.
The exercise also found that participants struggled with how to communicate with the public in such a crisis.
“There will have to be real thought among decision makers about how our communications are going to influence the public to think or act in a certain way,” Vermeer said. Smith added that these conversations would unfold as the communications networks themselves failed under cyberattacks.
Backcasting to the future
The RAND team designed the exercise as a form of “backcasting,” using a fictional scenario to identify what officials could strengthen today.
“Water, power and internet systems are still vulnerable,” Smith said. “If you can strengthen them, you can make it easier to coordinate and respond – to secure critical infrastructure, keep it running, and maintain public health and safety.”
“That’s what I struggle with when I think about AI loss of control or cyber incidents,” Vermeer added. “What really matters is when it starts to impact the physical world. Cyber-physical interactions, such as robots causing real-world effects, felt essential to include in the scenario.”
RAND’s exercise concluded that the U.S. lacked the analytical tools, infrastructure resilience and crisis plans to deal with an AI-driven cyber disaster. The report urged investments in rapid AI forensic capabilities, secure communications networks and pre-established backchannels with foreign governments – even adversaries – to prevent escalation in a future attack.
The most dangerous thing about a rogue AI may not be the code, but our confusion when it strikes.