A pair of researchers associated with the U.S. Air Force want to give the nuclear codes to an artificial intelligence.

Air Force Institute of Technology associate dean Curtis McGiffin and Louisiana Tech Research Institute researcher Adam Lowther, also affiliated with the Air Force, co-wrote an article — with the ominous title "America Needs a 'Dead Hand'" — arguing that the United States needs to develop "an automated strategic response system based on artificial intelligence."

In other words, they want to give an AI the nuclear codes. And yes, as the authors admit, it sure sounds a lot like the "Doomsday Machine" from Stanley Kubrick's 1964 satire "Dr. Strangelove."

The "Dead Hand" of the title refers to a semiautomated Soviet system that would have launched nuclear weapons if certain conditions were met, including the death of the country's leadership.

This time, though, the AI-powered system suggested by Lowther and McGiffin wouldn't even wait for a first strike against the U.S. to occur — it would know what to do ahead of time.

"[I]t may be necessary to develop a system based on artificial intelligence, with predetermined response decisions, that detects, decides, and directs strategic forces with such speed that the attack-time compression challenge does not place the United States in an impossible position," they wrote.

Attack-time compression is the phenomenon whereby modern technologies, including highly sensitive radar and near-instantaneous communication, have drastically reduced the time between detecting an attack and having to decide on a response. The challenge: newer weapons, particularly hypersonic cruise missiles and glide vehicles, cut that window even further.

"These new technologies are shrinking America’s senior-leader decision time to such a narrow window that it may soon be impossible to effectively detect, decide, and direct nuclear force in time," Lowther and McGiffin argue.

The idea is to use an AI-powered system to negate any surprise capability or advantage an adversary might gain from striking first. It would replace what Lowther and McGiffin describe as a "system of systems, processes and people" that "must inevitably be capable of detecting launches anywhere in the world and have the ability to launch a nuclear strike against an adversary."

Not surprisingly, as Bulletin of the Atomic Scientists editor Matt Field points out, handing the nuclear codes over to an AI could have plenty of negative side effects. One of them is automation bias: people tend to blindly trust what machines tell them, even favoring automated decision-making over human judgment.

And then there's the simple fact that the AI wouldn't have much real-world data to learn from, Field argues, which means most of the data fed to it would have to be simulated.

And if "Dr. Strangelove" is anything to go by, such an automated system only deters an attack on the United States if all major world powers know it exists. Kept secret, it becomes pointless, and risks total annihilation anyway.

Or as Dr. Strangelove himself puts it: "Of course, the whole point of the doomsday machine is lost if you keep it a secret!"

READ MORE: Strangelove redux: US experts propose having AI control nuclear weapons [Bulletin of the Atomic Scientists]

More on nuclear weapons: The Russian Orthodox Church May Stop Blessing Nuclear Weapons

