Here are some very true facts:
- Computers are in charge of more of our military capabilities than ever before, including controlling autonomous vehicles and detection systems.
- They are also in charge of our nuclear arsenals.
- Artificial intelligence is getting more autonomous.
- Experts warn that AI-controlled weapons could threaten us all in the not-so-distant future.
You might be tempted to put these pieces together and assume that AI might autonomously start a nuclear war. This is the subject of a new paper and article published today by the RAND Corporation, a nonprofit think tank that researches national security, as part of the organization's Security 2040 initiative.
But AI won't necessarily cause a nuclear war; no matter what AI fearmonger Elon Musk tweets, artificial intelligence will only trigger a nuclear war if we decide to build artificial intelligence that can start nuclear wars.
The RAND Corporation hosted a series of panels with mysterious, unnamed experts in national security, nuclear weaponry, and artificial intelligence, asking them to speculate on how AI might advance in the coming years and what that could mean for nuclear war.
Much of the article discusses how hyper-intelligent computers could transform how a nation decides to launch its nuclear missiles. The researchers imagine algorithms that could track intercontinental ballistic missiles, launch a nation's nukes before they're destroyed in an incoming attack, or even deploy retaliatory strikes before an enemy's nukes have left their silos.
It also mentions that AI could advise human operators on when to launch missiles, and argues that future generations may be more willing to take that human operator out of the equation entirely, leaving life-or-death decisions to an AI trained to make them.
Oh, and the researchers also say that all these systems will be buggy as hell while people work out the kinks. Because what's the harm in a little trial and error when nukes are involved?
As we become more dependent on AI for military applications, we may need to reconsider how implementing such systems could affect nuclear powers worldwide, many of which are caught in a complicated web of alliances and political rivalries.
If you are despairing at the prospect of a computer-dominated military, the study's authors offer some solace. Buried within their findings is the very reasonable perspective that artificial intelligence, which excels at the super-niche tasks it is developed for, will continue to advance at an incremental pace and likely won't amount to much beyond those narrow tasks anytime soon. Yes, AI is sophisticated enough to win at Go, but it's not ready to be in charge of our nukes. At least, not yet.
In short, it will be a while before we have computers deciding when to launch our nukes (though, given the rate of human error and the seriously close calls it has caused in the past, it's a matter of opinion whether more human control is actually a good thing).