Elon Musk and Stephen Hawking: Ban Military Robots Capable of Artificial Intelligence
We must not develop every technology we are able to; rather, we need to place strong restrictions on the kinds of advancements that are made…especially when it comes to robots. This is the assertion recently made by philosophers, scientists, and some of the most prominent leaders of our era. The claim was stated in an open letter calling for a ban on autonomous weapons, which was published today (July 28th, 2015) and contains the signatures of several thousand individuals.
There is a difference between technology that is controlled completely by human hands (such as drones, which are piloted and fired by human operators) and intelligent robots that make their own decisions in relation to targets. The latter, many feel, should not be supported or even tolerated.
The letter was posted by the Future of Life Institute, and was signed by such notable individuals as Stephen Hawking, theoretical physicist; Elon Musk, founder of Tesla Motors; Daniel Dennett, cognitive scientist; Steve Wozniak, Apple cofounder; and Demis Hassabis, Google DeepMind CEO (along with 39 other Google employees). And that is just the start of the list.
A number of sites claimed that Hawking and others are worried that (gasp!) the robots might one day take over the world. But those sites are being sensationalist. “Elon Musk And Stephen Hawking Sign Open Letter In Hopes Of Preventing Robot Uprising,” the headlines proclaim. Except that the letter does not claim this at all.
Ultimately, it is not about some Matrix future where humanity is enslaved by a host of AIs. Rather, the concern is the technology falling into the wrong hands and igniting an arms race.
The letter opens with a look towards where an unchecked future may take us:
Artificial Intelligence (AI) technology has reached a point where the deployment of such systems is — practically if not legally — feasible within years, not decades, and the stakes are high: autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear arms.
Many arguments have been made for and against autonomous weapons, for example that replacing human soldiers by machines is good by reducing casualties for the owner but bad by thereby lowering the threshold for going to battle. The key question for humanity today is whether to start a global AI arms race or to prevent it from starting.
According to the signatories, this arms race must not be allowed to take place. We, as a society, need to stand up and ensure that our leaders follow the correct (the moral) path. We must not make war easier on ourselves.
In order for our future to truly be secure, the letter claims, no nation should pursue a robotic military governed by artificial intelligence, as doing so would push our world into a technological race that would, in all likelihood, rival what we saw during the Cold War between the United States and the Soviet Union.
If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow. Unlike nuclear weapons, they require no costly or hard-to-obtain raw materials, so they will become ubiquitous and cheap for all significant military powers to mass-produce.
The primary problem with this, however, is not that nations would start a “who can make the most deadly robot” competition, but that terrorists and everyday individuals would easily be able to purchase (or perhaps even manufacture) the weapons created as a result of such an arms race:
It will only be a matter of time until they appear on the black market and in the hands of terrorists, dictators wishing to better control their populace, warlords wishing to perpetrate ethnic cleansing, etc. Autonomous weapons are ideal for tasks such as assassinations, destabilizing nations, subduing populations and selectively killing a particular ethnic group. We therefore believe that a military AI arms race would not be beneficial for humanity. There are many ways in which AI can make battlefields safer for humans, especially civilians, without creating new tools for killing people.
The letter also makes an appeal to individuals who currently work in AI. Ultimately, since these are the individuals who will be creating the technology, these must be the individuals who take a stand and act as moral compasses for others:
Just as most chemists and biologists have no interest in building chemical or biological weapons, most AI researchers have no interest in building AI weapons — and do not want others to tarnish their field by doing so, potentially creating a major public backlash against AI that curtails its future societal benefits. Indeed, chemists and biologists have broadly supported international agreements that have successfully prohibited chemical and biological weapons, just as most physicists supported the treaties banning space-based nuclear weapons and blinding laser weapons.
In summary, we believe that AI has great potential to benefit humanity in many ways, and that the goal of the field should be to do so. Starting a military AI arms race is a bad idea, and should be prevented by a ban on offensive autonomous weapons beyond meaningful human control.
Whatever our decisions may be over the next few months and years, the world we see two decades from now will likely look very little like the one that we know today. Personally, I think that developing AI is an amazing (and amazingly dangerous) endeavor, and we must proceed with the greatest caution.