Elon Musk Leads AI Experts With Letter Urging UN to Consider Threat of Autonomous Weapons

“Lethal autonomous weapons threaten to become the third revolution in warfare.”

8.21.17 by Dom Galeon
Image Credit: Wikimedia Commons

A Clear Danger

Elon Musk has long been warning us about the dangers he believes are inherent to unregulated artificial intelligence (AI) development. He’s called the threat humankind’s biggest risk, and even said that it’s greater than any threat posed by North Korea. While some AI experts have criticized Musk for this, the OpenAI co-founder is hardly the only one in the industry who has offered warnings about the potential danger of AI systems.

In fact, 115 other experts, including DeepMind co-founder Mustafa Suleyman, have joined Musk in calling for stronger regulation of AI. “As companies building the technologies in Artificial Intelligence and Robotics that may be repurposed to develop autonomous weapons, we feel especially responsible in raising this alarm,” the group wrote in an open letter to the United Nations’ Convention on Certain Conventional Weapons (CCW). “Lethal autonomous weapons threaten to become the third revolution in warfare.”

The UN has just created the Group of Governmental Experts (GGE) on Lethal Autonomous Weapon Systems (LAWS), which will discuss and study the implications of modern weapons powered by AI. Musk, Suleyman, and the other experts urge the UN to act decisively and clearly on the matter, asking it to “work hard at finding means to prevent an arms race in these weapons, to protect civilians from their misuse, and to avoid the destabilizing effects of these technologies.”

Misusing AI

The group of experts, of course, isn’t against developing AI; after all, its members are involved in AI work across 26 countries. The problem is how AI is used: the group is wary of using the technology to build autonomous weapon systems, a trend that has already begun.

“Unlike other potential manifestations of AI which still remain in the realm of science fiction, autonomous weapons systems are on the cusp of development right now and have a very real potential to cause significant harm to innocent people along with global instability,” Ryan Gariepy, founder of Clearpath Robotics and one of the signatories, told The Guardian.

Their letter continues: “Once developed, they will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend. These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways.”

And as Musk has said about AI all along, now is the best time to implement the necessary regulation. “We do not have long to act,” he said earlier this month. “Once this Pandora’s box is opened, it will be hard to close.”

