AI Researchers Disagree With Elon Musk’s Warnings About Artificial Intelligence
"Mr. Musk’s megaphone seems to be rather unnecessarily distorting the public debate.”
Distorting the Debate?
The fear of super-intelligent machines is as real as it gets for Tesla and SpaceX CEO and founder Elon Musk. He’s spoken about it many times, but perhaps never in stronger terms than when he told U.S. governors that artificial intelligence (AI) poses “a fundamental risk to the existence of human civilization.” The comment caught the attention of not just the governors present, but also AI researchers — and they’re not very happy about it.
“While there needs to be an open discussion about the societal impacts of AI technology, much of Mr. Musk’s oft-repeated concerns seem to focus on the rather far-fetched super-intelligence take-over scenarios,” Arizona State University computer scientist Subbarao Kambhampati told Inverse. “Mr. Musk’s megaphone seems to be rather unnecessarily distorting the public debate, and that is quite unfortunate.”
Kambhampati, who also heads the Association for the Advancement of AI and is a trustee for the Partnership for AI, wasn’t the only one who reacted to Musk’s most recent AI warning. Francois Chollet and David Ha, deep learning researchers at Google, also took to Twitter to defend AI and machine learning (ML).
AI/ML makes a few existing threats worse. Unclear that it creates any new ones.
— François Chollet (@fchollet) July 16, 2017
University of Washington in Seattle researcher Pedro Domingos simply tweeted a “sigh” of disbelief.
Is There Really an AI Threat?
Both Kambhampati and Ha pushed back on the premise that Musk — because of his work with OpenAI, his development of self-driving technologies at Tesla, and his recent Neuralink project — has access to cutting-edge AI and therefore knows what he’s talking about. “I also have access to the very most cutting-edge AI and frankly I’m not impressed at all by it,” Ha said in another tweet.
Kambhampati, meanwhile, pointed to the 2016 AI report by the Obama administration, which made timely yet positive recommendations about AI regulations and policies. The White House report didn’t share “the super-intelligence worries that seem to animate Mr. Musk,” Kambhampati told Inverse, suggesting to him that those concerns may not be well-founded.
It seems unfair, however, that Musk is getting all the attention when he’s not the only one who has warned about super-intelligence. Famed physicist Stephen Hawking has repeatedly commented on the possibility of an AI apocalypse. The real question is: should we really fear AI?
Given the current state of AI, there seems to be little to fear. While the technology has seen tremendous advances recently, and some experts think we’re closer to reaching the technological singularity (the point when computers surpass human-level intelligence), current AI is nowhere near as advanced as the doomsday robots of science fiction. Nor is it clear that it ever will be.
Notable futurist and “singularity enthusiast” Ray Kurzweil even thinks the singularity won’t be something to fear. If anything, what’s more frightening is how we choose to use AI. That’s why the best course right now is to pursue AI research with clear goals and guidelines. So Musk is right to say that regulation is necessary. But Kambhampati, Chollet, and Ha are also right that there’s no need for alarmism.