In Brief
Tesla CEO and founder Elon Musk made special mention of artificial intelligence during the Q&A portion of an earnings call Wednesday. Responding to questions on the subject, Musk clarified what he meant when he referred to AI as mankind's biggest risk.
Not Against AI
Speaking at the Q&A portion of Wednesday’s conference call with Tesla investors, CEO and founder Elon Musk once again brought up his concerns over the development of artificial intelligence (AI). And, just as when he warned a group of U.S. governors about the potential risks of the technology, Musk said that he’s not at all against the pursuit of AI.
“I’m not advocating we stop development of AI or any of the straw man hyperbole things that have been written,” Musk said, in response to a question raised during the conference call. In fact, he’s the chairman and co-founder of OpenAI, a non-profit dedicated to “discovering and enacting the path to safe artificial general intelligence.”
In line with OpenAI’s goals, Musk’s warnings have centered on the need for clear standards in developing AI. “AI [is] just something that I think anything that represents a risk to the public deserves at least insight from the government, because one of the mandates of the government is the public wellbeing,” Musk said.
Caution Vs. Alarmism
While some AI researchers previously expressed concern over Musk’s seemingly alarmist tone at that gathering of U.S. governors, a good number of AI experts agree with the kind of watchfulness the Tesla CEO has been advocating. Various groups have put forward principles for AI development, including the IEEE’s guidelines for ethically aligned design and the Asilomar AI Principles developed during the 2017 Beneficial AI Conference.
The concern, as Musk clarified, is more with how people use AI. “I do think there are many great benefits to AI, we just need to make sure that they are indeed benefits and we don’t do something really dumb,” he said during the call.
In case things do go wrong, OpenAI isn’t Musk’s only bid to give humanity a fighting chance. His new Neuralink venture, for instance, aims to meld the human mind with machines. If that doesn’t cut it either, SpaceX is working toward getting humankind to Mars — an option physicist Stephen Hawking has considered a potential escape from an AI doomsday.
For Musk, however, the more immediate need is for government to understand AI better and to develop clear guidelines. “Insight is different from oversight,” he said during the call. “At least if the government can gain insight to understand what’s going on, and then decide what rules are appropriate to ensure public safety, that is what I’m advocating for.”