A Fundamental Risk
As a guest speaker at the 2017 National Governors Association Summer Meeting, entrepreneur and innovator Elon Musk covered a number of topics, including artificial intelligence (AI). During his Saturday talk, the Tesla and SpaceX CEO and founder urged the governors in attendance to establish regulations for the development of AI.
Musk is very familiar with the topic of AI, as he's spoken about it a number of times. To be clear, he isn't afraid of AI itself. What scares him is what could happen if AI is left unchecked. In fact, he calls it potentially the “biggest risk we face as a civilization.”
One solution, he said, is early regulation. “Normally, the way regulations are set up is a whole bunch of bad things happen, there’s a public outcry, and after many years, a regulatory agency is set up to regulate that industry,” said Musk. “It takes forever. That, in the past, has been bad but not something which represented a fundamental risk to the existence of civilization. AI is a fundamental risk to the existence of human civilization.”
Preparing for an AI Future
Several of the governors asked Musk how it would be possible to regulate an industry that's so new, and he replied that the first step is getting a firm grasp on it: “The first order of business would be to try to learn as much as possible, to understand the nature of the issues.” That's what Musk has been doing through his non-profit AI research company OpenAI.
OpenAI isn't Musk's only preparation for an AI-dominated future, though; he has other ventures aimed at the same problem. One is Neuralink, which would give humanity the ability to keep up with AI by essentially incorporating the technology into ourselves. Another is SpaceX's plan to reach Mars. If successful, this would help ensure humanity's survival by giving us a potential second home in case AI takes over the Earth.
While there's still time, however, Musk is pushing for proactive regulation. “Until people see robots going down the street killing people, they don’t know how to react because it seems so ethereal,” he told the governors. “AI is a rare case where I think we need to be proactive in regulation instead of reactive. Because I think by the time we are reactive in AI regulation, it’s too late.”