Outlook Not So Good

Elon Musk has put a lot of thought into the harsh realities and wild possibilities of artificial intelligence (AI). These considerations have left him convinced that we need to merge with machines if we're to survive, and he's even created a startup dedicated to developing the brain-computer interface (BCI) technology needed to make that happen. Yet even though OpenAI, the AI research lab he co-founded, has created an AI capable of teaching itself, Musk recently said that efforts to make AI safe only have "a five to 10 percent chance of success."

Musk shared these less-than-stellar odds with the staff at Neuralink, the aforementioned BCI startup, according to a recent Rolling Stone article. Despite Musk's heavy involvement in the advancement of AI, he has openly acknowledged that the technology brings with it not only the potential for serious problems, but the promise of them.

The challenges to making AI safe are twofold.

First, a major goal of AI — and one that OpenAI is already pursuing — is building AI that's not only smarter than humans, but also capable of learning independently, without any human programming or interference. Where that ability could take it is unknown.

Then there is the fact that machines do not have morals, remorse, or emotions. Future AI might be capable of distinguishing between "good" and "bad" actions, but distinctly human feelings remain just that — human.

In the Rolling Stone article, Musk further elaborated on the dangers and problems that currently exist with AI, one of which is the potential for just a few companies to essentially control the AI sector. He cited Google's DeepMind as a prime example.

"Between Facebook, Google, and Amazon — and arguably Apple, but they seem to care about privacy — they have more information about you than you can remember," said Musk. "There's a lot of risk in concentration of power. So if AGI [artificial general intelligence] represents an extreme level of power, should that be controlled by a few people at Google with no oversight?"

Worth the Risk?

Experts are divided on Musk's assertion that we probably can't make AI safe. Facebook founder Mark Zuckerberg has said he's optimistic about humanity's future with AI, calling Musk's warnings "pretty irresponsible." Meanwhile, Stephen Hawking has publicly stated his belief that AI systems pose enough of a risk to humanity that they may eventually replace us altogether.

Sergey Nikolenko, a Russian computer scientist who specializes in machine learning and network algorithms, recently shared his thoughts on the matter with Futurism. "I feel that we are still lacking the necessary basic understanding and methodology to achieve serious results on strong AI, the AI alignment problem, and other related problems," said Nikolenko.

As for today's AI, he thinks we have nothing to worry about. "I can bet any money that modern neural networks will not suddenly wake up and decide to overthrow their human overlord," said Nikolenko.

Musk himself might agree with that, but his sentiments are likely more focused on how future AI may build on what we have today.

Already, we have AI systems capable of creating AI systems, ones that can communicate in their own languages, and ones that are naturally curious. While the singularity and a robot uprising are strictly science fiction tropes today, such AI progress makes them seem like genuine possibilities for the world of tomorrow.

But these fears aren't necessarily enough reason to stop moving forward. We also have AIs that can diagnose cancer, identify suicidal behavior, and help stop sex trafficking.

The technology has the potential to save and improve lives globally, so while we must consider ways to make AI safe through future regulation, Musk's words of warning are, ultimately, just one man's opinion.

He even said as much himself to Rolling Stone: "I don't have all the answers. Let me be really clear about that. I'm trying to figure out the set of actions I can take that are more likely to result in a good future. If you have suggestions in that regard, please tell me what they are."

