Wary of AI
Stephen Hawking is, undoubtedly, one of modern society's greatest minds, so a lot of people pay attention whenever he shares his thoughts on the world. Recently, he has been talking about one subject in particular: the future.
Hawking has expressed his opinions on topics ranging from extraterrestrial life to artificial intelligence (AI), and about the latter he has serious misgivings. He isn't against developing AI technology. In fact, he once said AI could be the greatest event in the history of our civilization. But like many other scientists and thinkers in today's world, Hawking is concerned that the rise of AI brings serious risks along with it.
He has already warned us about AI's impact on middle-class jobs, and together with Tesla CEO Elon Musk, he has called for a ban on developing AI robots for military use. He's also worried that AI may take over the world or, worse yet, end it. Our best bet against this AI uprising, he now tells The Times, is the creation of "some form of world government" that could control the technology.
Hawking went on to explain his rationale behind this need for an international governing body:
Since civilization began, aggression has been useful inasmuch as it has definite survival advantages. It is hard-wired into our genes by Darwinian evolution. Now, however, technology has advanced at such a pace that this aggression may destroy us all by nuclear or biological war. We need to control this inherited instinct by our logic and reason.
A Guided Vision
To keep up with AI, we must be able to adapt. "The development of artificial intelligence could spell the end of the human race," Hawking said in late 2014. "It would take off on its own and redesign itself at an ever-increasing rate. Humans, who are limited by slow biological evolution, couldn't compete, and would be superseded."
He doesn't think the machines will be out to get us for any emotional reason, though: "The real risk with AI isn't malice but competence. A superintelligent AI will be extremely good at accomplishing its goals, and if those goals aren't aligned with ours, we're in trouble."
So that's what we need to survive the dawn of AI or superintelligence, according to Hawking: some form of governing body and the ability to adapt quickly.
Fortunately, several institutions working toward the former are already in place. Groups such as the Partnership on AI and the Ethics and Governance of Artificial Intelligence Fund (or AI Fund) have begun drafting guidelines and frameworks for building AI more conscientiously. The IEEE has also released the first "guidebook" for ethical AI systems.
Despite this potential for everything to go very, very wrong, Hawking remains optimistic about the future: "All this may sound a bit doom-laden, but I am an optimist. I think the human race will rise to meet these challenges."