A Different Take
Amid all the talk of artificial intelligence (AI) threatening society, first by taking over human jobs and eventually, as it grows more intelligent, by taking over the world entirely, some experts believe that AI shouldn't be feared. Foremost among them is Google's director of engineering and noted futurist Ray Kurzweil, who has said time and again that the technological singularity won't necessarily go down as expected.
Kurzweil discussed the future of AI at the Council on Foreign Relations (CFR) in Washington, D.C., on Friday. While he agreed with Tesla founder and CEO Elon Musk, who has warned of the potential "existential risks" a superintelligent AI could pose, Kurzweil said that humanity would be able to overcome these "difficult episodes" if they ever actually happen.
He continued by noting that scientific and technological advances have always come with inherent risks, and that AI should not be considered any more (or less) of a threat. "Technology has always been a double-edged sword. Fire kept us warm, cooked our food and burned down our houses," Kurzweil said. He offered another example: "World War II – 50 million people died, and that was certainly exacerbated by the power of technology at that time."
Addressing concerns over job displacement due to intelligent automation, Kurzweil reiterated a point he had previously made to Fortune: while some jobs will be lost, new ones will be created. What those new jobs will be, he admits he cannot say, since they haven't been invented yet.
His main point was that, ultimately, AI will benefit us just as previous technologies have. "My view is not that AI is going to displace us," he said at the CFR. "It's going to enhance us. It does already."
Living With Machines
Indeed, for Kurzweil, the singularity, if it happens, won't be a machine takeover. Instead, he expects it to be more of a co-existence, with machines reinforcing human abilities. Kurzweil predicts that a hybrid AI will become available by the 2030s. This hybrid AI, he explained, would allow human beings to tap directly into the cloud with just their brains, using what he called a neocortex connection. Kurzweil has previously predicted that part of this reinforcement would come from nanobots, which he said would flow through our bodies by 2030.
In short, according to Kurzweil, there will be a melding of humans and machines as a result of the singularity and the growth of AI. Kurzweil said that we’re already experiencing this with our smartphones, which he referred to as “brain extenders.” He told the audience at CFR, “I mean, who can do their work without these brain extenders we have today? And that’s going to continue to be the case.”
Kurzweil added that, beyond connecting the human brain to machines via the cloud, these neocortex technologies would also allow one person to connect directly to another's neocortex. As a result, humans would become smarter and funnier. The technological singularity, he argued, would produce a more diverse group of thinkers and allow humanity's expertise to expand and deepen.
So, instead of making us obsolete, Kurzweil predicts that as machines become more intelligent, humanity will grow smarter as well. We can only hope he is correct.