Today, many of the world’s leading companies are in a one-of-a-kind race: to bring artificial intelligence (AI) to life. Machine learning systems are already at the core of many businesses, so it’s no surprise that updates about this AI or that neural net often pop up in our newsfeeds. Such headlines typically read along the lines of “AI beats human players in video game” or “AI mimics human speech,” and sometimes even “AI detects cancer using machine learning.”
But just how close are we to having machines with the intelligence of a human—machines that we can talk with and work with like we do any other individual? Machines that are conscious?
While all of the aforementioned developments are real, Yann LeCun, Director of AI Research at Facebook and a professor of computer science at NYU, thinks that we may be overestimating the abilities of today’s AI, and, thus, building up a bit of hype. “We’re very far from having machines that can learn the most basic things about the world in the way humans and animals can do,” LeCun told The Verge in an interview published last week. “Like, yes, in particular areas machines have superhuman performance, but in terms of general intelligence we’re not even close to a rat.”
This so-called artificial general intelligence (AGI) refers to an AI system capable of performing virtually every task a human being could. Today’s AIs, by contrast, specialize in particular tasks: image or speech recognition, for example, or identifying patterns in the vast quantities of data they have been trained on. These specialized AIs are also called “applied AI” or “narrow AI” to highlight their rather limited intelligence.
Speaking to Futurism via email, Manuel Cebrian, one of the MIT researchers who developed Shelley, an AI horror storyteller, agreed with LeCun’s sentiments. “AI is just a great tool,” he said, adding, “it seems to me, based on my work with Shelley, that AI is very far from being able to create professional-level horror fiction.” And thus, still quite far from human levels of intelligence.
LeCun clarified that we shouldn’t devalue the significant progress AI researchers have made in recent months and years, but that work in machine learning and neural networks is not the same as developing true artificial intelligence. “So for example, and I don’t want to minimize at all the engineering and research work done on AlphaGo by our friends at DeepMind, but when [people interpret the development of AlphaGo] as significant progress towards general intelligence, it’s wrong,” LeCun added. “It just isn’t.”
Pierre Barreau, CEO of Aiva Technologies, the company behind the music-composing AI Aiva, also thinks that the advancements that we have made towards synthetic intelligence are overstated. “AGI is a very hyped topic,” he noted via email. “I am, in general, quite optimistic about how fast tech develops, but I think a lot of people don’t realize the complexity of our own brain, let alone creating an artificial one.”
People often use AI-related terms as if they were synonymous with true artificial intelligence. News coverage drops terms like machine learning, deep learning, and artificial neural networks whenever AI is discussed. While each of these has something to do with AI, none of them is AI per se.
Machine learning is a tool: a set of algorithms that learn by ingesting huge amounts of data, gradually building up a model of the task at hand rather than following hand-written rules. Deep learning is a kind of machine learning built on artificial neural networks, systems loosely inspired by the way the human brain works, with neurons stacked in many layers so they can learn increasingly abstract features from raw data.
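To make the distinction concrete, here is a minimal, illustrative sketch of what “learning from data” means at its simplest: a single artificial neuron (a perceptron) that adjusts its weights from labeled examples of the logical AND function. This is a toy of my own construction, not any system mentioned above; deep learning stacks many layers of such neurons, but the core idea of nudging weights to fit the data is the same.

```python
def train_perceptron(examples, epochs=20, lr=0.1):
    """Learn two weights and a bias from (inputs, label) examples."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), label in examples:
            # The neuron "fires" (outputs 1) if the weighted sum is positive.
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = label - pred          # zero when the prediction is correct
            w[0] += lr * err * x1       # nudge the weights toward the data
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

# Train on all four input pairs for logical AND.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
print([predict(w, b, x1, x2) for (x1, x2), _ in data])  # prints [0, 0, 0, 1]
```

The point of the sketch is how narrow this is: the neuron has learned one tiny function from four examples, nothing more. The gap between this kind of pattern-fitting and general intelligence is exactly what LeCun is describing.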
All of these, AI experts believe, are foundations for a synthetic intelligence with truly human cognition. But we are still at a nascent stage; for all the progress made, current research isn’t really close to creating true intelligence.
So the big question is, when can we expect to have this type of intelligent AI? What’s the specific timeline?
For Luke Tang, general manager of AI startup accelerator TechCode, the shift will start with a “breakthrough in unsupervised learning algorithms.” Once this is accomplished, “machine intelligence can quickly surpass human intelligence,” he said in a statement sent to Futurism.
Needless to say, the path to this will be quite challenging. “In order to achieve AGI, there will need to be major breakthroughs not just in software, but also in Neuroscience and Hardware,” Barreau explained. He clarified, “We are starting to hit the ceiling of Moore’s law, with transistors being as small as they can physically get. New hardware platforms like quantum computing have not yet shown that they can beat performances of our usual hardware in all tasks.”
Indeed, for an AI to be considered truly intelligent, many argue that it would have to pass a battery of proposed tests, foremost of which is the Turing Test, in which a machine and a human both converse with a second human being, who must determine which one is the machine. Barreau said that he’s confident we will see an AI pass the Turing Test in our lifetime; i.e., one that could pass as a human being. However, he said this won’t necessarily be “AGI, but good enough to pass as AGI.”
AGI is generally considered a prerequisite for the so-called singularity: the moment when intelligent machines surpass humankind’s levels of intelligence, spurring runaway, exponential technological growth that transforms the foundations of life as we know it. The term was popularized by science fiction author Vernor Vinge, who wrote in 1983: “We will soon create intelligences greater than our own. When this happens, human history will have reached a kind of singularity, an intellectual transition as impenetrable as the knotted space-time at the center of a black hole, and the world will pass far beyond our understanding.”
While this is something SoftBank CEO Masayoshi Son and Google’s Ray Kurzweil are excitedly looking forward to, other brilliant minds of today, such as Elon Musk, Stephen Hawking, and even Bill Gates, aren’t quite as keen on this moment. They assert that, in the same way that we don’t really understand what it means to have a super-intelligent AI, we’re also not prepared for whatever consequences the singularity would bring.
But what if we shift our perspective a bit? Instead of looking at AI as humanity’s downfall, why not see it as a partner? Musk seems to hint at this with his Neuralink project, while Kurzweil invoked it when he talked about nanobots living inside us, augmenting our capabilities. The key word here is augmenting, something Google’s current push for AI seems to be laying the groundwork for.
“We should focus our efforts on an exciting outcome of AI: augmented intelligence (i.e. human intelligence being augmented by AI),” Barreau said. Like Aiva and Shelley, other AIs have done considerably well when working side-by-side with human beings.
Still, with intelligent robots like Hanson Robotics’ Sophia and SoftBank’s Pepper, it does not seem very far-fetched to imagine truly intelligent machines living among us. Could Masayoshi Son’s super-intelligent AI, with an IQ of 10,000, be the cognitive machine intelligence we’re looking for? If so, we might have to wait at least three more decades. “It’s probably only 30 to 50 years away,” Tang said. “So it is likely; it will just take some time to get there. But it also means many of us will have a chance to see that day come!”