When Will We Have Artificial Intelligence As Smart as a Human? Here’s What Experts Think
Robots in the movies can think creatively, continue learning over time, and maybe even pass for conscious. Why don't we have that yet?
“Star Wars,” “Her,” and “I, Robot.” What do all these movies have in common? The artificial intelligence (AI) depicted in them is crazy-sophisticated: robots that can think creatively, continue learning over time, and maybe even pass for conscious.
Real-life artificial intelligence experts have a name for AI that can do all this: artificial general intelligence (AGI). For decades, scientists have tried all sorts of approaches to create AGI, using machine learning techniques such as reinforcement learning. No approach has proven to be much better than any other, at least not yet.
Indeed, there’s a catch here: despite all the excitement, we have no idea how to build AGI.
At The Joint Multi-Conference on Human-Level Artificial Intelligence, held last month in Prague, AI experts and thought leaders from around the world shared their goals, hopes, and progress towards human-level AI (HLAI), which is either the last stop before true AGI or the same thing, depending on whom you ask.
Either way, most experts think it’s coming — sooner rather than later. In a poll of conference attendees, AI research companies GoodAI and SingularityNet found that 37 percent of respondents think people will create HLAI within 10 years. Another 28 percent think it will take 20 years. Just two percent think HLAI will never exist.
Almost every expert who had an opinion hedged their bets; most responses to the question were peppered with caveats and “maybes.” There’s a lot we don’t know about the path towards HLAI, such as who will pay for it, or how we’re going to combine algorithms that can think with algorithms that can reason, since no current system does both.
Futurism caught up with a number of AI researchers, investors, and policymakers to get their perspective on when HLAI will happen. The following responses come from panels and presentations from the conference and exclusive interviews.
Irakli Beridze, Head of the Centre for Artificial Intelligence and Robotics, UNICRI, United Nations
At the moment, there is absolutely no indication that we are anywhere near AGI. And no one can say with any kind of authority or conviction that this will happen within a certain time frame. Or, even worse, no one can say whether this will ever happen, period. We may never have AGI, so we need to take that into account when we are discussing anything.
Seán Ó hÉigeartaigh, Executive Director of the Cambridge Center for the Study of Existential Risk
There’s still a lot of work to be done; there are still many things we don’t understand. Assuming we gain that understanding, it may be possible within 50 years.
John Langford, Principal Researcher at Microsoft AI
I think we should enjoy the technology while it advances. We should be looking out for where to go in the future. But on the other hand, it’s not like we have human-level AI right now and I don’t think it’s going to happen very quickly. I think that if I’m lucky it’ll happen in my lifetime.
A worm’s level of intelligence is actually pretty doable. If you try to look at vision and planning, this is kind of narrowly doable. Planning as its own thing is pretty well solved; the integration of planning and learning is not. Planning in a way which works with [machine learning] is not very well solved.
Marek Rosa, CEO and CTO at GoodAI
I think we are almost there. I am not predicting we will have general AI in three years or in 30 years. But I am confident it could happen any day.
Ben Goertzel, CEO at SingularityNET and Chief Scientist at Hanson Robotics
I don’t think we need fundamentally new algorithms. I think we do need to connect our algorithms in different ways than we do now. If I’m right, then we already have the core algorithms that we need… I believe we are less than ten years from creating human-level AI.
Hava Siegelmann, Program Manager at DARPA
I don’t think we’re almost there in the technology for general AI. I think general AI is almost a branding for a very general idea. Lifelong learning is an example of that — it’s a very particular type of AI. We know the theoretical foundation of that already, we know how nature does it, and it’s very well defined. There is a very clear direction, there is a metric. I think we can reach it in the near future.
On the last day of the conference, a number of participants took part in a lightning round of sorts. Almost entirely for fun, these experts were encouraged to throw out a date by which they expected us to figure out how to make HLAI. The following answers, some of which were given by the same people who already answered the question, should be taken with an entire shaker of salt — some were meant as jokes and others are total guesses.
Maybe 20 [years]?

Ryota Kanai, CEO at ARAYA

I really have no idea which year, but if I have to say one, I’d say ten years in the future. The reason is it’s kind of vague; you know, anything can happen in ten years.
Pavel Kordik, Associate Professor at Czech Technical University and Co-founder at Recombee
I think it’s very hard to define human-level AI, so it might come in five years or later.
Kenneth Stanley, Professor at University of Central Florida and Senior Research Scientist at Uber AI Labs
I’m between 20 and 2,000 years.
Based on the recent progress, I would guess it will come in 50 years.
It will occur on December 8, 2026, which will be my 60th birthday. I will delay it until then just to have a great birthday party.