Working in artificial intelligence can be an insular pursuit. It’s easy to lose track of the bigger picture when you spend an entire career developing a niche, hyper-specific AI application. An engineer might finally step away and realize that the public never actually needed such a robust system; the marginal improvements they spent so much time on meant little in the real world.
Still, we need these engineers with lofty, as-yet-unattainable goals. And one goal in particular still lingers on the horizon for the more starry-eyed computer scientists out there: building a human-level artificial intelligence system that could change the world.
Coming up with a definition of human-level AI (HLAI) is tough because so many people use the term interchangeably with artificial general intelligence (AGI): the thoughtful, emotional, creative sort of AI that exists only in fictional characters like C-3PO and “Ex Machina’s” Ava.
Human-level AI is similar to, but not quite as powerful as, AGI, for the simple reason that many in the know expect AGI to surpass anything we mortals can accomplish. Though some see this as an argument against building HLAI, some experts believe that only an HLAI could ever be clever enough to design a true AGI; human engineers would only be needed up to a certain point once we get the ball rolling. (Again, neither type of AI system exists yet, nor will either anytime soon.)
At a conference on HLAI held by Prague-based AI startup GoodAI in August, a number of AI experts and thought leaders were asked a simple question: “Why should we bother trying to create human-level AI?”
For those AI researchers who have detached from the outside world and gotten stuck in their own little loops (yes, of course we care about your AI-driven digital marketplace for farm supplies), the responses may remind them why they got into this line of work in the first place. For the rest of us, they offer a glimpse of the great things to come.
For what it’s worth, this particular panel was more of a lightning round — largely for fun, the experts were instructed to come up with a quick answer rather than taking time to deliberate and carefully choose their words.
“Why should we bother trying to create human-level AI?”
Ben Goertzel, CEO at SingularityNET and Chief Scientist at Hanson Robotics
AI is a great intellectual challenge and it also has more potential to do good than any other invention. Except superhuman AI which has even more.
Tomas Mikolov, Research Scientist at Facebook AI
[Human-level AI will give us] ways to make life more efficient and basically guide [humanity].
Kenneth Stanley, Professor at University of Central Florida, Senior Engineering Manager and Staff Scientist at Uber AI Labs
I think we’d like to understand ourselves better and how to make our lives better.
Pavel Kordik, Associate Professor at Czech Technical University and Co-founder at Recombee
To create a singularity, perhaps.
Ryota Kanai, CEO at ARAYA
To understand ourselves.