It’s Really Hard to Give AI “Common Sense”
Think common sense is scarce in humans? In AI, it's nonexistent.
In humans, common sense is relatively easy to identify, albeit a bit difficult to define. Joining a line at the back? That’s common sense. Grabbing the red-hot end of a metal poker that was in the fire moments before? Not so much.
How do we teach something as nebulous as common sense to artificial intelligence (AI)? Many researchers have tried to do so and failed.
But that might soon change. Now, Microsoft co-founder Paul Allen is joining their ranks.
Allen is investing an additional $125 million into his nonprofit computer lab, the Allen Institute for Artificial Intelligence (AI2), doubling its budget for the next three years, according to The New York Times. This influx of money will go toward existing projects as well as Project Alexandria, a new initiative focused on teaching “common sense” to robots.
“When I founded AI2, I wanted to expand the capabilities of artificial intelligence through high-impact research,” said Allen in a press release. “Early in AI research, there was a great deal of focus on common sense, but that work stalled. AI still lacks what most 10-year-olds possess: ordinary common sense. We want to jump-start that research to achieve major breakthroughs in the field.”
However, even today’s most advanced AI systems can’t handle more than simple questions and commands. How might one of them approach an unfamiliar situation and use “common sense” to calibrate the appropriate action and response? Right now, it can’t.
“Despite the recent AI successes, common sense — which is trivially easy for people — is remarkably difficult for AI,” Oren Etzioni, the CEO of AI2, said in the press release. “No AI system currently deployed can reliably answer a broad range of simple questions, such as, ‘If I put my socks in a drawer, will they still be in there tomorrow?’ or ‘How can you tell if a milk carton is full?’”
“For example, when AlphaGo beat the number one Go player in the world in 2016, the program did not know that Go is a board game,” Etzioni added.
There’s a simple reason we’ve failed to teach AI common sense up to this point: it’s really, really hard.
Gary Marcus, founder of the AI lab Geometric Intelligence, drew inspiration from the ways in which children develop common sense and abstract thinking. Imperial College London researchers focused on symbolic AI, a technique in which a human labels everything for an AI.
Neither strategy has so far resulted in what we could define as “common sense” for robots.
Project Alexandria will take a far more robust approach to the problem. According to the press release, it will integrate research in machine reasoning and computer vision, and figure out a way to measure common sense. The researchers also plan to crowdsource common sense from humans.
“I am hugely excited about Project Alexandria,” Marcus said in the press release. “The time is right for a fresh approach to the problem.”
The task is daunting. But if AI is going to reach the next level of utility and integrate into even more facets of human life, we’ll have to overcome it. Project Alexandria might be our best shot at doing so.