WHEN SOPHIA THE ROBOT first switched on, the world couldn't get enough. It had a cheery personality, it joked with late-night hosts, it had facial expressions that echoed our own. Here it was, finally — a robot plucked straight out of science fiction, the closest thing to true artificial intelligence that we had ever seen.
There's no doubt that Sophia is an impressive piece of engineering. Parents-slash-collaborating-tech-companies Hanson Robotics and SingularityNET equipped Sophia with sophisticated neural networks that let the robot learn from people and detect and mirror emotional responses, which makes it seem like it has a personality. It didn't take much to convince people of Sophia's apparent humanity — many of Futurism’s own articles refer to the robot as “her.” Piers Morgan even decided to try his luck for a date and/or sexually harass the robot, depending on how you want to look at it.
“Oh yeah, she is basically alive,” Hanson Robotics CEO David Hanson said of Sophia during a 2017 appearance on Jimmy Fallon’s Tonight Show. And while Hanson Robotics never officially claimed that Sophia contained artificial general intelligence — the comprehensive, life-like AI that we see in science fiction — the adoring and uncritical press that followed all those public appearances only helped the company grow.
But as Sophia became more popular and people took a closer look, cracks emerged. It became harder to believe that Sophia was the all-encompassing artificial intelligence that we all wanted it to be. Over time, articles that might have once oohed and ahhed about Sophia's conversational skills became more focused on the fact that they were partially scripted in advance.
Ben Goertzel, CEO of SingularityNET and Chief Scientist of Hanson Robotics, isn't under any illusions about what Sophia is capable of. "Sophia and the other Hanson robots are not really 'pure' as computer science research systems, because they combine so many different pieces and aspects in complex ways. They are not pure learning systems, but they do involve learning on various levels (learning in their neural net visual systems, learning in their OpenCog dialogue systems, etc.)," he told Futurism.
But he's interested to find that Sophia inspires a lot of different reactions from the public. "Public perception of Sophia in her various aspects — her intelligence, her appearance, her lovability — seems to be all over the map, and I find this quite fascinating," Goertzel said.
Hanson finds it unfortunate when people think Sophia is capable of more or less than it really is, but he also said that he doesn’t mind the benefits of the added hype, which, again, has been bolstered by the two companies’ repeated publicity stunts.
"Sophia and the other Hanson robots are not really 'pure' as computer science research systems..."
Highly publicized projects like Sophia convince us that true AI — human-like and perhaps even conscious — is right around the corner. But in reality, we're not even close.
The true state of AI research has fallen far behind the technological fairy tales we've been led to believe. And if we don't treat AI with a healthier dose of realism and skepticism, the field may be stuck in this rut forever.
NAILING DOWN A TRUE definition of artificial intelligence is tricky. The field of AI, constantly reshaped by new developments and changing goalposts, is sometimes best described by explaining what it is not.
"People think AI is a smart robot that can do things a very smart person would — a robot that knows everything and can answer any question," Emad Mousavi, a data scientist who founded a platform called QuiGig that connects freelancers, told Futurism. But this is not what experts really mean when they talk about AI. "In general, AI refers to computer programs that can complete various analyses and use some predefined criteria to make decisions."
Among the ever-distant goalposts for human-level artificial intelligence (HLAI) are the ability to communicate effectively — chatbots and machine learning-based language processors struggle to infer meaning or to understand nuance — and the ability to continue learning over time. Currently, the AI systems with which we interact, including those being developed for self-driving cars, do all of their learning before they are deployed and then stop learning for good.
“They are problems that are easy to describe but are unsolvable for the current state of machine learning techniques,” Tomas Mikolov, a research scientist at Facebook AI, told Futurism.
Right now, AI doesn't have free will and certainly isn't conscious — two assumptions people tend to make when faced with advanced or over-hyped technologies, Mousavi said. The most advanced AI systems out there are merely products that follow processes defined by smart people. They can't make decisions on their own.
In machine learning, which includes deep learning and neural networks, an algorithm is presented with boatloads of training data — examples of whatever it is that the algorithm is learning to do, labeled by people — until it can complete the task on its own. For facial recognition software, this means feeding thousands of photos or videos of faces into the system until it can reliably detect a face from an unlabeled sample.
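To make that concrete, here is a minimal sketch of what that kind of "learning" amounts to in code. The data below is invented stand-in data, not a real face dataset, and logistic regression stands in for whatever model a real system would use; the point is simply that the model fits a statistical mapping from human-labeled examples to labels and nothing more.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Pretend each row is an image flattened into pixel features, and each label
# says whether the image contains a face (1) or not (0). Real systems use
# thousands of genuinely labeled photos; this is synthetic stand-in data.
X = rng.normal(size=(1000, 64))
y = (X[:, :8].sum(axis=1) > 0).astype(int)  # arbitrary rule standing in for "contains a face"

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)          # "learning" = fitting a statistical model to labeled examples
print(model.score(X_test, y_test))   # accuracy on held-out samples from the same distribution
```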
Our best machine learning algorithms are generally just memorizing and running statistical models. To call it "learning" is to anthropomorphize machines that operate on a very different wavelength from our brains. Artificial intelligence is now such a big catch-all term that practically any computer program that automatically does something is referred to as AI.
If you train an algorithm to add two numbers, it will just look up or copy the correct answer from a table, Mikolov, the Facebook AI scientist, explained. But it can't generalize a better understanding of mathematical operations from its training. After learning that five plus two equals seven, you as a person might be able to figure out that seven minus two equals five. But if you ask your algorithm to subtract two numbers after teaching it to add, it won’t be able to. The artificial intelligence, as it were, was trained to add, not to understand what it means to add. If you want it to subtract, you’ll need to train it all over again — a process that notoriously wipes out whatever the AI system had previously learned.
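Here is a toy version of that failure, assuming a small off-the-shelf neural network rather than any specific system Mikolov describes. Fit it on addition and it reproduces only that mapping; refit it on subtraction and the earlier behavior is simply overwritten.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
pairs = rng.uniform(-10, 10, size=(5000, 2))

model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
model.fit(pairs, pairs.sum(axis=1))          # trained only on examples of a + b

print(model.predict([[7.0, 2.0]]))           # roughly 9.0: it adds, because that's all it was fit to
# There is no way to ask this model for 7 - 2; it computes the one mapping it was trained on.

model.fit(pairs, pairs[:, 0] - pairs[:, 1])  # "retraining" on subtraction replaces the previous fit
print(model.predict([[7.0, 2.0]]))           # roughly 5.0: it subtracts now, and no longer adds
```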
“It’s actually often the case that it’s easier to start learning from scratch than trying to retrain the previous model,” Mikolov said.
These flaws are no secret to members of the AI community. Yet, all the same, these machine learning systems are often touted as the cutting edge of artificial intelligence. In truth, they're actually quite dumb.
Take, for example, an image captioning algorithm. A few years back, one of these got some wide-eyed coverage because of the sophisticated language it seemed to generate.
“Everyone was very impressed by the ability of the system, and soon it was found that 90 percent of these captions were actually found in the training data,” Mikolov told Futurism. “So they were not actually produced by the machine; the machine just copied what it did see that the human annotators provided for a similar image so it seemed to have a lot of interesting complexity.” What people mistook for a robotic sense of humor, Mikolov added, was just a dumb computer hitting copy and paste.
“It’s not some machine intelligence that you’re communicating with. It can be a useful system on its own, but it’s not AI,” said Mikolov. He said that it took a while for people to realize the problems with the algorithm. At first, they were nothing but impressed.
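To see how unglamorous that kind of copying can be, here is a hedged sketch of caption "generation" by nearest-neighbor lookup. The feature vectors and captions below are invented for illustration and are not drawn from the system Mikolov was describing.

```python
import numpy as np

rng = np.random.default_rng(1)

# Pretend these are image feature vectors from some vision model, each paired
# with a caption written by a human annotator for that training image.
train_features = rng.normal(size=(4, 128))
train_captions = [
    "a dog catching a frisbee in a park",
    "a man riding a horse on a beach",
    "a plate of food on a wooden table",
    "a group of people standing around a cake",
]

def caption(query_features: np.ndarray) -> str:
    # Nearest neighbor by Euclidean distance; no language is generated at all.
    distances = np.linalg.norm(train_features - query_features, axis=1)
    return train_captions[int(distances.argmin())]

# A query image that resembles the first training image gets that exact caption back.
query = train_features[0] + rng.normal(scale=0.01, size=128)
print(caption(query))  # "a dog catching a frisbee in a park"
```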
WHERE DID WE GO so off course? The problem arises when our present-day systems, which remain deeply limited, are marketed and hyped up to the point that the public believes we have technology that we have no goddamn clue how to build.
"I am frequently entertained to see the way my research takes on exaggerated proportions as it progresses through the media," Nancy Fulda, a computer scientist working on broader AI systems at Brigham Young University, told Futurism. The reporters who interview her are usually pretty knowledgeable, she said. "But there are also websites that pick up those primary stories and report on the technology without a solid understanding of how it works. The whole thing is a bit like a game of 'telephone' — the technical details of the project get lost and the system begins to seem self-willed and almost magical. At some point, I almost don't recognize my own research anymore."
"At some point, I almost don't recognize my own research anymore."
Some researchers themselves are guilty of fanning the flames. Reporters who don't have much technical expertise and don't look behind the curtain are complicit, too. Even worse, some journalists are happy to play along and add hype to their coverage.
Other problem actors: researchers who build an AI algorithm and then present the back-end work they did themselves as the algorithm’s own creative output. Mikolov calls this a dishonest practice akin to sleight of hand. “I think it’s quite misleading that some researchers who are very well aware of these limitations are trying to convince the public that their work is AI,” Mikolov said.
That's important because how people think AI research is going shapes whether they want money allocated to it. This unwarranted hype could be preventing the field from making real, useful progress. Financial investments in artificial intelligence are inexorably linked to the level of interest (read: hype) in the field. That interest level — and the corresponding investment — fluctuates wildly whenever Sophia has a stilted conversation or some new machine learning algorithm accomplishes something mildly interesting. That makes it hard to establish a steady, baseline flow of capital that researchers can depend on, Mikolov suggested.
Mikolov hopes to one day create a genuinely intelligent AI assistant — a goal that he told Futurism is still a distant pipe dream. A few years ago, Mikolov, along with his colleagues at Facebook AI, published a paper outlining how this might be possible and the steps it might take to get there. But when we spoke at the Joint Multi-Conference on Human-Level Artificial Intelligence held in August by Prague-based AI startup GoodAI, Mikolov mentioned that many of the avenues people are exploring to create something like this are likely dead ends.
One of these likely dead ends, unfortunately, is reinforcement learning. Reinforcement learning systems, which teach themselves to complete a task through trial and error-based experimentation instead of using training data (think of a dog fetching a stick for treats), are often oversold, according to John Langford, Principal Researcher for Microsoft AI. Almost anytime someone brags about a reinforcement-learning AI system, Langford said, they actually gave the algorithm some shortcuts or limited the scope of the problem it was supposed to solve in the first place.
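For a sense of what that trial-and-error process looks like in code, here is a minimal tabular Q-learning sketch on a made-up six-state corridor where the only reward sits at the far end. Real benchmarks are vastly larger; as Langford suggests, impressive demos often narrow the problem until something this simple can succeed.

```python
import numpy as np

N_STATES = 6                            # states 0..5; state 5 is the goal
q = np.zeros((N_STATES, 2))             # value estimates for actions: 0 = move left, 1 = move right
alpha, gamma, epsilon = 0.1, 0.9, 0.2   # learning rate, discount, exploration rate

rng = np.random.default_rng(0)
for episode in range(500):
    state = 0
    while state < N_STATES - 1:
        # Explore occasionally, otherwise exploit the current value estimates.
        action = int(rng.integers(2)) if rng.random() < epsilon else int(q[state].argmax())
        next_state = max(0, state - 1) if action == 0 else state + 1
        reward = 1.0 if next_state == N_STATES - 1 else 0.0
        # Standard Q-learning update, driven only by the reward signal.
        q[state, action] += alpha * (reward + gamma * q[next_state].max() - q[state, action])
        state = next_state

print(q.argmax(axis=1))  # the learned policy for every non-terminal state is "move right"
```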
The hype that comes from these sorts of algorithms helps the researchers sell their work and secure grants. Press people and journalists use it to draw audiences to their platforms. But the public suffers — this vicious cycle leaves everyone else in the dark about what AI can really do.
There are telltale signs, Mikolov says, that can help you see through the misdirection. The biggest red flag is when you, as a layperson (and potential customer), aren't allowed to demo the technology for yourself.
“A magician will ask someone from the public to test that the setup is correct, but the person specifically selected by the magician is working with him. So if somebody shows you the system, then there’s a good likelihood you are just being fooled,” Mikolov said. “If you are knowledgeable about the usual tricks, it’s easy to break all these so-called intelligent systems. If you are at least a little bit critical, you will see that what [supposedly AI-driven chatbots] are saying is very easy to distinguish from humans.”
Mikolov suggests that you should question the intelligence of anyone trying to sell you the idea that they’ve beaten the Turing Test and created a chatbot that can hold a real conversation. Again, think of Sophia's prepared dialogue for a given event.
"Maybe I should not be so critical here, but I just can’t help myself when you have these things like the Sophia thing and so on, where they’re trying to make impressions that they are communicating with the robot at so on,” Mikolov told Futurism."Unfortunately, it's quite easy for people to fall for these magician tricks and fall for the illusion, unless you're a machine learning researcher who knows these tricks and knows what's behind them."
Unfortunately, so much attention to these misleading projects can stand in the way of progress by people with truly original, revolutionary ideas. It's hard to get funding to build something brand new, something that might lead to AI that can do what people already expect it to be able to do, when venture capitalists just want to fund the next machine learning solution.
If we want those projects to flourish, if we ever want to take tangible steps towards artificial general intelligence, the field will need to be a lot more transparent about what it does and how much it matters.
“I am hopeful that there will be some super smart people who come with some new ideas and will not just copy what is being done,” said Mikolov. “Nowadays it’s some small, incremental improvement. But there will be smart people coming with new ideas that will bring the field forward.”
More on the nebulous challenges of AI: Artificial Consciousness: How To Give A Robot A Soul