When you return to school after summer break, it may feel like you forgot everything you learned the year before. But if you learned like an AI system does, you actually would have — as you sat down for your first day of class, your brain would take that as a cue to wipe the slate clean and start from scratch.

An AI system's tendency to forget what it previously learned when it takes in new information is called catastrophic forgetting.

That's a big problem. See, cutting-edge algorithms learn, so to speak, by analyzing countless examples of what they're expected to do. A facial recognition AI system, for instance, will analyze thousands of photos of people's faces, likely photos that have been manually annotated, so that it will be able to detect a face when it pops up in a video feed. But because these AI systems don't actually comprehend the underlying logic of what they do, teaching them to do anything else, even something pretty similar, like recognizing specific emotions, means training them all over again from scratch. Once an algorithm is trained, it's done; we can't update it anymore.
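To make that failure mode concrete, here's a minimal, hypothetical sketch in Python; it isn't any of the systems described in this article, just a toy model. A tiny classifier is trained on one task, then trained further on a second task, and its accuracy on the first task collapses because the new gradient updates overwrite the weights that encoded the old task.

```python
# Toy illustration of catastrophic forgetting (hypothetical, not any system from
# this article): train a tiny model on task A, then on task B, and watch its
# accuracy on task A fall apart.
import numpy as np

rng = np.random.default_rng(0)

def make_task(center):
    """Two Gaussian blobs at +center and -center; the label says which blob a point came from."""
    X = np.vstack([rng.normal(center, 0.5, (200, 2)),
                   rng.normal(-center, 0.5, (200, 2))])
    y = np.array([1] * 200 + [0] * 200)
    return X, y

def train(w, b, X, y, steps=500, lr=0.1):
    """Plain logistic regression fit by gradient descent, starting from the given weights."""
    for _ in range(steps):
        p = 1 / (1 + np.exp(-(X @ w + b)))
        w -= lr * X.T @ (p - y) / len(y)
        b -= lr * np.mean(p - y)
    return w, b

def accuracy(w, b, X, y):
    return np.mean(((X @ w + b) > 0) == y)

# Task A and task B put their classes in different regions of the input space.
X_a, y_a = make_task(np.array([2.0, 2.0]))
X_b, y_b = make_task(np.array([2.0, -2.0]))

w, b = np.zeros(2), 0.0
w, b = train(w, b, X_a, y_a)
print("after task A: accuracy on A =", accuracy(w, b, X_a, y_a))

# Continuing training on task B alone drags the weights away from task A's solution.
w, b = train(w, b, X_b, y_b)
print("after task B: accuracy on A =", accuracy(w, b, X_a, y_a),
      "| accuracy on B =", accuracy(w, b, X_b, y_b))
```

Nothing in this toy setup tells the model to hold on to what it learned about task A, so it doesn't; that, in miniature, is the problem researchers are trying to engineer around.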

For years, scientists have been trying to figure out how to work around the problem. If they succeed, AI systems would be able to learn from a new set of training data without overwriting most of what they already knew in the process. Basically, if the robots should someday rise up, our new overlords would be able to conquer all life on Earth and chew bubblegum at the same time.

But still, catastrophic forgetting is one of the major hurdles preventing scientists from building an artificial general intelligence (AGI): AI that's all-encompassing, empathetic, and imaginative, like the ones we see in TV shows and movies.

In fact, a number of AI experts who attended The Joint Multi-Conference on Human-Level Artificial Intelligence last week in Prague said, in private interviews with Futurism or during panels and presentations, that the problem of catastrophic forgetting is one of the top reasons they don’t expect to see AGI or human-level AI anytime soon.

But Irina Higgins, a senior research scientist at Google DeepMind, used her presentation during the conference to announce that her team had begun to crack the code.

She had developed an AI agent — sort of like a video game character controlled by an AI algorithm — that could think more creatively than a typical algorithm. It could "imagine" what the things it encountered in one virtual environment might look like elsewhere. In other words, the neural net was able to disentangle certain objects that it encountered in a simulated environment from the environment itself.
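In machine learning terms, "disentangling" means learning a representation in which separate latent dimensions track separate factors of variation, say, what an object is versus where it sits in the scene. One well-known way to push a network toward that kind of representation is the beta-VAE objective, which Higgins herself introduced in earlier work; whether this particular agent relies on exactly that objective isn't spelled out here, so treat the following Python sketch as illustrative rather than as her method.

```python
# Minimal beta-VAE-style autoencoder (illustrative sketch, not the model from
# Higgins' paper). Weighting the KL term by beta > 1 pressures each latent
# dimension to capture an independent factor of variation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class BetaVAE(nn.Module):
    def __init__(self, input_dim=4096, latent_dim=10, beta=4.0):
        super().__init__()
        self.beta = beta
        self.encoder = nn.Sequential(nn.Linear(input_dim, 256), nn.ReLU())
        self.to_mu = nn.Linear(256, latent_dim)
        self.to_logvar = nn.Linear(256, latent_dim)
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                                     nn.Linear(256, input_dim))

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        return self.decoder(z), mu, logvar

    def loss(self, x):
        recon, mu, logvar = self(x)
        # Reconstruction term: how well the decoded image matches the input.
        recon_loss = F.mse_loss(recon, x, reduction="sum") / x.size(0)
        # KL term: how far the latent posterior strays from a unit Gaussian prior.
        kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp()) / x.size(0)
        # beta > 1 trades some reconstruction quality for more disentangled latents.
        return recon_loss + self.beta * kl

model = BetaVAE()
x = torch.rand(8, 4096)  # a batch of 8 fake 64x64 grayscale images, flattened
print(model.loss(x).item())
```

Once the latent dimensions line up with real-world factors like shape, color, or position, it becomes much easier for an agent to recognize a familiar object even when everything else around it changes.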

This isn't the same as a human's imagination, where we can come up with new mental images altogether (even if no spherical, red bird exists, you can probably conjure one up in your mind's eye). The AI system isn't that sophisticated, but it can imagine objects it has already seen in new configurations or locations.

“We want a machine to learn safe common sense in its exploration so it's not damaging itself,” said Higgins in her speech at the conference, which had been organized by GoodAI. She had published her paper on the preprint server arXiv earlier that week, describing work that allows previously developed AI agents to continuously learn without forgetting earlier training.

Let’s say you’re walking through the desert (as one does) and you come across a cactus. One of those big, two-armed ones you see in all the cartoons. You can recognize that this is a cactus because you have probably encountered one before. Maybe your office bought some succulents to liven up the place. But even if your office is cactus-free, you could probably imagine what this desert cactus would look like in a big clay pot, maybe next to Brenda from accounting’s desk.

Now Higgins’ AI system can do pretty much the same thing. With just five examples of how a given object looks from various angles, the AI agent learns what it is, how it relates to the environment, and also how it might look from other angles it hasn't seen or in different lighting. The paper highlights how the algorithm was trained to spot a white suitcase or an armchair. After its training, the algorithm can then imagine how that object would look in an entirely new virtual world and recognize the object when it encounters it there.

“We run the exact setup that I used to motivate this model, and then we present an image from one environment and ask the model to imagine what it would look like in a different environment,” Higgins said. Again and again, her new algorithm excelled at the task compared to AI systems with entangled representations, which could predict fewer qualities and characteristics of the objects.

In short, the algorithm is able to note differences between what it encounters and what it has seen in the past. Like most people but unlike most other algorithms, the new system Higgins built for Google can understand that it hasn't come across a brand new object just because it's seeing something from a new angle. It can then use some spare computational power to take in that new information; the AI system updates what it knows about the world without being retrained and re-learning everything all over again. Basically, the system is able to transfer and apply its existing knowledge to the new environment. The end result is a sort of spectrum or continuum showing how it understands various qualities of an object.

Higgins’ model alone won’t get us to AGI, of course. But it marks an important first step towards AI algorithms that can continuously update as they go, learning new things about the world without losing what they already had.

“I think it’s very crucial to reach anything close to artificial general intelligence,” Higgins said.

And this work is all still in its early stages. These algorithms, like many other object recognition AI tools, excel at a rather narrow task with a constrained set of rules, such as looking at a photo and picking out a face among many things that are not faces. But Higgins' new AI system performs its narrow task in a way that more closely resembles creativity, a sort of digital simulation of an imagination.

And even though Higgins’ research didn’t immediately bring about the era of artificial general intelligence, her new algorithm can already improve the existing AI systems we use all the time. For instance, Higgins tried it on a major dataset used to train facial recognition software. After analyzing the thousands and thousands of headshots in the dataset, the algorithm could create a spectrum for any quality with which those photos had been labeled. As an example, Higgins presented a spectrum of faces ranked by skin tone.
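One way a spectrum like that could be produced (a hypothetical sketch, not Higgins' actual pipeline) is to embed every labeled photo into the learned representation, find the latent direction that best separates the label, and then sort the photos along that direction:

```python
# Hypothetical sketch of building a "spectrum" for a labeled quality from learned
# latent codes. The specifics here are assumptions for illustration only.
import numpy as np

def label_direction(latents, labels):
    """Unit vector pointing from the mean latent code of the 0-labeled photos to the 1-labeled ones."""
    d = latents[labels == 1].mean(axis=0) - latents[labels == 0].mean(axis=0)
    return d / np.linalg.norm(d)

def spectrum(latents, labels):
    """Indices of the photos ordered along the label's latent direction, from 'least' to 'most'."""
    scores = latents @ label_direction(latents, labels)
    return np.argsort(scores)

# Toy usage: 1,000 fake 10-dimensional latent codes with a binary annotation
# (a stand-in for a subjective label like "attractive").
rng = np.random.default_rng(1)
latents = rng.normal(size=(1000, 10))
labels = (latents[:, 3] + 0.5 * rng.normal(size=1000) > 0).astype(int)
order = spectrum(latents, labels)
print("photos at the 'least' end:", order[:5])
print("photos at the 'most' end:", order[-5:])
```

Looking at which photos pile up at either end of such a spectrum is exactly the kind of inspection that makes labeling biases, like the one described next, easy to spot.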

Higgins then revealed that her algorithm was able to do the same for the subjective qualities that also find their way into these datasets, ultimately teaching human biases to facial recognition AI. Higgins showed how images that people had labeled as “attractive” created a spectrum that pointed straight toward the photos of young, pale women. That means any AI system trained on these photos (and there are many of them out there) now holds the same racist view as the people who labeled the photos in the first place: that white people are more attractive.

This creative new algorithm is already better than we are at finding the human biases hidden in other algorithms, so engineers can go in and remove them.

So while it can’t replace artists quite yet, Higgins' team's work is a pretty big step towards getting AI to imagine more like a human and less like an algorithm.

More on Artificial General Intelligence: Advanced Artificial Intelligence Could Run The World Better Than Humans Ever Could

