Catastrophic Forgetting — Forgotten

The latest research from DeepMind is proving how inspired the idea of modeling neural networks on the human mind truly was. The strength of the association between the human brain and its computational models is revealing weaknesses in our own minds and teaching us how to overcome them.

Google's engineers, inspired by neuroscience, have created an artificial intelligence (AI) using an algorithm that can hang onto knowledge as it moves from task to task, spinning the straw of raw memory into gold that stays with the program and forms long-term experience. Human minds do this after a fashion, but we are not as adept at discerning what is important: you may recall the song lyrics you heard when you first rode a bicycle as vividly as you recall important information from your career successes.

Neuroscience has gradually pulled back the curtain, making the long-term pruning mechanisms of the human mind plainer to us and revealing how our minds protect the most important information from being overwritten by less useful data. This is what DeepMind is now reproducing in its AI, as evidenced by something relatively prosaic: progress in the Atari gaming universe.

Here, the AI learns what is and isn't salient through sheer numbers and brute force. If you had to process your way through billions of experiences, you might develop a more discerning sense of what mattered, and you might be able to shape your memory to match that awareness. The AI can now do exactly that, but only because it also knows what it must hang onto.

In other words, DeepMind is getting past catastrophic forgetting, the tendency to replace crucial older information with newer memories. When it comes to the Atari challenge, this means the AI no longer needs to reinvent the wheel with every game; it can remember the wheel it invented three games ago.
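To make that failure mode concrete, here is a minimal sketch of catastrophic forgetting in plain gradient descent. The two toy regression tasks, the data, and every name in it are invented for illustration; this is not DeepMind's setup. A model is trained on one task, then on a second, and the plain update overwrites everything it learned first.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two toy regression tasks whose optimal weights conflict.
X = rng.normal(size=(200, 3))
w_true_a = np.array([1.0, 0.0, -1.0])   # "task A" target weights
w_true_b = np.array([-1.0, 2.0, 0.0])   # "task B" target weights
y_a, y_b = X @ w_true_a, X @ w_true_b

def train(w, X, y, lr=0.05, steps=500):
    """Plain gradient descent on mean squared error."""
    for _ in range(steps):
        w = w - lr * (2 / len(X)) * X.T @ (X @ w - y)
    return w

mse = lambda w, y: np.mean((X @ w - y) ** 2)

w = train(np.zeros(3), X, y_a)                          # learn task A
print("task A error after A:", round(mse(w, y_a), 3))   # near zero
w = train(w, X, y_b)                                    # then learn task B
print("task A error after B:", round(mse(w, y_a), 3))   # task A is gone
```

Nothing in the plain update distinguishes weights that matter for task A from weights that don't, so training on task B is free to overwrite all of them.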

Learning Elastic Weight Consolidation

The new DeepMind approach, elastic weight consolidation, is a marked improvement over the processes that came before it. Elastic weight consolidation means weighting the protection assigned to synapses, ranking them so that the most important are the least likely to be overwritten in the future. This way, often-used synapses associated with important skills are preserved, the least important synapses are lost first, and the neural networks don't expand to unwieldy sizes.
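In the published formulation (Kirkpatrick et al.), this weighting takes the form of a quadratic penalty that anchors each weight to its old value in proportion to an estimate of its importance, the diagonal of the Fisher information. Below is a minimal sketch of that penalty on the same kind of toy problem as the sketch above; the function names, the importance estimate, and the penalty strength `lam` are illustrative choices, not DeepMind's code.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y_a = X @ np.array([1.0, 0.0, -1.0])    # task A targets
y_b = X @ np.array([-1.0, 2.0, 0.0])    # task B targets

def train(w, X, y, lr=0.05, steps=500, anchor=None, fisher=None, lam=5.0):
    """Gradient descent on MSE, optionally with an EWC-style penalty
    (lam / 2) * sum(fisher * (w - anchor) ** 2)."""
    for _ in range(steps):
        grad = (2 / len(X)) * X.T @ (X @ w - y)
        if anchor is not None:
            # Elastic pull: important weights (large fisher) resist change.
            grad = grad + lam * fisher * (w - anchor)
        w = w - lr * grad
    return w

mse = lambda w, y: np.mean((X @ w - y) ** 2)

w_a = train(np.zeros(3), X, y_a)          # learn task A first
fisher = np.mean(X ** 2, axis=0)          # crude per-weight importance estimate
w_ewc = train(w_a, X, y_b, anchor=w_a, fisher=fisher)
print("task A error with EWC:", round(mse(w_ewc, y_a), 3))
print("task B error with EWC:", round(mse(w_ewc, y_b), 3))
```

Because these two toy tasks directly conflict, the penalty trades some task-B accuracy for keeping most of task A; in realistic settings, where tasks share structure, the anchored network can stay close to optimal on both.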

These advances in AI neural networks are holding a mirror up to us, allowing us to see the foibles of human memory in more useful ways. As AI networks begin to protect synapses systematically, we gain a blueprint for studies of specific human brain regions. Evolution in its natural state is too slow for us to watch and learn from; the evolution of an AI system, on the other hand, can offer us a model and organizational strategy for memory preservation.

Elastic weight consolidation matters to both artificial and human intelligence because the ability it models, learning task after task without forgetting, is at the heart of each. While the new DeepMind algorithm mimics the synaptic consolidation of the human brain, it may also improve on the process, finding new ways to efficiently maintain existing knowledge while absorbing new data. As AI systems take on creative challenges with greater success (areas we believed to be the domain of humankind), we can learn from the ways they solve problems of lost knowledge and faulty overwriting, or, as such problems are known to humans, forgetting.

Because neural networks are modeled after our own brains, they are engaging in a kind of “psychiatric plagiarism” to bring about their own evolution. The recreation of natural evolution in code provides us with a window into our own neurological development. This is the beauty and possibility that AI is presenting to us: the ability to design new studies and experiments for our own brains.

