Mimicking The Human Brain
Since 2014, DeepMind has been playing Atari video games. Initially, its machine learning systems could learn to win games and beat human scores, but they couldn't remember how they had done it. Therefore, a new neural network had to be created for each Atari game, and DeepMind never benefited from its own experience, until now.
A team of researchers from DeepMind and Imperial College London created an algorithm that bestows memory on the system, allowing it to learn, retain knowledge, and reuse it. The system was tested on sequences of supervised learning and reinforcement learning tasks, learning one after another.
In the human brain, synaptic consolidation is the basis for continual learning. Saving learned knowledge and transferring it from task to task is critical to the way humans learn, and the inability to do that has been a key failure in machine learning. The algorithm, called "elastic weight consolidation" (EWC), identifies the parts of the network that were most useful for playing and winning past games, then protects those parts as the system learns new ones.
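The idea of protecting useful parameters can be sketched as a penalty term. The following is an illustrative toy, not DeepMind's implementation: each parameter gets an importance score (in the EWC paper this is the diagonal of the Fisher information, approximated here by squared gradients), and moving an important parameter away from its old value is penalized heavily, while unimportant parameters remain free to change.

```python
import numpy as np

def fisher_diagonal(grads):
    """Approximate per-parameter importance as the mean squared gradient
    observed while training on the old task (a common EWC approximation)."""
    return np.mean(np.square(grads), axis=0)

def ewc_penalty(theta, theta_old, fisher, lam=1.0):
    """Quadratic EWC penalty: (lam/2) * sum_i F_i * (theta_i - theta_old_i)^2.
    Added to the new task's loss so important weights stay near their old values."""
    return 0.5 * lam * np.sum(fisher * np.square(theta - theta_old))

# Toy demo (values are ours): parameter 0 mattered for the old task
# (large gradients), parameter 1 did not.
grads_old_task = np.array([[2.0, 0.1],
                           [2.0, -0.1]])
theta_old = np.array([1.0, 1.0])            # weights after the old task
fisher = fisher_diagonal(grads_old_task)    # approx. [4.0, 0.01]

# Shifting the important weight costs far more than shifting the unimportant one.
move_important = ewc_penalty(np.array([2.0, 1.0]), theta_old, fisher)
move_unimportant = ewc_penalty(np.array([1.0, 2.0]), theta_old, fisher)
```

In this sketch, `move_important` comes out hundreds of times larger than `move_unimportant`, which is the whole trick: the new task's optimizer is free to reuse weights that didn't matter before, but is elastically pulled back whenever it tries to overwrite weights that encoded the old skill.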
Higher Level Applications
The system is impressive, but isn't perfect yet. DeepMind can now retain the most important information from its previous experiences in order to learn, but despite that bank of experiences, it still can't perform as well as a neural network trained on a single game. Efficiency of learning is the next step if machine learning is to match, or eventually eclipse, real-world learning.
Learning tasks in succession without forgetting is a core component of any intelligence, biological or artificial, and elastic weight consolidation gives machines a version of that ability. The new DeepMind algorithm supports continual learning much as synaptic consolidation does in the human brain, which is the next step for AI in mastering more challenging tasks and learning contexts. In other words, it will mean that AI systems are better able to take on creative and intellectual challenges previously thought to be the sole province of humankind.