In Brief
  • Using a new system called a Differentiable Neural Computer (DNC), Google's DeepMind can draw on external data stores as it learns.
  • A DNC allows DeepMind to come up with new solutions without having to learn all possible answers, bringing us closer to a computer with the ability to reason.

Differentiable Neural Computer

The artificial intelligence that beat human players in Go can now learn from its own memory. Google’s DeepMind AI, according to its programmers, is now capable of intelligently building on what’s already inside its memory.

DeepMind is now equipped with a system called the Differentiable Neural Computer (DNC). It’s a hybrid system that pairs the vast data bank of a conventional computer with a neural network. “These models… can learn from examples like neural networks, but they can also store complex data like computers,” DeepMind researchers Alexander Graves and Greg Wayne wrote in a blog post.

The DNC combines AI’s neural network approach with external memory (much like an external hard drive). Neural networks simulate brain capabilities using massive numbers of interconnected nodes that work dynamically. The DNC continually optimizes its responses, becoming more and more accurate over time, without any extra help.
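To make the "neural network plus external memory" idea concrete, here is a minimal sketch of content-based memory reading, the differentiable lookup mechanism at the heart of the DNC design. All names, dimensions, and values below are illustrative assumptions, not DeepMind's actual implementation: a controller emits a query key, and the read is a soft weighted sum over memory slots rather than a hard address lookup, which is what makes the whole system trainable from examples.

```python
import numpy as np

def cosine_similarity(memory, key):
    # Similarity between the query key and each memory row.
    norms = np.linalg.norm(memory, axis=1) * np.linalg.norm(key)
    return memory @ key / (norms + 1e-8)

def softmax(scores, beta=5.0):
    # beta (a "key strength") controls how sharply the read focuses
    # on the best-matching slot.
    e = np.exp(beta * (scores - scores.max()))
    return e / e.sum()

# A tiny external memory: 4 slots, each a 3-dimensional vector.
memory = np.array([
    [1.0, 0.0, 0.0],
    [0.0, 1.0, 0.0],
    [0.0, 0.0, 1.0],
    [0.5, 0.5, 0.0],
])

# The controller network would emit this key; here it is hand-picked.
key = np.array([0.9, 0.1, 0.0])

# Reading is a differentiable weighted sum over all slots.
weights = softmax(cosine_similarity(memory, key))
read_vector = weights @ memory
```

Because every step is smooth (cosine similarity, softmax, weighted sum), gradients can flow through the memory access, so the network can *learn* where to read and write instead of being programmed with fixed addresses.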

Like the Human Brain

What’s fascinating about the DNC is that it works out information on its own, effectively juggling huge amounts of data in its memory all at once. In short, it functions like a human brain — using data from memory to figure out new information.

That’s how we work, right?

One way our brain makes decisions is by using experience — memory! DeepMind can do this now, thanks to the DNC. Explaining it in Nature, the researchers said:

Like a conventional computer, [a DNC] can use its memory to represent and manipulate complex data structures, but, like a neural network, it can learn to do so from data. We show that it can learn tasks such as finding the shortest path between specified points and inferring the missing links in randomly generated graphs, and then generalize these tasks to specific graphs such as transport networks and family trees. When trained with reinforcement learning, a DNC can complete a moving blocks puzzle in which changing goals are specified by sequences of symbols.
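For contrast, here is the shortest-path task the researchers mention solved the classical way — ordinary breadth-first search over a toy graph (the graph itself is an assumption made up for illustration). The point of the DNC result is that it learns comparable behavior from examples alone, rather than being handed an explicit algorithm like this one.

```python
from collections import deque

def shortest_path(graph, start, goal):
    """Breadth-first search: returns a shortest path as a list of nodes."""
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == goal:
            return path
        for neighbor in graph.get(node, []):
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append(path + [neighbor])
    return None  # no route exists

# A toy transport-style network (illustrative, not from the paper).
graph = {
    "A": ["B", "C"],
    "B": ["D"],
    "C": ["D", "E"],
    "D": ["F"],
    "E": ["F"],
}

route = shortest_path(graph, "A", "F")  # → ["A", "B", "D", "F"]
```

A conventional computer executes this search because a programmer wrote it; a DNC, trained only on example graphs and routes, has to discover an equivalent procedure and store the graph in its own external memory.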

These connections come easily to the human brain, of course, but this is a first for AI. Without having learned every possible answer beforehand, DeepMind can work things out independently using just its memory.

It’s a step towards AI that can reason by itself.