A New Kind of Intelligence

Google DeepMind's artificial intelligence has successfully navigated a 3D maze without cheating: it didn't have access to the digital world's internal code. Instead, it walked around walls and into rooms by "sight," as New Scientist reports.

DeepMind recently made headlines for mastering the ancient game of Go. Now it has tackled a 3D maze much like the one in the 1993 game Doom. In the video below, the AI navigates the maze purely by "sight," looking at the screen and deciding what to do next, just as a human player would.

How It Works

The artificial intelligence relied on a technique called "reinforcement learning," which rewards the system for taking actions that improve its score. This was combined with a deep neural network that analyzes the game screen and learns patterns from it. The system was also able to look back into its memory and study past scenarios, a technique called "experience replay."
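To make the idea concrete, here is a minimal Python sketch of an experience replay buffer: the agent stores each (state, action, reward, next state) step as it plays and later trains on random batches drawn from that memory. The class and parameter names are illustrative and not taken from DeepMind's code.

```python
import random
from collections import deque

class ReplayBuffer:
    """Illustrative memory of past game steps for experience replay."""

    def __init__(self, capacity=100_000):
        # Oldest experiences are dropped automatically once the buffer is full.
        self.memory = deque(maxlen=capacity)

    def add(self, state, action, reward, next_state, done):
        # Record one step of play: what the agent saw, what it did,
        # the score change it received, and what it saw next.
        self.memory.append((state, action, reward, next_state, done))

    def sample(self, batch_size=32):
        # Revisit a random batch of past scenarios to train the network on.
        return random.sample(list(self.memory), batch_size)
```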

Despite the impressive results, the team admitted that experience replay has drawbacks. "It uses more memory and more computation per real interaction," the DeepMind team writes in its latest paper. So the researchers came up with a technique called asynchronous reinforcement learning, in which multiple versions of an AI tackle a problem in parallel and compare their experiences.
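As a rough illustration of the asynchronous approach (a toy sketch, not DeepMind's implementation), the Python snippet below launches several worker threads, each playing its own copy of a game and pushing small updates into one shared set of parameters, so no replay memory is needed.

```python
import random
import threading

shared_weights = {"w": 0.0}  # one set of parameters shared by all workers
lock = threading.Lock()

def worker(worker_id, episodes=100):
    """Each worker interacts with its own copy of the environment."""
    for _ in range(episodes):
        # Stand-in for playing an episode and computing an update;
        # a real agent would derive this from the game screen and score.
        local_update = random.uniform(-1.0, 1.0)
        with lock:
            # Apply this worker's update to the shared parameters.
            shared_weights["w"] += 0.01 * local_update

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print("Shared parameters after asynchronous training:", shared_weights)
```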

The greatest challenge came from a 3D maze game called Labyrinth, a test bed for DeepMind's tech that resembles Doom without the shooting. The system is rewarded for finding apples and portals, the latter of which teleport it elsewhere in the maze, and it has to score as high as possible in 60 seconds.
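As a purely hypothetical sketch of how such scoring might be expressed in code (the point values below are made up for illustration), the maze's reward scheme could look like this:

```python
def step_reward(event):
    """Hypothetical scoring for the maze task."""
    if event == "apple":
        return 1    # small reward for collecting an apple
    if event == "portal":
        return 10   # bigger reward for a portal, which also teleports the agent
    return 0        # everything else scores nothing
```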

"This task is much more challenging than [the driving game] because the agent is faced with a new maze in each episode and must learn a general strategy for exploring mazes," the team writes. It succeeded, learning a "reasonable strategy for exploring random 3D mazes using only a visual input."

Watch it play the driving game in the video below.
