DeepMind “Never Found the Limit” of AlphaGo Zero’s Intelligence
"We needed the computers for something else."
Alphabet’s DeepMind has been making incredible strides in the field of artificial intelligence (AI). Its systems can create pictures based on sentences, play StarCraft, and explore strange environments. They have also developed forms of memory and can imagine solutions to problems.
DeepMind created AlphaGo, an AI, to conquer Go, one of the oldest board games in the world — an incredibly popular game known for being even more complex than chess. What better game to test an AI on?
AlphaGo learned to play by studying thousands of Go games from players of all skill levels. It went on to beat reigning Go champions, including Lee Sedol, who has won 18 world titles. A later version of the AI, AlphaGo Zero (AGZ), learned to play by challenging itself to games instead. AGZ went on to defeat AlphaGo, arguably making it the best Go player in the world.
It’s impressive that AGZ reached this level of expertise entirely on its own, without studying any human games. The team points out that skipping the human-data stage is an advantage: “for some problems this human knowledge may be too expensive, too unreliable or simply unavailable.” What’s really striking, however, is that AGZ could have gone even further in developing its skills.
DeepMind CEO and co-founder Demis Hassabis, speaking at Google’s Go North conference, said of AGZ, “We never actually found the limit of how good this version of AlphaGo could get. We needed the computers for something else.”
AGZ could be powered on again, which could help human Go players learn new moves and strategies. The AI could also be redirected to other tasks, which the researchers say could be as varied as “protein folding, reducing energy consumption or searching for revolutionary new materials.” AGZ has the potential to help build a better society, and the team is invested in developing it further.