Reading Humanity

Imagine a flying saucer lands in Times Square and an alien steps out carrying the game of Go.  He walks up to the nearest person and says the classic line – “Take me to your best player.”  Now, let’s assume that the alien spent years studying how humans play Go, watching replays of every major match.

If that were the situation, it would seem humanity was being set up for an unfair challenge.

After all, the alien had the opportunity to thoroughly prepare for playing humans, while the humans had no opportunity to prepare for playing aliens.  The humans would likely lose.  And that’s exactly what happened last month when an "alien intelligence" named AlphaGo played the human Go master, Lee Sedol.  The human lost 4 out of 5 games.  But, if we look at the big picture, it wasn’t a fair match.

Still, the media went wild, deeming the victory a historic milestone in A.I. research, an unexpected leap that took the scientific community by surprise.  I agree completely.  It was a massive leap and major milestone, but not because it demonstrated that A.I. is good at playing the game of Go.

No, this victory demonstrated that A.I. is good at playing the game of humans.

After all, AlphaGo didn’t learn to play by studying the rules and thinking up a clever strategy.  It learned by studying how people play, processing thousands upon thousands of matches to characterize how masters make moves – how they react to moves – what mistakes they’re likely to make – what gambits they’re likely to miss.  All told, the system trained by reviewing 30 million moves by expert players.  Thus, AlphaGo is not a system designed to optimize play of some abstract game.  No, it’s a system that optimizes beating humans by studying us inside and out, learning to predict what actions we’ll take, what reactions we’ll have, and what errors we’ll make.
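The training signal described above – given a board position, predict the move an expert actually made – is a standard supervised-learning setup.  As a rough illustration only (AlphaGo’s real policy network was a deep convolutional net trained on roughly 30 million positions; the toy version below is a single softmax layer on synthetic stand-in data, with all names and sizes chosen here for the sketch):

```python
import numpy as np

# Toy sketch of supervised move prediction: learn to map a board
# "position" to the move an expert made.  The data here is synthetic;
# only the training objective mirrors what the article describes.
rng = np.random.default_rng(0)
N_POSITIONS, BOARD_FEATURES, N_MOVES = 500, 64, 361  # 361 = 19x19 board points

X = rng.normal(size=(N_POSITIONS, BOARD_FEATURES))   # stand-in board encodings
true_w = rng.normal(size=(BOARD_FEATURES, N_MOVES))
y = (X @ true_w).argmax(axis=1)                      # stand-in "expert moves"

# Softmax regression trained by gradient descent on cross-entropy loss.
W = np.zeros((BOARD_FEATURES, N_MOVES))
for _ in range(200):
    logits = X @ W
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    p[np.arange(N_POSITIONS), y] -= 1.0               # gradient of cross-entropy
    W -= 0.1 * (X.T @ p) / N_POSITIONS

# Fraction of positions where the model's top move matches the "expert" move.
accuracy = ((X @ W).argmax(axis=1) == y).mean()
```

The point of the sketch is the objective, not the architecture: nothing in the loss function mentions the rules of Go or winning – only matching what humans did.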

Simply put, the A.I. did not learn to play Go – it learned to play us.

The Future of Artificial Intelligence

This is terrifying.  Not because computers can beat us at the game of Go, but because from this moment forward, we will always be at a disadvantage, facing the arrival of alien intelligences that are thoroughly prepared to out-play us, while we know little about playing them.  These "alien intelligences," whether they are named AlphaGo or Alpha-Finance or Alpha-Geopolitical-Conflict, will beat us at our own games.

This suggests a future where we humans are easily manipulated by intelligent systems that can predict our tendencies and inclinations, our preferences and biases, our actions and reactions, all while finding our weaknesses and exploiting them.  That is what this AlphaGo milestone really means.  And we should all be very concerned.

According to published research, the AlphaGo system was able to correctly predict what move a human would make 57% of the time. Imagine if you could correctly predict what a person would do 57% of the time – maybe while negotiating a business deal, or selling a product, or pushing a political agenda.

And remember, we’re not talking about predicting simple yes-no decisions, but sophisticated situations that have many possible options.  Someone with that predictive ability could use it to build an empire of political or economic power.  And that ability now exists.  Yes, it’s limited to a Go board, but that’s just a matter of implementation, not a limitation of the technology.
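A quick back-of-the-envelope comparison makes the point concrete.  The 57% figure comes from a 361-way choice, not a yes-no decision (the 19×19 board size is standard; in real play, occupied points shrink the choice set somewhat, so the true chance baseline is a bit higher than this sketch assumes):

```python
# Compare AlphaGo's reported move-prediction rate to random guessing
# over the full set of board intersections.
BOARD_POINTS = 19 * 19              # 361 possible move locations on a Go board
random_baseline = 1 / BOARD_POINTS  # ~0.28% chance of guessing the move
reported_accuracy = 0.57            # prediction rate from the published research

improvement = reported_accuracy / random_baseline
print(f"random guessing: {random_baseline:.2%}")
print(f"improvement over chance: {improvement:.0f}x")
```

Even against that rough baseline, predicting the human’s exact move 57% of the time is roughly two hundred times better than chance.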

Read part two in this series here.

ABOUT THE AUTHOR:  Louis Rosenberg, PhD is founder and CEO of Unanimous A.I., a company building intelligent technologies that keep humans in the loop.  Previously, Rosenberg was a tenured professor at California Polytechnic State University (Cal Poly).  He holds a BS, MS, and PhD from Stanford University.  His doctoral work focused on virtual reality, augmented reality, and robotics.

DISCLAIMER: The views and opinions expressed are solely those of the author. They do not necessarily represent the views of Futurism or its affiliates. 
