In Brief
  • Scientists working with Google's DeepMind AI recently tested whether AI agents were more prone to cooperation or competition.
  • The results show that the agents could go either way, depending on the situation. The next stage of the work is planned to elucidate the rationale behind the AI's decisions.

Red and Blue

Concerns over artificial intelligence (AI) have been around for some time now, and thanks to a new study by Google’s DeepMind research lab, it seems that this Terminator-esque future of intelligent machines may not be that farfetched.


Using games, a platform that Google’s DeepMind AI is deeply familiar with, researchers have been testing whether neural networks are more likely to cooperate or compete, and whether these AI are capable of understanding the motivations behind that choice.

For the research, they used two games with similar scenarios for two AI agents, red and blue.

In the first game, the agents were tasked with gathering the most apples (green) in a basic 2D graphical environment. The agents were given the option to tag one another with a laser blast that temporarily removed the tagged agent from the game. After running the scenario a thousand times, the researchers found that the agents were willing to cooperate while apples were abundant, but they turned on each other when apples grew scarce.

The researchers also found that, in a smaller network, the agents were more likely to cooperate, whereas in a larger, more complex network, the AI were quicker to sabotage one another.
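The scarcity dynamic described above can be sketched as a toy simulation. Everything here (the function name, the respawn rates, the simple tag-when-scarce policy) is an illustrative assumption, not DeepMind's actual environment or learned policies:

```python
# Toy sketch of the "Gathering" dynamic: two agents collect apples and may
# "tag" each other, freezing the rival for a number of steps. The policy
# below is a hypothetical heuristic, not a trained DeepMind agent.
def run_gathering(respawn_per_step, steps=1000, tag_timeout=20, capacity=15):
    """Return the fraction of steps on which an agent chose to tag its rival."""
    apples = 10            # apples currently on the field
    frozen = [0, 0]        # remaining freeze-out steps per agent
    tags = 0
    for _ in range(steps):
        apples = min(capacity, apples + respawn_per_step)  # apples respawn
        for i in (0, 1):
            if frozen[i] > 0:          # a tagged agent sits out this step
                frozen[i] -= 1
                continue
            rival = 1 - i
            if apples < 3 and frozen[rival] == 0:
                # Scarcity heuristic: removing the rival beats foraging.
                frozen[rival] = tag_timeout
                tags += 1
            elif apples > 0:
                apples -= 1            # otherwise, collect an apple
    return tags / steps

# Slow respawn (scarcity) produces tagging; fast respawn produces none.
scarce = run_gathering(respawn_per_step=1)
abundant = run_gathering(respawn_per_step=3)
assert scarce > abundant
```

Under abundant respawn the apple count never drops below the scarcity threshold, so the tagging branch never fires, mirroring the study's observation that aggression emerged as resources ran low.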

All is Not Lost

In the second scenario, a game called Wolfpack, the agents played as “wolves” tasked with capturing “prey.” When both wolves were close to the prey during a successful capture, the reward offered was greater. Rather than encouraging lone-wolf behavior, this incentivized the agents to work together.

In a larger network, the agents were quicker to understand that cooperation was the way to go.
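The Wolfpack payout rule can be illustrated with a small reward function. The function name, grid distances, and reward values below are hypothetical stand-ins for the study's actual reward structure:

```python
# Toy sketch of the Wolfpack reward structure: every wolf inside the capture
# radius at the moment of capture is paid, and the payout scales with how
# many wolves are nearby, so joint captures pay more than lone ones.
def wolfpack_reward(wolf_a, wolf_b, prey, capture_radius=2):
    """Return (reward_a, reward_b) for a capture at the prey's position."""
    def dist(p, q):
        # Manhattan distance on a 2D grid
        return abs(p[0] - q[0]) + abs(p[1] - q[1])

    wolves = (wolf_a, wolf_b)
    nearby = sum(1 for w in wolves if dist(w, prey) <= capture_radius)
    payout = 5 * nearby  # group captures pay proportionally more
    return tuple(payout if dist(w, prey) <= capture_radius else 0
                 for w in wolves)

# Both wolves near the prey: each earns 10.
assert wolfpack_reward((0, 0), (1, 1), (0, 1)) == (10, 10)
# Lone capture: the distant wolf earns nothing, the capturer only 5.
assert wolfpack_reward((0, 0), (9, 9), (0, 1)) == (5, 0)
```

Because a joint capture pays each wolf more than a solo capture pays the lone capturer, an agent maximizing its own reward still learns to stay near its partner, which is the cooperative pressure the study describes.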

The Google researchers hope that the study can lead to AI being better at working with other AI in situations with imperfect information. As such, the most practical application of this research, in the short term, is to “be able to better understand and control complex multi-agent systems such as the economy, traffic systems, or the ecological health of our planet – all of which depend on our continued cooperation,” the study says.

At the very least, the study shows that AI are capable of working together and that AI can make “selfish” decisions.

Joel Leibo, lead author of the paper, outlined the next steps in an interview with Bloomberg: “Going forward it would be interesting to equip agents with the ability to reason about other agent’s beliefs and goals.”