Who’s a good AI?
Google’s DeepMind is tapping into its inner child: scientists are training an AI to learn about the physical properties of objects by interacting with them.
In a paper currently under review, researchers at DeepMind and the University of California, Berkeley, describe how they gave the AI two tasks to complete. One task was to identify the heaviest of five blocks, and the other was to count the number of blocks stacked in a tower. In both experiments, the AI eventually worked out that it needed to interact with the blocks in order to arrive at the correct answer.
To train the AI for these experiments, researchers rewarded it when it answered correctly and penalized it when it gave a wrong answer. This technique of using a reward and punishment system is called reinforcement learning, and it has been used before to train AI to play games like StarCraft.
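The reward-and-penalty loop at the heart of reinforcement learning can be sketched in a few lines. The snippet below is an illustrative toy, not the setup from the DeepMind paper: a hypothetical agent tries to name the heaviest of three blocks, earns +1 for a correct answer and −1 for a wrong one, and gradually learns which answer pays off.

```python
import random

# Toy example only: a tiny tabular value-learning loop, not the paper's
# actual method. The hidden block weights and reward values are
# assumptions made up for illustration.
WEIGHTS = [1.0, 2.0, 5.0]       # hidden block weights; block 2 is heaviest
ACTIONS = range(len(WEIGHTS))   # action i = "answer that block i is heaviest"

def reward(action):
    """+1 for the correct answer, -1 (a penalty) otherwise."""
    return 1.0 if WEIGHTS[action] == max(WEIGHTS) else -1.0

def train(episodes=500, alpha=0.1, epsilon=0.2, seed=0):
    rng = random.Random(seed)
    q = [0.0] * len(WEIGHTS)    # estimated value of each answer
    for _ in range(episodes):
        # epsilon-greedy: mostly exploit the best-known answer,
        # but occasionally explore a random one
        if rng.random() < epsilon:
            a = rng.randrange(len(WEIGHTS))
        else:
            a = max(ACTIONS, key=lambda i: q[i])
        # nudge the estimate toward the observed reward
        q[a] += alpha * (reward(a) - q[a])
    return q

q = train()
best = max(range(len(q)), key=lambda i: q[i])  # agent's learned answer
```

After training, the highest-valued action is the heaviest block: the agent was never told which block weighs the most, only whether each guess earned a reward or a penalty.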
“Reinforcement learning allows solving tasks without specific instructions, similar to how animals or humans are able to solve problems,” the University of Sheffield’s Eleni Vasilaki tells New Scientist. “As such, it can lead to the discovery of ingenious new ways to deal with known problems, or to finding solutions when clear instructions are not available.”
Jiajun Wu at the Massachusetts Institute of Technology echoes these sentiments regarding the usefulness of reinforcement learning for AI. He sees its use in training AI to navigate a range of situations, such as traversing tough terrain. This could be especially useful for rescue robots or for rovers on other planets, increasing the effectiveness of AI in environments humans can’t safely navigate.
“Any application where machines need an understanding of the world that goes beyond passive perception could benefit from this work,” says DeepMind researcher Misha Denil. However, he also points out that concrete applications are still far off. For now, AI is going to need to keep taking a lot of baby steps before it reaches the point where it’s ready to leave the nest.