Teaching a robot how to play Jenga is a lot more difficult than it sounds.
Rather than relying on visual information alone, players have to poke, tap, and feel individual wooden blocks to choose which one to remove from the tower. But thanks to machine learning algorithms, MIT researchers were able to teach a robot how to successfully play Jenga, giving it only a basic set of instructions — an impressive victory for tactile robotics.
Monkey See, Monkey Touch
The research paper, published in the journal Science Robotics today, describes how the robot takes a thorough look at the tower to examine the state of each block. Then it figures out its next move for “successful extraction” of pieces by predicting a block’s future state.
It can either push or pull a piece, a single millimeter at a time. Force sensors continually monitor the situation, flagging when something is wrong or a tower collapse is imminent.
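That millimeter-at-a-time loop can be sketched roughly as follows. This is an illustrative toy, not the researchers' actual controller: the force threshold, step size, and sensor interface are all assumptions made up for the example.

```python
FORCE_LIMIT_N = 2.0     # assumed abort threshold (newtons) — not from the paper
STEP_MM = 1.0           # the article's "single millimeter at a time"
BLOCK_LEN_MM = 75.0     # approximate length of a Jenga block

def extract_block(read_force, move_block):
    """Nudge a block out one millimeter at a time, retracting it
    if the measured resistance suggests a collapse is imminent.

    read_force:  callable returning the current fingertip force (N)
    move_block:  callable moving the block by a signed distance (mm)
    """
    moved = 0.0
    while moved < BLOCK_LEN_MM:
        if read_force() > FORCE_LIMIT_N:
            move_block(-moved)   # block is load-bearing: push it back in
            return False
        move_block(STEP_MM)
        moved += STEP_MM
    return True              # block slid free without resistance
```

With a fake sensor that always reads low force, `extract_block` succeeds; with one that reads high force, it backs off immediately — mimicking how the real robot aborts risky moves.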
The robot can learn from past mistakes, adjusting its behavior after the tower collapses by “building nuggets of experience,” as senior author Alberto Rodriguez told Popular Science. In other words, it knows what a successful move “feels” like.
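The idea of “building nuggets of experience” — grouping similar past moves and judging a new move by the group it most resembles — can be illustrated with a deliberately simplified sketch. The real system fuses vision and touch with a far more sophisticated model; the greedy one-dimensional clustering below is a hypothetical stand-in for the concept only.

```python
def cluster_experiences(experiences, threshold=1.0):
    """Greedily group (force, outcome) records whose forces are close.

    experiences: list of (measured_force, move_succeeded) pairs
    Returns a list of clusters: {"mean": float, "records": [...]}.
    """
    clusters = []
    for force, ok in experiences:
        for c in clusters:
            if abs(force - c["mean"]) < threshold:
                c["records"].append((force, ok))
                c["mean"] = sum(f for f, _ in c["records"]) / len(c["records"])
                break
        else:
            clusters.append({"mean": force, "records": [(force, ok)]})
    return clusters

def feels_successful(force, clusters):
    """Predict success from the nearest cluster's past outcomes."""
    nearest = min(clusters, key=lambda c: abs(force - c["mean"]))
    outcomes = [ok for _, ok in nearest["records"]]
    return sum(outcomes) / len(outcomes) > 0.5
```

After a few recorded pushes — gentle ones that worked, forceful ones that toppled the tower — a new low-force reading lands in the “successful” cluster, which is what knowing what a good move “feels” like amounts to here.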
The robot plays similarly to how we humans play Jenga: we come up with a strategy, predict the outcome — avoiding the collapse of the tower in the process — and figure out which piece to remove through feel.
It’s a great and playful example of the power of machine learning. But the researchers’ robot won’t win any major Jenga competitions.
It’s “good enough so that it could play against a human,” but won’t “achieve superhuman performance,” Rodriguez told Popular Science.
READ MORE: MIT is teaching a robot to beat you at Jenga [Popular Science]
More on machine learning: Hellishly Hard New Game Is Specifically Designed to Confound AI