While some researchers attempt to build artificial intelligences (AIs) that can solve problems humans might not have even thought of yet, others are focused on creating ones that do something most of us take for granted: pick things up.

For a robot, knowing how to properly grasp and lift an object is no easy task. To address this issue, researchers at the University of California, Berkeley, trained a deep learning system on a cloud-based data set of more than a thousand objects, exposing it to each one's 3D shape and appearance, as well as the physics of grasping it.

Afterward, they tested their system on physical objects that weren't included in its digital training set. When the system predicted a better than 50 percent chance of successfully picking up a new object, it actually succeeded 98 percent of the time, all without having trained on any objects outside of the virtual world.
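
As a rough illustration of that decision rule, here is a minimal Python sketch of how a grasp planner might attempt a lift only when its predicted success probability clears the 50 percent mark. The model interface (a scikit-learn-style predict_proba), the grasp representation, and the execute_grasp callback are all hypothetical stand-ins for illustration, not the Berkeley team's actual code.

```python
# Minimal sketch of confidence-thresholded grasping.
# The model, its inputs, and the robot interface are hypothetical stand-ins.

import numpy as np

# Attempt a grasp only above this predicted success probability.
GRASP_CONFIDENCE_THRESHOLD = 0.5


def predict_grasp_success(model, depth_image: np.ndarray, grasp_pose: np.ndarray) -> float:
    """Return the model's estimated probability that a grasp will succeed.

    `model` is assumed to be a classifier trained purely on simulated objects
    (3D shape, appearance, and grasp physics), never on real-world data.
    """
    features = np.concatenate([depth_image.ravel(), grasp_pose.ravel()])
    return float(model.predict_proba(features[None, :])[0, 1])


def maybe_grasp(model, depth_image, candidate_poses, execute_grasp):
    """Score candidate grasps and execute the best one only if the
    predicted success probability clears the threshold."""
    scored = [(predict_grasp_success(model, depth_image, pose), pose)
              for pose in candidate_poses]
    best_prob, best_pose = max(scored, key=lambda item: item[0])
    if best_prob > GRASP_CONFIDENCE_THRESHOLD:
        return execute_grasp(best_pose)  # robot attempts the lift
    return None  # skip: the model is not confident enough
```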

The researchers have submitted their work for publication. They plan to publicly release their data set, which should help others create their own dexterous robots and perhaps even inspire a few innovators to think of other ways to use the virtual world for training AI systems.

“It's hard to collect large data sets of robotic data,” Stefanie Tellex, an assistant professor specializing in robot learning at Brown University, explained to MIT Technology Review. “This paper is exciting because it shows that a simulated data set can be used to train a model for grasping. And this model translates to real successes on a physical robot.”

