Humans have the ability to think about their actions before acting. For example, a person about to kick a ball may wonder where the ball will go and whether they’ll have to move to the ball’s new location. Robots (especially those not equipped with advanced artificial intelligence) are typically incapable of this, as they’re often programmed to perform simple, fixed tasks.
A team of researchers at the University of California, Berkeley, has determined that robots can be capable of such perception. To prove it, they’ve developed a new robotic learning technology that enables robots to think ahead in order to “figure out how to manipulate objects they have never encountered before.”
The team has taken to calling this technology “visual foresight” — but no, it doesn’t give robots the ability to predict the future. At least not yet.
The Berkeley researchers applied the technology to a robot called Vestri, enabling it to predict what its cameras will see several seconds into the future. Equipped with this new foresight, Vestri demonstrated the ability to move small objects around on a table without touching or knocking over nearby obstacles. Most impressive, the technology allowed the robot to perform the task without human input, supervision, or prior knowledge of physics.
“In the same way that we can imagine how our actions will move the objects in our environment, this method can enable a robot to visualize how different behaviors will affect the world around it,” explained Sergey Levine, assistant professor in Berkeley’s Department of Electrical Engineering and Computer Sciences, whose lab developed the technology. “This can enable intelligent planning of highly flexible skills in complex real-world situations.”
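The planning idea Levine describes can be sketched as a simple loop: sample candidate action sequences, let a learned video-prediction model imagine each outcome, and keep the sequence whose predicted result lands closest to the goal. This is only a toy illustration of that idea, not the team’s actual planner; the `predict` function here is a hypothetical stand-in for the learned model.

```python
import random

def plan_push(predict, current_obs, goal_pos, n_candidates=100, horizon=5):
    """Toy visual-foresight planning loop.

    `predict` is a placeholder for a learned prediction model: it maps
    (observation, action sequence) -> predicted object position. The loop
    samples random pushing actions, imagines where each sequence would
    leave the object, and returns the best-scoring sequence.
    """
    best_actions, best_cost = None, float("inf")
    for _ in range(n_candidates):
        # Each action is a small 2D push, sampled uniformly at random.
        actions = [(random.uniform(-1, 1), random.uniform(-1, 1))
                   for _ in range(horizon)]
        px, py = predict(current_obs, actions)
        # Cost: squared distance between predicted and desired position.
        cost = (px - goal_pos[0]) ** 2 + (py - goal_pos[1]) ** 2
        if cost < best_cost:
            best_cost, best_actions = cost, actions
    return best_actions
```

In practice this kind of planner would re-plan after every executed action, feeding the new camera observation back into the model rather than trusting a single imagined rollout.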
Visual foresight is based on “convolutional recurrent video prediction,” or dynamic neural advection (DNA). According to the team, DNA-based models are able to predict how the pixels in an image will move from one frame to another based on what the robot does. As Chelsea Finn, a doctoral student in Levine’s lab and inventor of the original DNA model, explained, robots like Vestri can now “learn a range of visual object manipulation skills entirely on their own.”
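The core mechanic of a DNA-style model can be illustrated with a few lines of NumPy: for every pixel, the model outputs a normalized weighting over a small neighborhood in the previous frame, and the next frame is formed by mixing pixels according to those weights. In the real system a recurrent neural network predicts these weightings from past frames and the robot’s actions; in this sketch they are hand-specified, purely to show how pixel motion is expressed.

```python
import numpy as np

def advect_frame(frame, kernels):
    """Predict the next frame by moving pixels from the previous one.

    frame:   (H, W) grayscale image.
    kernels: (H, W, k, k) array; for each output pixel, a normalized
             weighting over a k x k neighborhood of the previous frame
             (the per-pixel "where did this pixel come from" distribution).
    """
    H, W = frame.shape
    k = kernels.shape[-1]
    r = k // 2
    # Edge-pad so neighborhoods at the border stay in bounds.
    padded = np.pad(frame, r, mode="edge")
    out = np.zeros((H, W), dtype=float)
    for i in range(H):
        for j in range(W):
            patch = padded[i:i + k, j:j + k]
            out[i, j] = np.sum(patch * kernels[i, j])
    return out
```

For example, kernels that place all their weight on each pixel’s left neighbor shift the whole image one pixel to the right, which is exactly the kind of frame-to-frame pixel motion the model learns to predict from the robot’s actions.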
Frederik Ebert, a graduate student in Levine’s lab who worked on the project, compared their work with robots to the way humans learn to interact with objects in their environment:
“Humans learn object manipulation skills without any teacher through millions of interactions with a variety of objects during their lifetime,” said Ebert. “We have shown that it is possible to build a robotic system that also leverages large amounts of autonomously collected data to learn widely applicable manipulation skills, specifically object pushing skills.”
Levine notes that Vestri’s capabilities are still somewhat limited, though additional work is underway to improve visual foresight. One day, the technology could help self-driving cars on the road, better equipping them to handle new situations and unfamiliar objects.
The technology needs various improvements before that becomes possible, though, such as more refined video prediction and methods for gathering more targeted video data. With those advances, robots may be able to perform more complex tasks, such as lifting and placing objects or handling soft, deformable objects like cloth or rope. Perhaps one day you won’t even need to fold your own laundry — your robot assistant could do it for you.