"Toddler" Robot

For a baby, knowing how to stand, let alone walk, represents a huge learning achievement. Such a feat has yet to be perfectly duplicated in machines, but we're getting there. Darwin is a robot at the University of California, Berkeley, and instead of being given specific programs for new tasks, it is programmed to learn how to perform them. The robot learns a new task through a process similar to the neurological processes behind childhood learning. Like a toddler, Darwin tries to learn how to stand, how to move its hand in reaching motions, and how to stay upright when the ground beneath it tilts.

Darwin

Darwin is controlled by several simulated neural networks, built with deep-learning techniques, that mimic the learning process in human brains. To perform a new task, the robot first runs a series of simulations to train a high-level deep-learning network how to carry it out, a process the researchers liken to imagination. A second deep-learning network then executes the task while responding to the dynamics of the robot's joints and the challenges of interacting with the real world. This second network is needed because, when the first network tries to move a leg, for example, friction at the point of contact with the ground can throw the robot off balance and cause it to fall.
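To make the two-network idea concrete, here is a minimal, hypothetical sketch in PyTorch. It is not the Berkeley team's actual code: the network names (Policy, Corrector), the toy simulator, the "real-world" friction model, and all sizes and learning rates are illustrative assumptions. The point is only the structure: a high-level network is trained entirely in simulation ("imagination"), and a second network learns to adjust its commands against messier real-world dynamics.

```python
# Hypothetical two-network sketch (not Darwin's actual code).
import torch
import torch.nn as nn

class Policy(nn.Module):
    """High-level network: maps a desired pose to joint commands."""
    def __init__(self, n_joints=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_joints, 64), nn.ReLU(), nn.Linear(64, n_joints))

    def forward(self, target_pose):
        return self.net(target_pose)

class Corrector(nn.Module):
    """Low-level network: adjusts commands using the sensed joint state."""
    def __init__(self, n_joints=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * n_joints, 64), nn.ReLU(), nn.Linear(64, n_joints))

    def forward(self, command, sensed_state):
        # Residual correction on top of the high-level command.
        return command + self.net(torch.cat([command, sensed_state], dim=-1))

def simulate(command):
    # Idealised simulator: joints reach the commanded angles exactly.
    return command

def real_world(command):
    # Stand-in for real dynamics: friction scales and perturbs the motion.
    return 0.8 * command + 0.05 * torch.randn_like(command)

n_joints = 4
policy, corrector = Policy(n_joints), Corrector(n_joints)

# Stage 1 ("imagination"): train the policy entirely in simulation.
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
for _ in range(2000):
    target = torch.rand(32, n_joints)
    loss = ((simulate(policy(target)) - target) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# Stage 2: train the corrector against the messier "real" dynamics.
opt = torch.optim.Adam(corrector.parameters(), lr=1e-3)
for _ in range(2000):
    target = torch.rand(32, n_joints)
    command = policy(target).detach()
    sensed = real_world(command)            # what the joints actually did
    adjusted = corrector(command, sensed)   # low-level correction
    loss = ((real_world(adjusted) - target) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

In this sketch the corrector never changes the high-level policy; it only learns a residual adjustment from the mismatch between commanded and sensed motion, which loosely mirrors how the second network described above compensates for friction and other real-world effects.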

Such a technique is crucial for machine intelligence, because a robot may not always have time for extensive trial and error, and simulations lack the complexities of the real world. The new approach could prove useful for any robot working in real environments that simulations fail to capture.
