Researchers hope that people who have lost the use of their limbs will one day control robotic prostheses using brain-computer interfaces, as effortlessly as Luke Skywalker did in “Star Wars.”
The problem is that brain signals are tricky to decode, meaning that existing brain-computer interfaces that control robotic limbs are often slow or clumsy.
But that could be changing. Last week, a team of doctors and neuroscientists published a paper in the journal Nature Medicine describing a brain-computer interface that uses a neural network to decode brain signals into precise movements of a lifelike, mind-controlled robotic arm.
The researchers took data from a 27-year-old quadriplegic man who had an array of microelectrodes implanted in his brain and fed it into a series of neural networks: artificial intelligence systems, loosely modeled on the brain’s own circuitry, that excel at finding patterns in large sets of information.
After training sessions over the course of nearly two and a half years, the neural networks got pretty good at identifying which brain signals were related to specific muscular commands and how to relay them to the robotic limb.
Not only did the neural net let the patient move the robotic arm with better accuracy and less delay than existing systems, but it performed even better when the researchers let it train itself. Without any hints from them, the neural net learned more effectively which brain signals corresponded to which arm movements.
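To give a rough feel for what “learning which brain signals correspond to which arm movements” means, here is a heavily simplified toy sketch: a linear decoder trained by gradient descent to map simulated electrode firing rates to intended movement commands. Everything here is hypothetical (the channel count, the simulated data, the linear model); the study’s actual system uses far more sophisticated neural networks trained on real microelectrode recordings.

```python
# Toy sketch only: learn a mapping from simulated firing rates to
# intended 2-D movement. All numbers and names are made up for
# illustration and do not reflect the study's actual decoder.
import random

random.seed(0)

N_CHANNELS = 8        # pretend microelectrode channels
LEARNING_RATE = 0.01

# Hidden "true" relationship the decoder must discover from data alone.
true_weights = [[random.uniform(-1, 1) for _ in range(N_CHANNELS)]
                for _ in range(2)]

def simulate_trial():
    """One trial: noisy firing rates plus the intended (vx, vy) movement."""
    rates = [random.uniform(0, 1) for _ in range(N_CHANNELS)]
    intent = [sum(w * r for w, r in zip(row, rates)) + random.gauss(0, 0.05)
              for row in true_weights]
    return rates, intent

def predict(weights, rates):
    """Decode firing rates into a movement command."""
    return [sum(w * r for w, r in zip(row, rates)) for row in weights]

def train(n_trials=5000):
    """Fit decoder weights by stochastic gradient descent on squared error."""
    weights = [[0.0] * N_CHANNELS for _ in range(2)]
    for _ in range(n_trials):
        rates, intent = simulate_trial()
        pred = predict(weights, rates)
        for axis in range(2):
            err = pred[axis] - intent[axis]
            for ch in range(N_CHANNELS):
                weights[axis][ch] -= LEARNING_RATE * err * rates[ch]
    return weights

def mean_error(weights, n_trials=200):
    """Average squared decoding error over fresh trials."""
    total = 0.0
    for _ in range(n_trials):
        rates, intent = simulate_trial()
        pred = predict(weights, rates)
        total += sum((p - t) ** 2 for p, t in zip(pred, intent))
    return total / n_trials

untrained = [[0.0] * N_CHANNELS for _ in range(2)]
trained = train()
print(mean_error(untrained) > mean_error(trained))  # training reduces error
```

The point of the sketch is the shape of the problem, not the method: the decoder starts with no knowledge of which channels matter, and repeated exposure to signal-and-movement pairs is what teaches it the mapping.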
With the neural net, the volunteer in the experiment was able to pick up and manipulate three small objects with the robotic hand — an ability that’s easily taken for granted but often eludes those who rely on prosthetic limbs to navigate daily life.
READ MORE: Building a better brain-computer interface [MedicalXpress]