A new brain-computer interface device decodes brain activity to figure out what someone is trying to say, and uses that data to synthesize full, audible sentences.
The device is far from perfect and the research is still in its early stages, but according to Scientific American it is the first to recreate a full sentence in a way that listeners could understand, offering a ray of hope for people who've lost the ability to communicate due to strokes or other conditions.
The University of California, San Francisco researchers behind the device found that trying to directly translate the brain’s behavior into audible speech was too complex, according to research published in the journal Nature on Thursday.
Instead, they used artificial intelligence to correlate the brain signals sent to participants' vocal tracts with specific vocabulary, ultimately simulating the vocal tract's movements to generate realistic-sounding words. In a test run, the device was even able to synthesize speech while people silently mouthed words.
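The two-stage idea described above — first decoding neural activity into vocal-tract movements, then turning those movements into sound — can be sketched very roughly in code. Everything below is hypothetical: the dimensions, the toy data, and the simple linear maps are stand-ins (the actual study used trained recurrent neural networks and real cortical recordings), but the structure shows why splitting the problem in two is simpler than mapping brain activity straight to audio.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: channels of neural activity, articulatory
# features (lip, tongue, jaw trajectories), and acoustic features.
N_NEURAL, N_ARTIC, N_ACOUSTIC = 64, 12, 32

# Stand-in "learned" weights; a real system would train these on
# recorded brain activity paired with speech.
W_artic = rng.normal(size=(N_ARTIC, N_NEURAL)) * 0.1
W_acoustic = rng.normal(size=(N_ACOUSTIC, N_ARTIC)) * 0.1

def decode_articulation(neural):
    """Stage 1: neural activity (time, N_NEURAL) -> vocal-tract
    movements (time, N_ARTIC)."""
    return neural @ W_artic.T

def synthesize_acoustics(artic):
    """Stage 2: vocal-tract movements (time, N_ARTIC) -> acoustic
    features (time, N_ACOUSTIC) that a vocoder could turn into audio."""
    return artic @ W_acoustic.T

# 100 time steps of toy neural data standing in for a recording.
neural = rng.normal(size=(100, N_NEURAL))
artic = decode_articulation(neural)
acoustics = synthesize_acoustics(artic)
print(acoustics.shape)  # (100, 32)
```

The intermediate articulatory representation is the key design choice: vocal-tract movement is a much lower-dimensional, more stereotyped signal than raw audio, which is what made the direct brain-to-speech mapping the researchers first tried so much harder.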
People who listened to and tried to transcribe the machine-generated sentences misunderstood at least one word more than half the time, but the fact that they ever got it right represents progress over existing systems.
“For someone who’s locked in and can’t communicate at all, a few minor errors would be acceptable,” Northwestern University neuroengineer Marc Slutzky, who has pursued similar projects, told SciAm. “Obviously you’d want to [be able to] say any word you’d want to, but it would still be a lot better than having to type out words one letter at a time, which is the [current] state of the art.”
READ MORE: Scientists Take a Step Toward Decoding Thoughts [Scientific American]
More on brain-computer interfaces: This Neural Implant Accesses Your Brain Through the Jugular Vein