
New advances in brain-computer interface (BCI) technology may make speech easier than ever for people who've lost the ability to talk.

In a groundbreaking new study published in the journal Cell, researchers from Stanford University say they've found a way to decode the "inner speech" of people who can no longer vocalize, making it far easier for them to talk with friends and family than with previous BCIs, which required users to exert considerable effort when attempting to speak.

Stanford neuroscientist and coauthor Erin Kunz told the New York Times that the idea of translating inner speech stemmed from care for subjects of BCI experiments, many of whom have diseases like amyotrophic lateral sclerosis (ALS) that weaken their airway muscles and eventually make speech all but impossible.

Generally speaking, BCIs for people with ALS and other speech-inhibiting disorders require users to physically attempt to speak and let a computer do the rest of the work, but as Kunz and her colleagues noticed, participants often seemed worn down by the strain of those attempts.

What if, the scientists wondered, a BCI could skip the physical attempt entirely and translate their thoughts directly into speech?

"If we could decode that, then that could bypass the physical effort," Kunz told the NYT. "It would be less tiring, so they could use the system for longer."

Casey Harrell, an ALS patient and volunteer in a long-running BCI clinical trial, had already done the hard work of attempting speech while electrodes in his brain recorded his neural activity before becoming one of the four subjects in the inner speech portion of the study.

Last summer, Harrell's journey back to speech made headlines after his experimental BCI restored his ability to talk using only his brainwaves, his attempts at speech, and old recordings from podcast interviews he'd given before ALS made speaking impossible.

In the newer portion of the study, however, the researchers found that their computers weren't great at decoding the words he was thinking. So they went back to the drawing board and trained bespoke AI models to link thoughts to words, until the computer could translate sentences as complex as "I don't know how long you've been here" with far greater accuracy.

As they began working with something as private as thoughts, however, the researchers discovered something unexpected: sometimes the computer would pick up words the study subjects never intended to say aloud, essentially broadcasting personal thoughts that weren't meant to be shared.

"We wanted to investigate if there was a risk of the system decoding words that weren’t meant to be said aloud," Kunz told the NYT — and it seems that they got their answer.

To circumvent such an invasion of mental privacy — one of the more dystopian outcomes people fear from technologies like BCIs — the Stanford team selected a unique "inner password" that would turn the decoding on and off. This mental safe word would have to be unusual enough that the computer wouldn't erroneously pick up on it, so they went with "Chitty Chitty Bang Bang," the title of a 1964 fantasy novel by Ian Fleming.

Incredibly, the password seemed to work: when participants imagined it before and after whatever phrase they wanted played aloud, the computer recognized it 98.75 percent of the time.

Though this small trial was, as Kunz said, only a "proof of concept," it's still a powerful step forward, one that restores speech while protecting the privacy of those who want only some of their thoughts said aloud.

More on mind-reading: Scientists Find Evidence That Memories in Brain Are Physically Moving Around

