Ever since Tesla CEO Elon Musk announced his plans to develop a brain-computer interface (BCI) through his Neuralink startup, BCI technologies have received more attention. Musk, however, wasn’t the first to propose the possibility of enhancing human capabilities through brain-computer interfacing. A number of other startups are working toward a similar goal, including Kernel, founded by Braintree founder Bryan Johnson. Even the U.S. Defense Department’s Defense Advanced Research Projects Agency (DARPA) is working on one.
Now, according to a collaboration of 27 experts—neuroscientists, neurotechnologists, clinicians, ethicists and machine-intelligence engineers—calling themselves the Morningside Group, BCIs present a unique and rather disturbing conundrum in the realm of artificial intelligence (AI). Essentially designed to hack the brain, BCIs themselves run the risk of being hacked by AI.
“Such advances could revolutionize the treatment of many conditions, from brain injury and paralysis to epilepsy and schizophrenia, and transform human experience for the better,” the experts wrote in a comment piece in the journal Nature. “But the technology could also exacerbate social inequalities and offer corporations, hackers, governments or anyone else new ways to exploit and manipulate people. And it could profoundly alter some core human characteristics: private mental life, individual agency and an understanding of individuals as entities bound by their bodies.”
The experts offered the hypothetical example of a paralyzed man who participates in a BCI trial but isn’t fond of the research team working with him. An artificial intelligence reading his thoughts could misinterpret his dislike for the researchers as a command to cause them harm, even though he never explicitly gave such a command.
They explained it further:
Technological developments mean that we are on a path to a world in which it will be possible to decode people’s mental processes and directly manipulate the brain mechanisms underlying their intentions, emotions and decisions; where individuals can communicate with others simply by thinking; and where powerful computational systems linked directly to people’s brains facilitate their interactions with the world such that their mental and physical abilities are greatly enhanced.
Concerns of Ethics in Artificial Intelligence
In order to prepare for this eventuality, the Morningside Group proposed four ethical considerations that need to be addressed: privacy and consent, agency and identity, augmentation, and bias. “For neurotechnologies to take off in general consumer markets, the devices would have to be non-invasive, of minimal risk, and require much less expense to deploy than current neurosurgical procedures,” they wrote.
“Nonetheless, even now, companies that are developing devices must be held accountable for their products, and be guided by certain standards, best practices and ethical norms.” Such accountability becomes even more crucial given that, as human history shows, “profit hunting will often trump social responsibility” in the pursuit of new technology.
One of the potential uses for BCIs is in the workplace. As Luke Tang, the general manager for AI technologies accelerator TechCode, noted in a commentary sent to Futurism: “I believe the biggest vertical in which this technology has a play is in the business setting – the brain-machine will shape our future workplaces.” Concretely, BCI technologies could improve remote collaboration, increase knowledge, and enhance communication.
On the latter point, Tang said: “Technology that can translate your thoughts into speech or actions will no doubt prove transformative to today’s tech-enabled communication methods. Brain-machine technology can lead to a faster and more accurate flow of communication.”
It’s precisely this ability to delve into a person’s thoughts that could present a challenge for BCIs as technologies like artificial intelligence become significantly more advanced. To avoid squandering the potential BCIs offer, the right ethical safeguards must be in place. “The possible clinical and societal benefits of neurotechnologies are vast,” the Morningside researchers concluded. “To reap them, we must guide their development in a way that respects, protects and enables what is best in humanity.”
Disclosure: Bryan Johnson is an investor in Futurism; he does not hold a seat on our editorial board or have any editorial review privileges.