Integrated Systems

Artificial intelligence (AI) has advanced by leaps and bounds, and it's easy to see why there's a pressing need to regulate its implementation in scenarios like international warfare. However, there is also a push to establish a framework that governs how AI might be used on a more personal basis.

A brain-computer interface might allow a paralyzed person to move a robotic arm, or a person with a spinal cord injury to control a motorized wheelchair. But what if there's a malfunction that causes an unforeseen accident? Is the user or the technology at fault?

An article published in the journal Nature describes an imminent future where "it will be possible to decode people’s mental processes and directly manipulate the brain mechanisms underlying their intentions, emotions and decisions." To ensure that this technology helps those who need it without disastrous consequences, rules and regulations are needed.

Pitfalls of Progress

While neurotechnologies aren't yet commercially commonplace, the field is evolving all the time. The 25 co-authors of the Nature article raised four primary concerns, which explain why legislation is needed now.

Some concerns are relatively obvious, like how a person must retain their own agency when interfacing with a computer, or the importance of taking proper measures to eliminate bias in the implementation of these emerging technologies. The paper also recommends that limitations be established on personal augmentation, especially in a military context.

There's also the question of privacy. A brain-computer interface offers up all kinds of new ways to harvest a person's most intimate data. Most of us would want that information to be kept private, but our current online behavior might be setting a different precedent.

"Our group shared the conviction that people often now give up privacy rights without fully realizing what they are surrendering, or what can be learned about them – or done to them – on the basis of what they have surrendered," said Sara Goering, an associate professor of philosophy at the University of Washington and co-author of the article, in correspondence with Futurism via email. "When greater access is provided to neural data and our internal brain states that, at least for now, remain a kind of 'last frontier' of fully private space, we will be giving up privacy in an even more profound way."

Brain Bill

The team behind this article certainly recognizes the immense benefits of neurotechnologies and AI, but they staunchly believe that there's work to be done if these advances are to be adopted in ways that are both ethical and socially beneficial.

"It’s tough to talk about the tech community in general, given the wide variety of players within it," said Goering. "But I would say that generally they may work to produce new devices or products that they believe people are likely to want, without fully considering how the same devices or products might be used problematically, or how they should be regulated."

She compared the situation to the way that medical doctors are trained, and the principles of ethics that they pledge to uphold. Given the enormous impact that new technologies have on society, there's perhaps an argument to be made that people working in the industry should be held to a similar standard.

Of course, lawmakers have their own role to play in ensuring that technology serves as a benefit, not a detriment – and there are already signs that regulations will be put into place.

"Without intentional efforts to create an international agreement, it is much more likely that we will have a legislation and policy made on a country-by-country basis, but our hope is that we can motivate attention to this issue at a global level," explained Goering. "A variety of national brain initiatives are already taking place within individual countries, but ethics and policy efforts within each of them stand to gain greatly from shared attention to the relevant issues."

