If we really want to prevent the rise of autonomous weapons — killer robots that can pull the trigger without a human's approval — then engineers will need to stop working toward them.
So argues Christoffer Heckman, a University of Colorado Boulder computer scientist who's funded by DARPA, the Pentagon's research division, in an essay in The Conversation. It may sound like an obvious solution, but Heckman points out that it's sometimes hard for researchers to predict how their work might get used or abused in the future.
In his essay, Heckman offered a few possible ways engineers might be able to stop the development of autonomous weaponry — but they all require unanimous support.
Heckman first considered curbing killer robots through government regulation or voluntary agreements among researchers, but argues that neither could keep up with the pace of science, and that both risk blocking engineers from developing beneficial autonomous systems.
The most promising route, Heckman believes, is self-regulation through university-wide review boards that would evaluate and approve research without arbitrarily banning entire topics, an unusual approach for heading off a very specific outcome like killer robots.
"I feel that the potential for good is too promising to ignore," Heckman wrote about autonomous technology. "But I am also concerned about the risks that new technologies pose, especially if they are exploited by malicious people. Yet with some careful organization and informed conversations today, I believe we can work toward achieving those benefits while limiting the potential for harm."
READ MORE: Robotics researchers have a duty to prevent autonomous weapons [The Conversation]
More on autonomous weapons: Experts: It'd Be "Relatively Easy" To Deploy Killer Robots by 2021