Representatives from countries around the world met on Nov. 18 to discuss weapons systems at the United Nations’ Convention on Certain Conventional Weapons (CCW). One point of particular interest at the meeting was a call by 22 nations for an outright ban on the development and use of autonomous weapons, also known as “killer robots.”
Leading up to the convention, hundreds of experts in the field of artificial intelligence (AI) and robotics joined in sending letters to world leaders, urging them to support a ban on autonomous weapons. Elon Musk, co-founder of OpenAI and CEO of Tesla and SpaceX, has also been pushing for the regulation of autonomous weapons development.
The meeting may have been less productive than these groups hoped. Attendees were mainly able to set groundwork for future talks, likely to occur sometime next year. Mary Wareham, advocacy director of the Arms Division at Human Rights Watch and global coordinator for the Campaign to Stop Killer Robots, told AFP, “Countries do not have time…to waste just talking about this subject.” She says that militaries and defense companies are already investing heavily in bringing these weapons into reality.
However, the chair of the meeting, Amandeep Gill, India’s disarmament ambassador, tried to clear away some of the hype surrounding the issue. “Ladies and gentlemen, I have news for you: the robots are not taking over the world. Humans are still in charge,” he exclaimed, according to reporting from The Guardian. “I think we have to be careful in not emotionalizing or dramatizing this issue.”
According to the Campaign to Stop Killer Robots, the meeting did yield two points of agreement along these lines: most nations agreed that a “legally binding instrument” controlling the use of these technologies is needed, and the majority of “states now accept that some form of human control must be maintained over weapons systems.” Talks moving forward will have to focus on what these points of accord will look like in practice.
Autonomous weapons will have a profound impact on the way war is waged, and the arms escalation this could drive has motivated some, especially nations with smaller military budgets, to call for regulation (at the least). Toby Walsh, an expert on AI at the University of New South Wales in Australia, did not mince words regarding his feelings on the topic.
“These will be weapons of mass destruction,” Walsh told reporters during a separate event at the UN. “I am actually quite confident that we will ban these weapons … My only concern is whether [countries] have the courage of conviction to do it now, or whether we will have to wait for people to die first.”
While international agreements on the development and use of autonomous weapons would be ideal, individual countries are also making their intentions known. In response to the letter from Musk and others, the United Kingdom has already decided to ban fully autonomous weapons; the U.K. Ministry of Defense announced the decision in September.
But Musk’s concerns about the future of AI are not limited to weapons applications; he believes that AI development in general should be closely watched and regulated. “I think anything that represents a risk to the public deserves at least insight from the government, because one of the mandates of the government is the public wellbeing,” he said on a conference call with Tesla investors.
AI casts a foreboding shadow over an uncertain future. Many experts, like Ray Kurzweil, counter arguments for slowing AI development with promises that AI will “enhance us.” Even so, any beneficial technology can have destructive applications. Ensuring that the awesome potential of these technologies is developed in a way that is genuinely good for all of humanity is, unsurprisingly, the best way forward.