Terminating Terminators
In two letters addressed to the leaders of Australia and Canada, hundreds of experts in the field of artificial intelligence (AI) have urged a ban on "killer robots," artificially intelligent weapons with the ability to decide whether a person lives or dies. They join a growing crowd of scientists who have stressed the need for an autonomous weapons ban.
The Australian open letter, addressed to Prime Minister Malcolm Turnbull, carried 122 researcher signatures, while the Canadian letter, addressed to Prime Minister Justin Trudeau, was signed by 216.
“Delegating life-or-death decisions to machines crosses a fundamental moral line – no matter which side builds or uses them,” said Toby Walsh, Scientia Professor of AI at the University of New South Wales (UNSW) Sydney, to The Independent. “Playing Russian roulette with the lives of others can never be justified merely on the basis of efficacy. This is not only a fundamental issue of human rights. The decision whether to ban or engage autonomous weapons goes to the core of our humanity.”
The letters call on the two governments to support an international ban on the development and deployment of such weaponry at the upcoming United Nations Conference on the Convention on Certain Conventional Weapons (CCW).
Walsh explained in a press release, “These will be weapons of mass destruction. One programmer will be able to control a whole army. Every other weapon of mass destruction has been banned: chemical weapons, biological weapons, even nuclear weapons. We must add autonomous weapons to the list of weapons that are morally unacceptable to use.”
Chorus of Experts
Many experts agree on the need to ban these weapons from entering the sphere of war. In August, 116 experts, including SpaceX and Tesla founder Elon Musk, sent an open letter to the United Nations that called for strong AI regulation, especially in the area of AI weaponry.
The letter stated: “Once developed, they will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend. These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways.”
Still, not everyone is so sure that an autonomous weapons ban is practical or even possible. Greg Allen, coauthor of a report commissioned by the Office of the Director of National Intelligence to explore the implications of AI for warfare, told Wired, "You are unlikely to achieve a full ban of autonomous weapons. The temptation for using them is going to be very intense."
Others don't believe such a ban would even be effective. A study from SUNY Buffalo concluded that killer robots are not inherently the problem; rather, the problem lies in the way society enables and researches them: "...instead of demonizing Killer Robots as such, we need to understand the tools, processes and operating procedures that create, support and validate these objects."
Both sides seem to agree that, at the very least, we must be watchful during this early stage of development. While these robots have the potential to preserve the lives of any military sophisticated enough to deploy them, they also invite a new level of weapons-based brinkmanship that could further agitate an already volatile state of global conflict.