In Brief
The UK government and its armed forces are making it a policy not to develop or use fully autonomous weapons, meaning weapons that can make decisions independent of human oversight. The new doctrine was announced weeks after experts warned the UN about such weapons.
Heeding the Warning
It looks like warnings about the dangers of applying artificial intelligence (AI) to weapons development have not fallen on deaf ears. In response to the open letter to the United Nations, signed by 116 experts and led by serial entrepreneur Elon Musk, the government of Great Britain has decided to ban fully autonomous weapons and weapons systems. The news comes from an announcement made by the U.K. Ministry of Defence earlier this week.
Specifically, the British government’s ban extends to the development of fully autonomous weapons, meaning weapons that can “think” for themselves and decide their own targets — yes, like those AI-powered missiles Russia is supposedly working on and those fully autonomous drones Russian arms developer Kalashnikov is building. The ban does not cover remotely operated drones or semi-autonomous defense systems, which armed forces from nations like the United States, South Korea, and even the U.K. currently employ.
For U.K. armed forces minister Mark Lancaster, deciding what to target is a responsibility suitable only for human soldiers. “It’s absolutely right that our weapons are operated by real people capable of making incredibly important decisions, and we are guaranteeing that vital oversight,” Lancaster said regarding the new doctrine.
The announcement coincided with the Defence and Security Equipment International show — one of the biggest weapons exhibitions in the world.
The new doctrine published by the Ministry of Defence affirms that “human control of cutting-edge weaponry” is safer for both civilians and military personnel. Such weapons, the announcement said, “will always be under control as an absolute guarantee of human oversight and authority and accountability.” The doctrine also noted that the U.K. “does not possess fully autonomous weapon systems and has no intention of developing them.”
Developing fully autonomous weaponry is undeniably appealing to military powers, and Russian President Vladimir Putin has even said that he sees future wars being fought with such weapons. That’s about as close as we have come to an open declaration of an AI arms race, as Putin himself said that whichever country leads in AI development — weapons included — “will be the ruler of the world.”
Even before Musk’s letter, AI experts had warned about this possibility: back in 2015, an open letter released at the International Joint Conference on Artificial Intelligence (IJCAI) stated, “Starting a military AI arms race is a bad idea, and should be prevented by a ban on offensive autonomous weapons beyond meaningful human control.”
The open letter sent to the U.N. echoed the same warning, stating that such weapons “can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways.”
Hopefully, the issue of weapons doesn’t overshadow all the positive effects AI can bring. Human control and oversight are key, and as U.K. robotics expert Noel Sharkey told The Verge, he hopes the new doctrine will translate to “human control of weapons in a meaningful and deliberative way.”