As recently as April, the United Kingdom tried to position itself as the world leader in artificial intelligence ethics. Now it’s actively developing the ultimate ethical no-no: fully autonomous weapons systems and fighter drones.
Officially, the U.K. government says it has no interest in developing autonomous weapons — but it refuses to join most other U.N. members in banning the technology outright.
Maybe that’s because, according to a new story in the Guardian, the U.K. government is funding dozens of research programs working to bring together the underlying technology of autonomous drones, decision-making AI, and strategic weapons systems into military killbots.
The Ministry of Defence has suggested that AI-powered autonomous weapons may be feasible to build and effective in combat by 2030. And if the report The Guardian was covering, titled “Off the Leash: How the UK is developing the technology to build armed autonomous drones,” is to be believed, we’d all be better off if they stopped.
Twelve years is a very short time to put human lives in the hands of an algorithm — especially one built specifically to end human lives. Facial recognition software used by police is notorious for false positives and can be easily fooled. And algorithms reflect the biases and prejudices of the people who train them — even the most objective-seeming AI systems are subject to whatever axes their programmers have to grind.
Consider those problems when it comes to an algorithm built by the military specifically to find and kill enemy combatants and other targets. Any misstep could be horrifying — and given how frequently algorithms game their own rules, there will almost certainly be catastrophic errors if these machines are ever deployed.
READ MORE: Britain funds research into drones that decide who they kill, says report [The Guardian]
More on unethical artificial intelligence: Five Experts Share What Scares Them the Most About AI