Last week, many of the major players in the artificial intelligence world signed a pledge never to build or endorse artificial intelligence systems that could run an autonomous weapon. The signatories included Google DeepMind’s cofounders, OpenAI co-founder Elon Musk, and a whole slew of prominent artificial intelligence researchers and industry leaders.

The pledge, put forth by AI researcher Max Tegmark’s Future of Life Institute, argues that any system that can target and kill people without human oversight is inherently immoral, and it condemns any future AI arms race. By signing the pledge, these AI bigwigs join the governments of 26 nations, including China, Pakistan, and the State of Palestine, all of which have also condemned and called for bans on lethal autonomous weapons.

So if you want to build a fighter drone that doesn’t need any human oversight before killing, you’ll have to do it somewhere other than these nations, and with partners other than those who signed the agreement.

Yes, banning killer robots is likely a good move for our collective future — children in nations ravaged by drone warfare have already started to fear the sky — but there’s a pretty glaring hole in what this pledge actually does.

Namely: there are more subtle and insidious ways to leverage AI against a nation’s enemies than strapping a machine gun to a robot’s arm, Terminator-style.

The pledge totally ignores the fact that defending a nation against AI means more than protecting it from an army of killer robots. As Mariarosaria Taddeo of the Oxford Internet Institute told Business Insider, AI could be used in international conflicts in subtler but no less impactful ways. Artificial intelligence algorithms could prove effective at hacking or hijacking networks that are crucial for national security.

Already, as Taddeo mentioned, the UK National Health Service was held hostage by the North Korea-linked WannaCry ransomware, and a Russian cyberattack took control of European and North American power grids. With sophisticated, autonomous algorithms at the helm, these cyberattacks could become more frequent and more devastating. And yet, because these autonomous weapons don’t go “pew pew pew,” the recent AI pledge doesn’t mention (or pertain to) them at all.

Of course, that doesn’t make the pledge meaningless. Not by a long shot. But just as important as the high-profile people and companies that agreed not to make autonomous killing machines are the names missing from the agreement. Perhaps the most notable is the U.S. Department of Defense, which recently established its Joint Artificial Intelligence Center (JAIC) for the express purpose of getting ahead in any forthcoming AI arms race.

"Deputy Secretary of Defense Patrick M. Shanahan directed the DOD Chief Information Officer to standup the Joint Artificial Intelligence Center (JAIC) in order to enable teams across DOD to swiftly deliver new AI-enabled capabilities and effectively experiment with new operating concepts in support of DOD's military missions and business functions," Heather Babb, Department of Defense spokesperson, told Futurism.

“Plenty of people talk about the threat from AI; we want to be the threat,” Deputy Defense Secretary Patrick Shanahan wrote in a recent email to DoD employees, a DoD spokesperson confirmed to Futurism.

The JAIC sees artificial intelligence as a crucial tool for the future of warfare. Given the U.S.’s hawkish stance on algorithmic warfare, it’s unclear whether a well-intentioned but incomplete pledge can hold up.

More on pledges against militarized AI: Google: JK, We’re Going To Keep Working With The Military After All

