Killer AI systems could do "calamitous things that they were not originally programmed for."

Warning Signs

If governments turn control of their weapons systems over to fully autonomous machines, we may face devastating, unintended calamities or accidental acts of war.

So warns Laura Nolan, a former Google software engineer who left the company in protest over Project Maven, a Pentagon program to apply AI to military drone footage that Google has since walked away from. She told The Guardian this week that there should always be a human finger on the trigger, lest the weapons do "calamitous things that they were not originally programmed for."

Collateral Damage

Major military powers including Russia, the U.K., and the U.S. have invested heavily in autonomous weapons, military drones, and battlefield robots. But the technology is far less popular outside military circles, where a growing number of scientists are calling on governments to ban fully autonomous weaponry.

"You could have a scenario where autonomous weapons that have been sent out to do a job confront unexpected radar signals in an area they are searching," Nolan told The Guardian, illustrating a hypothetical problem area, suggesting that a machine might mistake hunters for enemy combatants and open fire.

"Very few people are talking about this but if we are not careful one or more of these weapons, these killer robots, could accidentally start a flash war, destroy a nuclear power station and cause mass atrocities," Nolan added.

READ MORE: Ex-Google worker fears 'killer robots' could cause mass atrocities [The Guardian]

More on autonomous weapons: Experts: It'd Be "Relatively Easy" to Deploy Killer Robots by 2021

