Killer Robots

Killer robots are coming for us all.

But rather than a T-1000 spewing one-liners, they're more likely to take the form of autonomous weapon systems like drones, which could eventually replace human soldiers on the battlefield entirely.

Is it ethical, though, to have an artificial intelligence call the shots and decide to take a human life?

Autonomous Weapons

Experts convening at the American Association for the Advancement of Science meeting in Washington DC this week had a clear answer: a resounding no. The group, which includes ethics professors and human rights advocates, is calling for a ban on the development of AI-controlled weapons, the BBC reports.

"We are not talking about walking, talking terminator robots that are about to take over the world; what we are concerned about is much more imminent: conventional weapons systems with autonomy," Human Right's Watch advocacy director Mary Wareham told the BBC.

Watching Human Rights

Another big question arises: who is responsible when a machine does decide to take a human life? Is it the person who made the machine?

"The delegation of authority to kill to a machine is not justified and a violation of human rights because machines are not moral agents and so cannot be responsible for making decisions of life and death," associate professor from the New School in New York Peter Asaro told the BBC.

But not everybody is on board with fully denouncing AI-controlled weapon systems. The U.S. and Russia were among several countries that opposed an outright ban on autonomous weapons following a week of talks in Geneva in September.

READ MORE: Call to ban killer robots in wars [BBC]

More on killer robots: The UK Is Developing Autonomous Killer Robots

