The Trolley Problem

MIT associate professor Iyad Rahwan is educating the public about the ethical issues facing self-driving cars with “MIT's Moral Machine,” a website that crowdsources responses to ethical conundrums. With this tool, Rahwan has asked more than 3 million people to consider “The Trolley Problem,” one of several sticky moral dilemmas facing the creators of self-driving cars and policymakers alike.

The Trolley Problem is this: five people are trapped on a track and cannot move. A runaway trolley is barreling toward them. You can pull a lever and send the trolley to a side track, where only one person will be killed. What should you do?

For Rahwan, an expert in the intersection of computer science and the social sciences, the social aspects of artificial intelligence (AI) are the perfect place to focus collective intelligence. He believes the Trolley Problem becomes more complicated in the context of self-driving cars because the ethical burden of this lose-lose, life-and-death situation is no longer placed on a person. We are taking ethics to a new level by giving a robot permission to make that choice.

“The idea of a robot having an algorithm programmed by some faceless human in a manufacturing plant somewhere making decisions that have life-and-death consequences is very new to us as humans,” Rahwan told Business Insider.

Crowdsourcing Ethical Choices

While it is true that this is a new wrinkle to consider, any self-driving car making such a choice would be responding to its programming, and so would still be carrying out instructions provided by humans. In other words, it would be executing a human's decision about what to do in that situation.

And while different people might make different choices depending on the finer details of each situation, parsing out these scenarios can at least lend something both new and positive to the debate: predictability. As humans debate these ethical questions and choose outcomes, they will program the favored responses into robots, and we will know what those responses are and what to expect on the road.
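To make that predictability concrete, here is a minimal Python sketch (purely hypothetical, not any manufacturer's actual code) of a favored response written down as an explicit rule. The scenario fields and the harm-minimizing rule are assumptions for illustration.

# A hypothetical sketch of a favored response encoded as an explicit rule.
from dataclasses import dataclass

@dataclass(frozen=True)
class Dilemma:
    harm_if_stay: int    # people harmed if the car holds its course
    harm_if_swerve: int  # people harmed if the car swerves

def choose_action(d: Dilemma) -> str:
    # A simple harm-minimizing rule: because it is written down, its output
    # for any scenario is predictable before the car ever leaves the lot.
    return "swerve" if d.harm_if_swerve < d.harm_if_stay else "stay"

# The classic trolley setup: five people ahead, one on the side track.
print(choose_action(Dilemma(harm_if_stay=5, harm_if_swerve=1)))  # swerve

Whether a harm-minimizing rule is the right rule is exactly what the debate is about; the point is only that whichever rule wins the debate becomes explicit and inspectable.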

Rahwan has worked to highlight the challenges of determining what should happen when self-driving cars get into accidents. Since the Moral Machine site launched in August 2016, more than 3 million people around the world have contributed 26 million decisions. Rahwan doesn't argue in favor of programming for specific outcomes, but does believe that more specific ethical guidelines must be developed to maintain public trust.
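As a rough illustration of what crowdsourcing ethics means computationally, the sketch below tallies a handful of invented responses to a single dilemma into a preference estimate. The sample data and the simple vote count are assumptions for illustration, not the Moral Machine's actual methodology.

# A hypothetical aggregation of crowdsourced judgments on one dilemma.
from collections import Counter

# Invented sample data, not real Moral Machine responses.
responses = ["swerve", "stay", "swerve", "swerve", "stay", "swerve"]

tally = Counter(responses)
total = sum(tally.values())
for choice, votes in tally.most_common():
    print(f"{choice}: {votes / total:.0%} of {total} responses")
# swerve: 67% of 6 responses
# stay: 33% of 6 responses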

The ethics of self-driving cars will eventually have to be settled, because the technology is coming, and for good reason. Human error causes 95 percent of all traffic fatalities, and “recognition errors” cause 41 percent of those human-error fatalities, meaning recognition errors alone are implicated in roughly 39 percent of all traffic deaths (0.95 × 0.41 ≈ 0.39). Recognition errors are what the DOT calls “driver’s inattention, internal and external distractions, and inadequate surveillance.”

While self-driving cars are forcing us to ask ourselves uncomfortable moral questions, they are poised to prevent the vast majority of traffic accidents. At least one expert thinks they will also eliminate traffic jams by 2030. For many, these benefits outweigh the cost of breaking the taboo against committing to concrete answers to controversial ethical conundrums.

