Moral Algorithms for Autonomous Vehicles

With the rising popularity of self-driving cars, researchers are now anticipating the moral and ethical issues that arise when machines take over the road. Consider an extreme and unfortunate scenario: a self-driving car is headed toward a crowd of ten people, cannot feasibly stop in time, and can avoid the crowd only by steering into a wall. A team led by Jean-Francois Bonnefon of the Toulouse School of Economics in France is studying what the car should do in such cases. “Our results provide but a first foray into the thorny issues raised by moral algorithms for autonomous vehicles,” says the team.

Robot Ethics

Bonnefon and his team are addressing the ethical dilemma by gauging public opinion, presenting scenarios like these to several hundred workers on Amazon’s Mechanical Turk. Their results suggest that people are generally comfortable with the idea of self-driving cars being programmed to minimize the loss of life, but only to an extent. The team concludes that participants “were not as confident that autonomous vehicles would be programmed that way in reality—and for a good reason: they actually wished others to cruise in utilitarian autonomous vehicles, more than they wanted to buy utilitarian autonomous vehicles themselves.” The team notes that these studies are only the first steps in examining the moral issues of automated vehicles; future work will also have to tackle the nature of uncertainty and the assignment of blame. “As we are about to endow millions of vehicles with autonomy, taking algorithmic morality seriously has never been more urgent,” says the team.
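To make the utilitarian framing concrete, here is a minimal sketch in Python of what “programmed to minimize the loss of life” could mean as a decision rule. It is purely illustrative, not the team’s actual model; the maneuver names and casualty estimates are hypothetical assumptions for this example:

```python
# Illustrative toy example of a "utilitarian" decision rule: among the
# maneuvers available, pick the one with the lowest expected loss of life.
# Not the study's algorithm; names and numbers are hypothetical.
from dataclasses import dataclass


@dataclass
class Maneuver:
    name: str
    expected_casualties: float  # estimated expected loss of life


def choose_utilitarian(maneuvers: list[Maneuver]) -> Maneuver:
    """Return the maneuver that minimizes expected casualties."""
    return min(maneuvers, key=lambda m: m.expected_casualties)


if __name__ == "__main__":
    options = [
        Maneuver("stay_course", expected_casualties=10.0),      # hit the crowd
        Maneuver("swerve_into_wall", expected_casualties=1.0),  # sacrifice the passenger
    ]
    print(choose_utilitarian(options).name)  # -> swerve_into_wall
```

The survey tension the team describes shows up directly in this toy rule: a strictly utilitarian car would choose `swerve_into_wall`, which respondents endorsed for other people’s cars more readily than for one they would buy themselves.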

