Programming Morality

A new study from the Institute of Cognitive Science at the University of Osnabrück has found that the moral decisions humans make while driving are not as complex or context-dependent as previously thought. According to the research, published in Frontiers in Behavioral Neuroscience, these decisions follow a fairly simple value-of-life-based model, which suggests that programming autonomous vehicles to make ethical decisions should be relatively easy.


For the study, 105 participants were put in a virtual reality (VR) scenario during which they drove around suburbia on a foggy day. They then encountered unavoidable dilemmas that forced them to choose between hitting people, animals, and inanimate objects with their virtual car.

The previous assumption was that these types of moral decisions were highly contextual and therefore beyond computational modeling. "But we found quite the opposite," Leon Sütfeld, first author of the study, told Science Daily. "Human behavior in dilemma situations can be modeled by a rather simple value-of-life-based model that is attributed by the participant to every human, animal, or inanimate object."
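To make the idea concrete, here is a minimal sketch of what a value-of-life-based decision rule could look like in code. It assumes the model works by assigning each potential obstacle a value and choosing the option that minimizes total value lost; the category names and numbers are hypothetical illustrations, not parameters taken from the study.

```python
# Minimal sketch of a value-of-life-based decision rule, assuming the model
# assigns each potential obstacle a value and minimizes the total value lost.
# All category names and numbers below are hypothetical, not from the study.

from typing import Dict, List

# Hypothetical value-of-life scores a participant might implicitly assign.
VALUE_OF_LIFE: Dict[str, float] = {
    "adult": 1.0,
    "child": 1.2,
    "dog": 0.4,
    "deer": 0.3,
    "trash_can": 0.01,
}


def choose_path(options: List[List[str]]) -> int:
    """Return the index of the path whose total value-of-life cost is lowest."""
    costs = [
        sum(VALUE_OF_LIFE.get(obstacle, 0.0) for obstacle in path)
        for path in options
    ]
    return costs.index(min(costs))


if __name__ == "__main__":
    # Unavoidable dilemma: swerve into a dog, or stay on course toward a trash can.
    paths = [["dog"], ["trash_can"]]
    print(choose_path(paths))  # prints 1: hit the trash can, sparing the dog
```

The point of such a model is its simplicity: once values are assigned, every dilemma reduces to a comparison of totals, which is exactly the kind of rule a machine can apply consistently.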

Better Than Human

A lot of virtual ink has been spilt online concerning the benefits of driverless cars. Elon Musk is in the vanguard, stating emphatically that those who do not support the technology are “killing people.” His view is that the technology can be smarter, more impartial, and better at driving than humans, and thus able to save lives.

Currently, however, driverless cars are large pieces of hardware guided by still-rudimentary software. How many lives they could save depends on how we choose to program them, and that's where the results of this study come into play. If we expect driverless cars to be better than humans, why would we program them to behave like human drivers?

As Professor Gordon Pipa, a senior author on the study, explained, "We need to ask whether autonomous systems should adopt moral judgements. If yes, should they imitate moral behavior by imitating human decisions? Should they behave along ethical theories, and if so, which ones? And critically, if things go wrong, who or what is at fault?"

The ethics of artificial intelligence (AI) remains swampy moral territory in general, and numerous guidelines and initiatives are being formed in an attempt to codify a set of responsible laws for AI. The Partnership on AI to Benefit People and Society is composed of tech giants, including Apple, Google, and Microsoft, while the German Federal Ministry of Transport and Digital Infrastructure has developed a set of 20 principles that AI-powered cars should follow.

Just how safe driverless vehicles will be in the future is dependent on how we choose to program them, and while that task won't be easy, knowing how we would react in various situations should help us along the way.
