A Moral Machine?

As artificial intelligence (AI) development progresses, experts have begun considering how best to give an AI system an ethical or moral backbone. A popular idea is to teach AI to behave ethically by learning from decisions made by the average person.

To test this assumption, researchers from MIT created the Moral Machine. Visitors to the website were asked to decide what an autonomous vehicle should do in a series of rather gruesome scenarios. For example, if a driverless car could not avoid hitting someone, should it run over three adults to spare two children? Should it save a pregnant woman at the expense of an elderly man?

The Moral Machine collected a huge volume of these judgments from people around the world, so Ariel Procaccia from Carnegie Mellon University's computer science department decided to put that data to work.

In a new study published online, he and Iyad Rahwan — one of the researchers behind the Moral Machine — taught an AI using the Moral Machine's dataset. Then, they asked the system to predict how humans would want a self-driving car to react in similar but previously untested scenarios.

Effectively, Procaccia wanted to demonstrate that a voting-based system could provide a solution to the ethical AI question, and he believes his algorithm can infer the collective ethical intuitions reflected in the Moral Machine's data. “We are not saying that the system is ready for deployment,” he told The Outline. “But it is a proof of concept, showing that democracy can help address the grand challenge of ethical decision making in AI.”
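The study frames this as a voting problem, but the details are beyond the scope of a news piece. The fragment below is only a rough, minimal sketch of the crowdsourcing idea, not Procaccia and Rahwan's actual algorithm: it learns a simple scoring function over hypothetical outcome features (how many children, adults, or elderly people are spared) from made-up pairwise judgments, then asks which outcome that learned "crowd" model would favor in a new dilemma. The feature names and sample data are invented for illustration.

```python
# Minimal sketch (not the researchers' actual system) of crowdsourced ethical
# preference learning: fit feature weights from pairwise crowd judgments, then
# predict the crowd's preferred outcome in an unseen dilemma.
import random

# Each outcome is described by simple, hypothetical counts of who is spared.
FEATURES = ["children_spared", "adults_spared", "elderly_spared"]

def featurize(outcome):
    """Turn an outcome dict into a feature vector (missing keys count as 0)."""
    return [outcome.get(f, 0) for f in FEATURES]

def train_scores(judgments, epochs=50, lr=0.1):
    """Perceptron-style learning of feature weights from pairwise judgments.

    Each judgment is (preferred_outcome, rejected_outcome); the learned
    weights stand in for the crowd's aggregate preferences.
    """
    judgments = list(judgments)
    weights = [0.0] * len(FEATURES)
    for _ in range(epochs):
        random.shuffle(judgments)
        for preferred, rejected in judgments:
            a, b = featurize(preferred), featurize(rejected)
            score_a = sum(w * x for w, x in zip(weights, a))
            score_b = sum(w * x for w, x in zip(weights, b))
            if score_a <= score_b:  # model disagrees with the crowd -> update
                weights = [w + lr * (xa - xb)
                           for w, xa, xb in zip(weights, a, b)]
    return weights

def crowd_choice(weights, outcomes):
    """Return the outcome the learned 'crowd' model scores highest."""
    return max(outcomes,
               key=lambda o: sum(w * x for w, x in zip(weights, featurize(o))))

# Hypothetical training data: respondents preferred sparing children over
# adults, and adults over the elderly.
judgments = [
    ({"children_spared": 2}, {"adults_spared": 3}),
    ({"children_spared": 1}, {"elderly_spared": 2}),
    ({"adults_spared": 2}, {"elderly_spared": 1}),
]

weights = train_scores(judgments)
# A new, previously unseen dilemma:
print(crowd_choice(weights, [{"children_spared": 1, "elderly_spared": 1},
                             {"adults_spared": 2}]))
```

As the article describes, the real approach treats the Moral Machine's millions of responses as votes; the toy sketch only captures the general flow of learning from crowd judgments and then generalizing to scenarios no one was explicitly asked about.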

Crowdsourced Morality

This idea of having to choose between two morally problematic outcomes isn't new. Ethicists even have a name for it: the doctrine of double effect. Applying the concept to an artificially intelligent system, however, is something humankind has never had to do before, and numerous experts have shared their opinions on how best to go about it.

OpenAI co-chairman Elon Musk believes that creating an ethical AI is a matter of coming up with clear guidelines or policies to govern development, and governments and institutions are slowly heeding Musk's call. Germany, for example, crafted the world's first ethical guidelines for self-driving cars. Meanwhile, DeepMind, the AI company owned by Google parent Alphabet, now has an ethics and society unit.

Other experts, including a team of researchers from Duke University, think that the best way to move forward is to create a "general framework" that describes how AI will make ethical decisions. These researchers believe that building this framework by aggregating the collective moral views of a crowd on various issues, as the Moral Machine does with self-driving cars, would result in a system that's better than one built by an individual.

However, this type of crowdsourced morality isn't foolproof. One sample group may have biases that wouldn't be present in another, and different algorithms can be fed the same data yet arrive at different conclusions.

For Cornell Law School professor James Grimmelmann, who studies the relationship between software, wealth, and power, the idea of crowdsourced morality is inherently flawed. “[It] doesn't make the AI ethical,” he told The Outline. “It makes the AI ethical or unethical in the same way that large numbers of people are ethical or unethical.”

Procaccia acknowledges that these limitations are valid and that the research is still only a proof of concept. However, he believes a democratic approach to building a moral AI could work. “Democracy has its flaws, but I am a big believer in it,” he said. “Even though people can make decisions we don’t agree with, overall democracy works.”

