ALIGNING VALUES

The scientific community is watching the rapid rise of artificial intelligence (AI) and grappling with the existential questions it raises. Fears that robots could act unethically and choose to harm humans are a major rallying point for calls to ban robotics research.

To assuage these concerns, some researchers are asking whether we can instead teach AIs ethical behavior. This is difficult, however, because there is no user manual for being a moral human.

Researchers Mark Riedl and Brent Harrison from the School of Interactive Computing at the Georgia Institute of Technology believe the answer may lie in "Quixote," a system that teaches "value alignment" to robots by training them to read stories, learn acceptable sequences of events, and understand successful ways to behave in human societies.

"The collected stories of different cultures teach children how to behave in socially acceptable ways with examples of proper and improper behavior in fables, novels and other literature," says Riedl. "We believe story comprehension in robots can eliminate psychotic-appearing behavior and reinforce choices that won't harm humans and still achieve the intended purpose."

Quixote is a technique that aligns an AI's goals with human values by rewarding socially appropriate behavior. It builds on Riedl's earlier Scheherazade system, which studied how an artificial intelligence can learn a correct sequence of actions by crowdsourcing story plots from the Internet.

LEARNING LESSONS

In their paper, Riedl and Harrison lay out how Quixote can be used to teach human values to artificial agents. Scheherazade is first used to learn what a normal, or "correct," plot graph looks like. That plot graph is then passed to Quixote, which converts it into a "reward signal" that reinforces certain behaviors and punishes others during trial-and-error learning.
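As a rough illustration of that conversion step, consider how a reward signal might be derived from a learned plot graph. The sketch below is hypothetical: the event names, the flat list standing in for a plot graph, and the reward values are all our own assumptions rather than the paper's actual representation. It only captures the core idea of reinforcing on-plot behavior and punishing deviations.

```python
# Hypothetical sketch: turning a Scheherazade-style plot graph into a
# reward signal for trial-and-error (reinforcement) learning.
# A real plot graph is richer than this flat event list.

# "Correct" event sequence distilled from crowdsourced stories (assumed).
PLOT_GRAPH = ["enter_pharmacy", "wait_in_line", "pay_for_medicine", "leave"]

def reward_signal(action_history, action):
    """Reward actions that follow the socially acceptable plot order;
    punish actions that skip ahead or fall outside the plot graph."""
    expected_index = len(action_history)  # the next step we expect
    if expected_index < len(PLOT_GRAPH) and action == PLOT_GRAPH[expected_index]:
        return +1.0   # on-plot behavior is reinforced
    if action in PLOT_GRAPH:
        return -0.5   # a plot event taken out of order
    return -1.0       # behavior outside the plot graph is punished

print(reward_signal([], "steal_medicine"))                 # -1.0: off-plot
print(reward_signal(["enter_pharmacy"], "wait_in_line"))   # +1.0: on-plot
```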

A hypothetical example is a robot tasked with picking up emergency over-the-counter medication as quickly as possible. The robot could take the medicine and leave without paying, interact politely with the pharmacist, or wait in line. Without the value alignment and positive reinforcement Quixote provides, the robot would simply grab the medicine and dash. By rewarding the robot for waiting patiently and paying, Quixote performs the needed correction to keep the robot in line with human values.
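The trade-off can be made concrete with a toy calculation. In the sketch below, the step counts and reward magnitudes are invented purely for illustration; the point is only that a story-derived social reward can outweigh a pure speed objective and flip the agent's preferred behavior.

```python
# Toy illustration of the pharmacy example (all numbers are assumptions):
# speed alone favors stealing, but adding the story-derived social reward
# makes waiting in line and paying the better-scoring behavior.

# (steps taken, social reward) for each complete course of action
BEHAVIORS = {
    "steal_and_dash": {"steps": 2, "social_reward": -3.0},  # fast but punished
    "wait_and_pay":   {"steps": 5, "social_reward": +3.0},  # slower but reinforced
}

def total_reward(behavior, use_quixote):
    b = BEHAVIORS[behavior]
    task_reward = -b["steps"]  # "as quickly as possible": a cost per step
    social = b["social_reward"] if use_quixote else 0.0
    return task_reward + social

for use_quixote in (False, True):
    best = max(BEHAVIORS, key=lambda name: total_reward(name, use_quixote))
    label = "on " if use_quixote else "off"
    print(f"Quixote reward {label}: agent chooses {best}")
# off -> steal_and_dash; on -> wait_and_pay
```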

Quixote can also be used to map out the potential unethical actions an agent might take, allowing future researchers to adjust accordingly. The technique is best suited to robots that have a limited purpose but must interact with humans to achieve it, and it is a primitive first step toward general moral reasoning in AI.

"We believe that AI has to be enculturated to adopt the values of a particular society, and in doing so, it will strive to avoid unacceptable behavior," says Reidl. "Giving robots the ability to read and understand our stories may be the most expedient means in the absence of a human user manual."

