Trust Fall Time

Scientists Want to Teach Robots to Know When to Trust Humans

Game theory research attempts to build a framework for human-robot trust.

Dan Robitzski / December 7th, 2018

Robot Overlords

If advanced robots become ubiquitous in society, we need to know that we can trust them.

At the same time, we need to make sure robots trust us mere humans in matters they’re not equipped to handle, researchers argue in a paper published last month in the academic journal ACM Transactions on Interactive Intelligent Systems. The work — a collaboration of Penn State, MIT, and Georgia Institute of Technology scientists — is an attempt to develop a definition and model of trust that could easily translate into software code. After all, robots can’t get a “gut feeling” to trust someone the way humans do.

Here’s the somewhat self-referential definition on which they settled: “a belief, held by the trustor, that the trustee will act in a manner that mitigates the trustor’s risk in a situation in which the trustor has put its outcomes at risk.”

Follow the Leader

To reach that definition, the scientists needed to find out the extent to which humans are already willing to trust robots — and vice versa. Over the course of four years, they watched as more than 2,000 human participants played out scenarios that challenged their view of robotics.

In one scenario, participants had to choose whether or not to follow a robot out of a burning building; in some cases, they had seen the robot’s navigation glitch just moments prior. In another, participants had to decide whether to help a robot that asked for assistance entering a room under lockdown.

Conversely, some experiments tested how much robots were willing to trust humanity. In a high-tech twist on a classic game theory experiment, humanoid NAO robots acted as lenders and chose how much money to lend to a human based on how much money the human paid back each round. Over time, the robots picked up on each human’s pattern. For instance, a robot interpreted a human returning more money as a sign that the human could be trusted.
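The dynamic described above resembles the classic repeated "investment game" from behavioral economics. Here is a minimal sketch of how a robot lender might update its trust in a human borrower round by round. To be clear, this is not the researchers' actual model: the class names, the trust-update rule, the multiplier, and all parameter values are illustrative assumptions.

```python
class RobotLender:
    """Toy robot lender that adjusts its trust based on repayments.

    Hypothetical sketch -- not the model from the ACM TiiS paper.
    """

    def __init__(self, funds=100.0, trust=0.5, learning_rate=0.3):
        self.funds = funds
        self.trust = trust              # belief the human will repay, in [0, 1]
        self.learning_rate = learning_rate

    def choose_loan(self):
        # Lend a fraction of available funds proportional to current trust.
        return self.funds * self.trust

    def observe_repayment(self, loan, repaid):
        # Interpret a larger repayment as a signal of trustworthiness:
        # nudge trust toward the observed repayment ratio.
        if loan > 0:
            ratio = min(repaid / loan, 1.0)
            self.trust += self.learning_rate * (ratio - self.trust)
        self.funds += repaid - loan


def play_rounds(lender, repay_fraction, multiplier=3.0, rounds=5):
    """The human receives loan * multiplier and repays a fixed fraction of it."""
    trust_history = []
    for _ in range(rounds):
        loan = lender.choose_loan()
        repaid = loan * multiplier * repay_fraction
        lender.observe_repayment(loan, repaid)
        trust_history.append(round(lender.trust, 3))
    return trust_history
```

Under these assumptions, a generous human (one who repays more than the original loan) steadily earns the robot's trust and receives larger loans, while a stingy one watches the robot's trust, and its loans, shrink over time.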

Human-Cyborg Relations

Unfortunately, we currently find ourselves in a one-sided relationship with robots when it comes to trusting each other.

But there are good signs for the future. Many participants helped a robot — if it asked nicely — even in the face of personal risk. And some followed robots along a dangerously meandering route through a burning building even after seeing that same robot get lost just minutes prior.

Meanwhile, humans had to actively earn the trust of their more objective, soulless robotic moneylenders — suggesting that we may have to work hard to earn our place among our future robotic overlords.

READ MORE: A conceptual framework for modeling human-robot trust [Tech Xplore]

More on robotic danger: An Amazon Warehouse Robot Sprayed 24 Workers With Bear Repellent
