Ethical Black Box

Alan Winfield, professor of robot ethics at the University of the West of England in Bristol, and Marina Jirotka, professor of human-centered computing at Oxford University, believe robots should be fitted with an “ethical black box.” This would be the ethics equivalent of the aviation safety device of the same name, designed to record a pilot's actions and allow investigators to trace them in the event of an accident. As robots leave the controlled settings of factories and laboratories and interact more with humans, safety measures of this kind will become increasingly important.

Winfield and Jirotka argue that robotics firms should follow the example of the aviation industry, which owes its safety record not just to technology and design, but also to stringent safety protocols and accident investigation. That industry introduced black boxes and cockpit voice recorders so that accident investigators could determine the causes of crashes and draw critical lessons for prevention and safety.

“Serious accidents will need investigating, but what do you do if an accident investigator turns up and discovers there is no internal datalog, no record of what the robot was doing at the time of the accident? It’ll be more or less impossible to tell what happened,” Winfield told The Guardian. Applied to robotics, an ethical black box would record a robot's decisions, the bases for those decisions, its movements, and its sensory data. That data could also help a robot explain its actions in language human users can understand, fostering better relationships and improving the user experience.
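To make the idea concrete, here is a minimal sketch in Python of what such an internal datalog might look like. Winfield and Jirotka describe what should be recorded, not how; the class and field names below (EthicalBlackBox, BlackBoxRecord, and the example sensor readings) are hypothetical illustrations, not a published specification.

# Hypothetical sketch of an ethical black box datalog.
# Assumes the robot logs each decision cycle as an append-only JSON line.

import json
import time
from dataclasses import dataclass, asdict
from typing import Any

@dataclass
class BlackBoxRecord:
    """One timestamped entry in the robot's internal datalog."""
    timestamp: float                     # seconds since the epoch
    sensor_data: dict[str, Any]          # e.g. range readings, detections
    decision: str                        # action the robot chose
    rationale: str                       # why it chose that action
    actuator_commands: dict[str, float]  # resulting movement commands

class EthicalBlackBox:
    """Append-only log an accident investigator could replay later."""

    def __init__(self, path: str = "robot_datalog.jsonl"):
        self.path = path

    def record(self, entry: BlackBoxRecord) -> None:
        # One JSON object per line, flushed on every write, so the log
        # is readable up to the last completed decision cycle.
        with open(self.path, "a", encoding="utf-8") as log:
            log.write(json.dumps(asdict(entry)) + "\n")

# Example: log a single decision cycle.
box = EthicalBlackBox()
box.record(BlackBoxRecord(
    timestamp=time.time(),
    sensor_data={"front_range_m": 0.4, "person_detected": True},
    decision="stop",
    rationale="obstacle within 0.5 m safety envelope",
    actuator_commands={"left_wheel_velocity": 0.0, "right_wheel_velocity": 0.0},
))

Because each record pairs the raw sensor data with the decision and its stated rationale, the same log could also feed the human-readable explanations mentioned above.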


Managing the Ethics of AI

Winfield and Jirotka are not the only experts concerned about managing the ethics of artificial intelligence (AI). Missy Cummings, a drone specialist and director of the Humans and Autonomy Lab at Duke University in North Carolina, told the BBC in March that oversight of AI is a major problem for which there is not yet a solution. “Presently, we have no commonly accepted approaches,” she said. “And without an industry standard for testing such systems, it is difficult for these technologies to be widely implemented.”

In September 2016, Amazon, Facebook, Google, IBM, and Microsoft formed the Partnership on Artificial Intelligence to Benefit People and Society. The coalition is focused on ensuring that AI is deployed in ways that are ethical, fair, and inclusive. Apple joined in January, and many other tech companies have joined the partnership since.

Meanwhile, the Future of Life Institute (FLI), an outreach and charity organization, has created the Asilomar AI Principles, a basic set of laws and ethics for robotics designed to ensure that AI remains beneficial to humankind. FLI was founded by experts from DeepMind and MIT, and its Scientific Advisory Board includes Stephen Hawking, Frank Wilczek, Elon Musk, Nick Bostrom, and even Morgan Freeman.

That being said, if proactive thought and hard work from the sharpest minds in the industry are the best defense against future problems with AI, we're already in good shape.
