AI and Ethics

Are we right to fear the AI of Westworld? Credit: HBO

Films and shows like Terminator and Westworld contribute to an attitude that is, more or less, unfriendly toward the development of artificial intelligence (AI). News about automation upending jobs in the near future also paints a not-so-favorable image of autonomous systems (AS).

Do these fears have legitimate basis? Perhaps, but whether or not they do, the Institute of Electrical and Electronics Engineers (IEEE) is convinced that AI/AS development should be "aligned to humans in terms of our moral values and ethical principles."

They present their ideas in a 136-page framework document called Ethically Aligned Design, which the institute hopes will guide the AI/AS industry to build benevolent and beneficial AI and AS. The framework is based on the input of more than 100 thought leaders in the fields of AI, law and ethics, philosophy, and policy. These experts have backgrounds in academia and science as well as the government and corporate sectors.

"By providing technologists with peer-driven, practical recommendations for creating ethically aligned autonomous and intelligent products, services, and systems, we can move beyond the fears associated with these technologies and bring valued benefits to humanity today and for the future," according to a statement by Konstantinos Karachalios, IEEE Standards Association managing director.


Sound Policies, Good Research

IEEE opened the discussion to suggestions from people in the industry, engineers and developers alike. A feedback mechanism is also in place, making the framework a dialogue-based policy-making platform.

The proposals made in the framework include methodologies to guide ethical research and design, as well as warnings that black-box services and components should be implemented with the utmost caution and ethical care.

Of course, what the IEEE wishes to achieve raises several questions, most notably one that we've covered previously: can morality be programmed? Is it actually possible for AI/AS technologists to align their creations "with the values of its users and society," as the IEEE document prescribes?

To this end, the IEEE says standards must be set up to make sure users are not harmed by autonomous outcomes, by providing "oversight of the manufacturing process of intelligent and autonomous technologies." It warns that, as AI systems become more sophisticated, "unanticipated or unintended behavior becomes increasingly dangerous."

As such, "[r]esearchers and developers will confront a progressively more complex set of ethical and technical safety issues in the development and deployment of increasingly autonomous and capable AI systems."

Overall, the IEEE has taken a much-needed first step in humanizing AI/AS systems.
