In Brief
DeepMind has announced the creation of a new group focused on the moral and ethical implications of artificial intelligence. The goal is to answer important questions about the effect the technology might have on the way we live.

Ethics and Society

Artificial intelligence (AI) is expected to have a monumental impact on society. As such, DeepMind, an AI research company now housed under Google parent company Alphabet, has established a new unit dedicated to answering questions about the effect the technology might have on the way we live.

DeepMind Ethics and Society will bring together employees from the company and outsiders who are uniquely equipped to offer useful perspectives. Economist and former UN advisor Jeffrey Sachs, University of Oxford AI professor Nick Bostrom, and climate change campaigner Christiana Figueres are among the advisers selected for the group.

At present, the unit comprises around eight DeepMind employees and six unpaid fellows from outside the company. The number of internal staff on the committee will grow to 25 over the next year.

The leaders of the group, Verity Harding and Sean Legassick, described the external contributors as “a respected group of independent thinkers” in a blog post announcing the initiative. “These Fellows are important not only for the expertise that they bring but for the diversity of thought they represent,” read the statement.

Let’s Be Careful

DeepMind has made no secret of its ambition to integrate AI into all aspects of life. This potential pervasiveness is one reason the moral and ethical considerations of the technology must be taken very seriously.


The company has already demonstrated that AIs can display behavior that might be described as a “killer instinct,” and some are already looking at ways to weaponize the technology, so how we choose to regulate AI could literally be a matter of life or death.

Still, the weaponization of AI is just one of the many ethical issues being raised, and in the past, DeepMind has been criticized for falling short of what many would consider to be proper standards. In May 2016, the company came under fire after it was given access to confidential health data on 1.6 million people during the development of an app called Streams.

How we answer questions about the responsibility engineers bear for how their work might be used will have far-reaching implications. That is why bodies like DeepMind Ethics and Society are so important. Without oversight, technologists might focus on what’s possible rather than what’s morally acceptable, and that line of thinking can cause massive problems if left unchecked.