Gearing Up for Skynet?

Artificial intelligence (AI) is at the forefront of cutting-edge science and technology. Advances in AI, including techniques such as deep learning and artificial neural networks, underpin a huge share of modern technological development. Yet for all its positive potential, many people are afraid of what AI could do, and not without reason. There remains the fear of a technological singularity, a scenario in which AI machines surpass human intelligence and take over the world.

The Future of Life Institute (FLI), a charity and outreach organization, recently hosted its second Beneficial AI Conference (BAI 2017). Over the course of the week-long conference, AI experts developed what they call the Asilomar AI Principles, a set of guidelines meant to keep AI beneficial rather than harmful to the future of humankind.

The FLI, founded by experts from MIT and DeepMind, works with a Scientific Advisory Board that includes theoretical physicist Stephen Hawking, Nobel laureate physicist Frank Wilczek (the man behind time crystals), Tesla and SpaceX CEO Elon Musk, AI ethics expert Nick Bostrom, and even Morgan Freeman. Beyond its work to keep AI beneficial and ethical, the FLI is also exploring ways to reduce risks from nuclear weapons and biotechnology.

The FLI isn't the only group working on ethical guidelines for AI. There is also the Partnership on AI to Benefit People and Society, which Apple recently joined, and the Artificial Intelligence Fund, a partnership that takes an interdisciplinary approach to AI.

FLI conference speakers. Photo Credit: FLI

AI for the Common Good

The Asilomar AI Principles are similar to the IEEE's AI ethics framework, Ethically Aligned Design. Both lay out guidelines intended to keep AI development conscientious. So, how do the Asilomar Principles propose to keep a Skynet-style science-fiction nightmare at bay? They set out 23 principles grouped into three categories: research, ethics and values, and long-term issues.

The principles don't dilute the concrete realities of AI research. On the contrary, they aim to keep it rigorous and well-funded, raising key questions such as:

  • How can we make future AI systems highly robust, so that they do what we want without malfunctioning or getting hacked?
  • How can we grow our prosperity through automation while maintaining people’s resources and purpose?
  • How can we update our legal systems to be more fair and efficient, to keep pace with AI, and to manage the risks associated with AI?
  • What set of values should AI be aligned with, and what legal and ethical status should it have?

On the ethical side, the principles emphasize respect for human values. "AI systems should be designed and operated so as to be compatible with ideals of human dignity, rights, freedoms, and cultural diversity," states principle no. 11. They also make clear that humans must retain control over AI: "Humans should choose how and whether to delegate decisions to AI systems, to accomplish human-chosen objectives" (principle no. 16). Additionally, the power that comes with advanced AI should never be used to subvert the social and civic processes on which society depends (principle no. 17).

As an overarching, long-term principle, the group stresses that AI should always work for the common good. Their final point puts it well: "Superintelligence should only be developed in the service of widely shared ethical ideals, and for the benefit of all humanity rather than one state or organization" (no. 23).

 

