Rising Up to the Task

2016 was a year of many firsts in artificial intelligence (AI). It was the year that saw major breakthroughs in core AI technologies such as computer vision, deep learning, and artificial neural networks. So much so, in fact, that naysayers have been predicting that we are leading ourselves ever closer to an end-of-humankind-as-we-know-it event triggered by the singularity.

The warnings are exaggerated, of course, and are rooted in science fiction (SkyNet isn't coming, guys, c'mon). But it doesn't hurt to be prepared, or at least to influence the direction AI research can or should take.

A new enterprise is rising to the challenge: the Ethics and Governance of Artificial Intelligence Fund. Backed by eBay founder Pierre Omidyar and LinkedIn co-founder Reid Hoffman, together with the Knight Foundation, the fund's goal is "to support work around the world that advances the development of ethical AI in the public interest, with an emphasis on applied research and education."

At its launch on January 10, the fund had already received an initial investment of $27 million, with Hoffman and Omidyar each committing $10 million through their respective foundations and the Knight Foundation contributing $5 million. Other early investors include the William and Flora Hewlett Foundation and Raptor Group founder Jim Pallotta, each adding $1 million to the fund.

The AI fund will be housed at The Miami Foundation, with the MIT Media Lab and Harvard's Berkman Klein Center as the anchor institutions.


AI Changing the World

With the government slow on the uptake – despite position papers from the White House itself and a Senate hearing on the subject – it's a good sign that private companies and institutions are taking on AI. The IEEE even released what can be considered the first 'rulebook' for ethical AI systems.

Hence the AI fund. "Because of this pervasive but often concealed impact, it is imperative that AI research and development be shaped by a broad range of voices — not only by engineers and corporations, but also by social scientists, ethicists, philosophers, faith leaders, economists, lawyers, and policymakers," the group writes.

AI's impact is most visible in those instances when policy fails to keep pace with research, as in the case of Uber's autonomous test run in San Francisco. As such, the initiative seeks to support activities that keep human issues at the forefront of AI research and maximize the technology's benefits. Specifically, it addresses the following:

  • Communicating complexity: How do we best communicate, through words and processes, the nuances of a complex field like AI?
  • Ethical design: How do we build and design technologies that consider ethical frameworks and moral values as central features of technological innovation?
  • Advancing accountable and fair AI: What kinds of controls do we need to minimize AI’s potential harm to society and maximize its benefits?
  • Innovation in the public interest: How do we maintain the ability of engineers and entrepreneurs to innovate, create and profit, while ensuring that society is informed and that the work integrates public interest perspectives?
  • Expanding the table: How do we grow the field to ensure that a range of constituencies are involved with building the tools and analyzing social impact?

"There's an urgency to ensure that AI benefits society and minimizes harm," Hoffman explained. The AI Fund certainly isn't the first partnership that aims to establish guideposts for AI research. There's the Partnership on AI, which Google and Microsoft are a part of. Then there's OpenAI, the research nonprofit backed by Elon Musk and Sam Altman.

As Jonathan Zittrain, co-founder of the Berkman Klein Center, said: "A lot of our work in this area will be to identify and cultivate technologies and practices that promote human autonomy and dignity rather than diminish it."
