AI Ethics?

Contrary to fictional portrayals of humans being herded like animals by artificially intelligent (AI) machines, realistic concerns about AI are far more mundane. They are still important, though, and conversations need to begin now about setting ground rules.

That's why the big players are stepping up. Researchers and scientists from Google, Amazon, Microsoft, IBM, and Facebook have been meeting to discuss the future implications of AI for humanity.

While no hard details about the group's policies, objectives, or even its name have emerged, insiders have stated the group's intention: to ensure that AI research focuses on benefiting people, not harming them. That's according to the New York Times.

Self-Policing

While no evil Skynet-esque boogeymen have revealed themselves, the rapid pace of AI development has far outrun regulation, or even serious ethics discussions on the topic. IBM's Watson has already demonstrated its ability to make movie trailers and beat humans at their own game. AI is the brains behind all self-driving tech, and militaries all over the world are increasingly automating their operations.

A report published by Stanford stresses the need for industry efforts such as this team-up to head off potential AI problems in the future. The Stanford group aims to check in and report on the status of AI every five years for the next century. These meetings and discussions could form the framework for self-policing cooperation.

Such an organization could be key to working out the problems of AI, since each company is potentially pulling the technology in a different direction. Bringing in government regulators now would be ineffective at best, and could stifle innovation at worst.

“We’re not saying that there should be no regulation,” said Peter Stone, one of the authors of the Stanford report, to the New York Times. “We’re saying that there is a right way and a wrong way.”

