AI Police

Elon Musk, CEO of SpaceX and Tesla Motors, has not been shy about sharing his fears about the progress of artificial intelligence (AI). While directly contributing to AI development, he urges awareness of the risks that true AI could bring, and of the capacity of nefarious individuals to misuse it.

That is why one startup he backs is looking to “police” the development of AI. OpenAI, a non-profit AI research company that will openly share its research, is recruiting researchers to develop AI and to monitor the research progress of other tech giants.

Starting Points

The company is looking at actively tracking negative developments in AI, or AI falling into the wrong hands. While the company does not yet have a definite plan for how to do this, according to Ian Goodfellow, who joins OpenAI from Google, it is starting with areas where AI is already deployed.

Of particular note are AI deployments in financial markets and online news. Goodfellow says that AI could be exploited by financial firms, where it is used to study the market. “They could make a few trades designed to fool their competitors into dumping a stock at a lower price than its true value,” he tells Wired.

Meanwhile, the company is also looking at the AI algorithms used in social media to surface online news. Facebook previously revealed that it uses undisclosed algorithms to determine which articles appear in the News Feed, sparking a debate over how this shapes public opinion.

“Studying systems like that—systems that already exist—is a good starting point,” says Greg Brockman, who oversees OpenAI.

In fact, OpenAI itself was founded to mitigate, or even prevent, the possibility of rogue AI. The company actively researches artificial intelligence and then makes its findings publicly available. This would allow more platforms to host AI and prevent one or two giants from monopolizing these developments.

