Should the future of our society hinge on a secret? A new report suggests that, in order to ensure the safety of a society increasingly reliant upon artificial intelligence, we need to make sure the technology stays in the hands of a select few. Ultimately, though, that decision could do more harm than good.

In the report, 20 researchers from several future-focused organizations, including OpenAI, Endgame, the Electronic Frontier Foundation, Oxford's Future of Humanity Institute, and the Center for a New American Security, express the fear that, in the wrong hands, AI could cause the downfall of society. In fact, the report outlines several scenarios — like smarter phishing scams, malware epidemics, and robot assassins — that haven't happened yet, but don't seem too far from the realm of possibility.

That's why, they argue, AI's inner workings may need to remain secret — to keep the technology out of bad guys' hands. The report suggests that a regulatory agency, or the AI research community of its own volition, could do this by considering "different openness models." These models would shift the AI research community away from its current push toward transparency, in which publishing algorithms or making them open source is becoming more common. Instead, the researchers recommend abstaining from, or delaying, the publication of AI findings in order to keep them out of the hands of parties with less-than-admirable intentions.

We can probably all agree that it's not such a good idea to clear the path for malicious actors to use AI — a recent spate of cyber attacks across the globe shows that they're certainly out there and willing to use whatever tools are available to them.

But restricting access to AI won't prevent evil-doers from gaining access to it. In fact, it might inhibit people trying to use it for good.

First, the researchers don't say much about just how likely their Black Mirror-esque scenarios are, as U.K. analyst Jon Collins pointed out in a blog post on Gigaom. "We can all conjure disaster scenarios, but it is not until we apply our expertise and experience to assessing the risk, that we can prioritize and (hopefully) mitigate any risks that emerge," Collins wrote.

Plus, artificial intelligence research is already shrouded in secrecy. Companies from Google to Microsoft to Amazon mostly keep their algorithms under the protective blanket of proprietary information. Today, most AI researchers are unable to replicate published AI studies, which makes it hard to establish how scientifically trustworthy those studies are.

And there are compelling reasons to be more, rather than less, open about how AI works. Researchers have discovered that machine learning AIs — trained on human data — have echoed the biases of their creators, such as associating a person's race with more potential for criminal activity or a gender with success in a career.

Experts have suggested that the solution could, in fact, be the opposite of what the authors of this report recommend — that companies should open the "black box" at the core of AI. Letting researchers examine one another's code may be the only way for AI systems to become more just.

More openness about AI algorithms could also make AI systems more secure, less susceptible to hacks or to being reverse-engineered by a terrorist. This might be accomplished by a system of checks and balances, in which an AI is tested by experts on an "Algorithm Safety Board" before being released, the way a drug is tested in clinical trials before it becomes available to the public.

As the report suggests, researchers should probably think twice about making all AI information open by default. That would be a bit like shooting ourselves in the foot in the race against hackers and malicious actors.

But rather than hunker behind proprietary shields, companies could build stronger systems by working on them together. The report also recommends "actively seek[ing] to expand the range of stakeholders and domain experts involved in discussions of these challenges"; AI algorithms themselves could likely benefit from the same collaboration.
