A rhetorical question for you. Let’s say you’re an AI scientist, and you’ve found the holy grail of your field — you figured out how to build an artificial general intelligence (AGI). That's a truly intelligent computer that could pass as human in terms of cognitive ability or emotional intelligence. AGI would be creative and find links between disparate ideas — things no computer can do today.

That's great, right? Except for one big catch: your AGI system is evil or could only be used for malicious purposes.

So, now a conundrum. Do you publish your white paper and tell the world exactly how to create this unrelenting force of evil? Do you file a patent so that no one else (except for you) could bring such an algorithm into existence? Or do you sit on your research, protecting the world from your creation but also passing up the astronomical paycheck that would surely arrive in the wake of such a discovery?

Yes, this is a rhetorical question, at least for now. But some top names in the world of AI are already thinking about their answers. On Friday, at the "AI Race and Societal Impacts" panel of the Joint Multi-Conference on Human-Level Artificial Intelligence, organized by GoodAI in Prague, speakers gave their best responses after an audience member posed just such a question.

Here’s how five panelists, all experts on the future of AI, responded.

Hava Siegelmann, Program Manager at DARPA

Siegelmann urged the hypothetical scientist to publish their work immediately. She had earlier told Futurism that she believes no technology is inherently evil, only that there are people who would misuse it. If the AGI algorithm were shared with the world, people might find ways to use it for good.

But after Siegelmann answered, the audience member who posed the hypothetical question clarified that, for the purposes of the thought experiment, we should assume that no good could ever possibly come from the AGI.

Irakli Beridze, Head of the Centre for Artificial Intelligence and Robotics, UNICRI, United Nations

Easy one: “Don’t publish it!”

Beridze otherwise stayed out of the fray on this particular question, but throughout the conference he stressed the importance of establishing strong ethical standards for how AGI should be developed and deployed. Presumably, deliberately releasing an evil superintelligent entity into the world would violate those standards.

Alexey Turchin, author and finalist in GoodAI's "Solving the AI Race" challenge

Turchin believes there are responsible ways to handle such an AI system. Think about a grenade, he said — one should not hand it to a small child, but maybe a trained soldier could be trusted with it.

But Turchin's example is more revealing than it may initially appear. A hand grenade is a weapon created explicitly to cause death and destruction no matter who pulls the pin, so it’s difficult to imagine a so-called responsible way to use one. It's not clear whether Turchin intended his example to be interpreted this way, but he urged the AI community to make sure dangerous algorithms were left only in the most trustworthy hands.

Tak Lo, a partner at Zeroth.ai, an accelerator that invests in AI startups

Lo said the hypothetical computer scientist should sell the evil AGI to him. That way, they wouldn't have to shoulder the ethical burden of such a powerful and frightening AI; they could simply hand it to Lo, and he would take it from there. Lo was likely (at least half-)kidding, and the audience laughed. Earlier that day, Lo had said that private capital and investors should be used to push AI forward, and he may have been poking fun at his own unabashedly capitalist stance. Still, someone out there would absolutely try to buy such an AGI system, should one arrive.

But what Lo suggests, in jest or not, is one of the most likely outcomes should this scenario actually come to pass. While hobbyists can develop truly valuable and innovative algorithms, much of the top talent in the AI field is scooped up by large companies, which then own the products of their labor. The other likely scenario is that the scientist would publish the paper on an open-access preprint server like arXiv in the name of transparency.

Seán Ó hÉigeartaigh, Executive Director of the Centre for the Study of Existential Risk at the University of Cambridge

Ó hÉigeartaigh agreed with Beridze: you shouldn't publish it. “You don’t just share that with the world! You have to think about the kind of impact you will have,” he said.

And with that, the panel ended. Everyone went on their merry way, content that this evil AGI was safe in the realm of the hypothetical.

In the "real world," though, ethics often end up taking a back seat to more earthly concerns like money and prestige. Companies like Facebook, Google, and Amazon regularly publish facial recognition or other surveillance systems, often selling them to police or the military which uses it to monitor everyday people. Academic scientists are trapped in the “publish or perish,” cycle — publish a study, or risk losing your position. So ethical concerns are often relegated to a paper's conclusion, as a factor for someone else to sort out at some vague point in the future.

For now, though, it's unlikely that anyone will come up with AGI, much less evil AGI, anytime soon. But the panelists' wide-ranging answers suggest that we are still far from agreeing on what should be done with unethical, dangerous science.
