Primarily by preventing a "winner takes all" scenario for whoever figures it out.
Race To The Bottom
As tech corporations like Facebook and Google, along with government research agencies like the Pentagon's DARPA and Russia's Advanced Research Foundation, race to create super-powered artificial general intelligence, precautions must be taken to prevent that super-intelligent AI from crushing humanity.
That's the crux of a new essay in The Conversation penned by Wim Naudé, a professorial fellow at the United Nations University. Naudé argues that governments need to step in, both to make sure artificial general intelligence benefits the public at large — and to make sure nobody unleashes a malicious algorithm on humanity.
In the essay, Naudé cites his research from April, in which he investigated which government policies might be most beneficial for a society faced with superhuman AI.
He also has some policy suggestions. First, governments should make a standing offer to buy imperfect-but-still-powerful algorithms. Naudé believes that by creating a second-place prize, competing firms might be more likely to collaborate and share their knowledge instead of hiding it behind locked doors.
But that would only address the concentration of the wealth generated by this AI at the top. To guard against the algorithm itself turning out to be malicious, Naudé suggests that governments could tax whatever company develops it at a rate tied to how friendly the AI is.
"A high enough tax rate would essentially mean the nationalization of the super-AI," Naudé writes. "This would strongly discourage private firms from cutting corners for fear of losing their product to the state."
READ MORE: Singularity: how governments can halt the rise of unfriendly, unstoppable super-AI [The Conversation]
More on dangerous AI: Should Evil AI Research Be Published? Five Experts Weigh In.