It'd blow away our best defenses like children's toys.
Scientists at the Max Planck Society, a storied European research institution, say humanity will never be able to control a super-intelligent artificial intelligence that could either save or destroy us.
That's according to research published last week in the Journal of Artificial Intelligence Research. The problem, the Max Planck scientists say, is that there's no way to contain such an algorithm without technology far more advanced than what we can build today.
The team primarily focused on the issue of restraint. If an all-powerful algorithm somehow determined that it ought to hurt people or, in a more "Terminator"-esque fashion, end humanity altogether, how would we prevent it from acting?
They considered building a sort of "containment algorithm" that would simulate the dangerous algorithm's behavior and block it from doing anything harmful. But because the containment algorithm would need to be at least as powerful as the algorithm it's monitoring, the scientists concluded that the problem is impossible to solve.
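That impossibility echoes Turing's classic halting problem. A minimal Python sketch (purely illustrative, not the paper's formal construction; the names `make_adversary` and `contains` are invented for this example) shows why any claimed "harm checker" can be defeated by a program built from the checker itself:

```python
def make_adversary(contains):
    """Given any claimed harm-checker `contains(prog) -> bool`
    (True meaning "prog would act harmfully"), build a program
    that does the opposite of whatever the checker predicts."""
    def adversary():
        if contains(adversary):  # checker flags us as harmful...
            return "safe"        # ...so we behave safely
        return "harm"            # checker clears us, so we misbehave
    return adversary

# Whatever verdict a checker gives, the adversary built from it
# proves that verdict wrong:
flags_everything = lambda prog: True
print(make_adversary(flags_everything)())  # prints "safe"
```

No matter how the checker is implemented, the adversary constructed from it always behaves contrary to the checker's prediction, so no perfect containment algorithm can exist.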
This is all a theoretical debate. AI advanced enough to menace humankind is probably still a long way away, but very smart people are working hard on it. That makes it exactly the sort of peril worth debating before it arrives.
"A super-intelligent machine that controls the world sounds like science fiction," study coauthor Manuel Cebrian, Leader of the Digital Mobilization Group at the Center for Humans and Machines of the Max Planck Institute for Human Development, said in a press release. "But there are already machines that perform certain important tasks independently without programmers fully understanding how they learned it. The question therefore arises whether this could at some point become uncontrollable and dangerous for humanity."
READ MORE: We wouldn’t be able to control superintelligent machines [Max Planck Society]
More on AI: Should Evil AI Research Be Published? Five Experts Weigh In.