Futurism

Scientists: It’d Be Impossible to Control Superintelligent AI

by Dan Robitzski
Jan 12

It'd blow away our best defenses like children's toys.

Runaway AI

Scientists at the Max Planck Society, a storied European research institution, say we would never be able to control a superintelligent artificial intelligence capable of saving or destroying humanity.

That’s according to research published last week in the Journal of Artificial Intelligence Research. The problem, the Max Planck scientists say, is that there’s no way to contain such an algorithm without technology far more advanced than what we can build today.

Internal Investigation

The team primarily focused on the issue of restraint. If an all-powerful algorithm somehow determined that it ought to hurt people or, in a more “Terminator”-esque fashion, end humanity altogether, how would we prevent it from acting?

They considered a sort of “containment algorithm” that simulates the dangerous algorithm’s behavior and blocks it from doing anything harmful — but because such a simulation would have to be at least as powerful as the system it polices, the scientists concluded the problem is impossible to solve.
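The shape of the argument can be made concrete with a toy sketch. Every name below, including the “harm” action and the step budget, is an illustrative assumption, not anything from the paper — a containment routine can only check a program by simulating it, one step at a time:

```python
def contain(agent_step, world, max_steps):
    # Toy "containment algorithm" in the spirit of the one the article
    # describes: simulate the agent step by step and block it the moment
    # a harmful action appears. The paper argues no such checker can
    # work in general; this sketch only shows where it breaks down.
    for _ in range(max_steps):
        action = agent_step(world)
        if action == "harm":
            return "blocked"
        world = world + [action]
    return "undecided"  # budget exhausted without observing harm
```

The catch is `max_steps`: drop the bound, and the question “does this agent ever act harmfully?” becomes equivalent to Turing’s halting problem, which is undecidable. That computability-theory limit, not a shortage of computing power, is the core of the impossibility claim.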


Theoretical Argument

This is all a theoretical debate. AI advanced enough to menace humankind is probably still a long way away, but very smart people are working hard on it. That makes it the perfect topic to debate in advance, of course — we’d want to know the peril before it arrives.

“A super-intelligent machine that controls the world sounds like science fiction,” study coauthor Manuel Cebrian, Leader of the Digital Mobilization Group at the Center for Humans and Machines of the Max Planck Institute for Human Development, said in a press release. “But there are already machines that perform certain important tasks independently without programmers fully understanding how they learned it. The question therefore arises whether this could at some point become uncontrollable and dangerous for humanity.”

READ MORE: We wouldn’t be able to control superintelligent machines [Max Planck Society]

More on AI: Should Evil AI Research Be Published? Five Experts Weigh In.





Copyright © Singularity Education Group. All Rights Reserved.