In the wrong hands, superhuman AI "could kill everyone."

Superhuman AI

Researchers from Oxford University have warned UK lawmakers that "superhuman AI" could end up being at least as dangerous as nuclear weapons and should therefore be regulated like them, The Telegraph reports.

The experts told MPs on Parliament's Science and Technology Select Committee about the dangers of unregulated artificial intelligence technologies — and they didn't exactly beat around the bush.

"With superhuman AI there is a particular risk that is of a different sort of class, which is, well, it could kill everyone," doctoral student Michael Cohen told MPs, as quoted by The Telegraph.

AI Apocalypse

These risks are indeed pressing, with political powers increasingly trying to one-up each other in the field of AI.

Michael Osborne, a professor of machine learning at the University of Oxford who also addressed the committee, warned of a "massive AI arms race" between the US and China, both of which are willing "to throw safety and caution out the window and race as fast as possible to the most advanced AI."

"There are some reasons for hope in that we have been pretty good at regulating the use of nuclear weapons," Osborne said. "If we were able to gain an understanding that advanced AI is as comparable a danger as nuclear weapons, then perhaps we could arrive at similar frameworks for governing it."

Treat Yourself

Training AIs to achieve a milestone or reap a reward could be particularly dangerous, Cohen said.

"If you imagine training a dog with treats it will learn to pick actions that lead to it getting treats," Cohen told lawmakers, "but if the dog finds the treat cupboard it can get the treats itself without doing what we wanted it to do."

In other words, a superhuman AI runs the risk of directing "as much energy as it could to securing its hold" on a reward, which "would leave us without any energy for ourselves."

As these technologies progress, we need to retain the ability to "pull the plug" should they ever become "much smarter than us across every domain," Cohen argued.

And we're only scratching the surface. Cohen and Osborne predicted that AIs more capable than humans could emerge as early as the end of the century, an outcome they believe the right regulation could still prevent.

READ MORE: Advanced AI 'could kill everyone', warn Oxford researchers [The Telegraph]

More on AI: ChatGPT Shamelessly Writes Letter Announcing Layoffs While Promoting Execs and Quoting MLK

