"The only way to win this game is not to play it."

Foregone Conclusion

In a survey published earlier this year, just over half of the 2,778 AI researchers polled said there's at least a five percent chance that humans will be driven to extinction or face other "extremely bad outcomes."

At the same time, 68.3 percent of respondents said that "good outcomes from superhuman AI" are more likely than bad ones, showing there's little consensus on the topic among experts.

Some are extraordinarily negative. Take AI researcher and University of Louisville associate professor of computer science Roman Yampolskiy, who's squarely in the doomer camp. In a recent episode of Lex Fridman's podcast, he predicted that there's (get this) a 99.9 percent chance that AI will wipe out humanity within the next 100 years.

"If we create general superintelligences, I don't see a good outcome long term for humanity," Yampolskiy told Fridman. "The only way to win this game is not to play it."

AI Doomerism

It's an unusually alarming take on the perceived risks of developing AI technologies, with Yampolskiy pointing to the chaos that existing large language models have already caused.

"They already have made mistakes," he said. "We had accidents, they've been jailbroken. I don't think there is a single large language model today, which no one was successful at making do something developers didn't intend it to do."

To Yampolskiy, it's a threat we can't even imagine yet.

"Superintelligence will come up with something completely new, completely super," he told Fridman. "We may not even recognize that as a possible path to achieve" the goal of ending everyone.

The chances of AI doing just that may not reach a full 100 percent, but they could get pretty close, Yampolskiy argued.

"We can put more resources exponentially and get closer but we never get to 100 percent," he said. "If a system makes a billion decisions a second and you use it for 100 years, you're still going to deal with a problem."

The topic has already drawn a number of high-profile members of the AI community into making predictions of their own. Meta's chief AI scientist Yann LeCun, a so-called "godfather" of the tech, Google's head of AI in the United Kingdom Demis Hassabis, and ex-Google CEO Eric Schmidt have all weighed in on whether AI tech could pose an existential risk.

But as the survey from earlier this year demonstrates, top minds are far from reaching a consensus. The science of forecasting where our current obsession with AI will lead us in the distant future is also still in its infancy.

In short, we should take Yampolskiy's comments with a healthy grain of salt. Our days as a species on Earth aren't numbered just yet. Besides, our planet has plenty of other existential threats to reckon with.

More on AI: State Department Report Warns of AI Apocalypse, Suggests Limiting Compute Power Allowed for Training

