Is that... is that a lot?

Extremely Bad Outcome

According to a new survey of 2,778 AI researchers, there's a non-negligible risk of artificial intelligence triggering human extinction.

Just over half of the AI researchers surveyed say there's a five percent chance that humans will be driven to extinction, among other "extremely bad outcomes."

The average respondent, for instance, estimated a 10 percent chance that machines could outperform humans in "every possible task" by 2027 — and a 50 percent chance they'd do so by 2047.

But it's not all doom and gloom: 68.3 percent of respondents said that "good outcomes from superhuman AI" are more likely than bad ones.

Most of all, the survey highlights the sheer uncertainty among researchers, with broad disagreement about whether AI progress should be sped up or slowed down.

Numbers Game

The five percent figure is nonetheless telling, pointing to a significant perceived danger.

"It’s an important signal that most AI researchers don’t find it strongly implausible that advanced AI destroys humanity," author Katja Grace at the Machine Intelligence Research Institute in California, told New Scientist. "I think this general belief in a non-minuscule risk is much more telling than the exact percentage risk."

As the survey notes, "forecasting is difficult in general, and subject-matter experts have been observed to perform poorly."

"Our participants’ expertise is in AI, and they do not, to our knowledge, have any unusual skill at forecasting in general," the paper continues.

Educated Guesses

But that doesn't mean their predictions should be dismissed.

"While unreliable, educated guesses are what we must all rely on, and theirs are informed by expertise in the relevant field," the researchers write. "These forecasts should be part of a broader set of evidence from sources such as trends in computer hardware, advancements in AI capabilities, economic analyses, and insights from forecasting experts."

In the short term, rather than a dystopian extinction event triggered by a malicious AI, the vast majority of AI researchers surveyed warned about more immediate harms: deepfakes, manipulation of public opinion, the creation of dangerous viruses, and AI systems that allow individuals to prosper at the expense of others.

And given the upcoming US presidential election, all eyes will be on AI and its unnerving capability to distort the truth in a believable way.

Updated to correct a statistical error in the third paragraph.

More on AI: Police Say AI-Generated Article About Local Murder Is "Entirely" Made Up

