MIT professor and AI researcher Max Tegmark is deeply worried about the potential impact of artificial general intelligence (AGI) on human society. In a new essay for Time, he rings the alarm bells, painting a pretty dire picture of a future determined by an AI that can outsmart us.

"Sadly, I now feel that we're living the movie 'Don't Look Up' for another existential threat: unaligned superintelligence," Tegmark wrote, comparing what he perceives to be a lackadaisical response to a growing AGI threat to director Adam McKay's popular climate change satire.

For those who haven't seen it, "Don't Look Up" is a fictional story about a team of astronomers who, after discovering that a species-destroying asteroid is hurtling towards Earth, set out to warn the rest of human society. But to their surprise and frustration, a massive chunk of humanity doesn't care.

The asteroid is one big metaphor for climate change. But Tegmark thinks that the story can apply to the risk of AGI as well.

"A recent survey showed that half of AI researchers give AI at least a ten percent chance of causing human extinction," the researcher continued. "Since we have such a long history of thinking about this threat and what to do about it, from scientific conferences to Hollywood blockbusters, you might expect that humanity would shift into high gear with a mission to steer AI in a safer direction than out-of-control superintelligence."

"Think again," he added. "Instead, the most influential responses have been a combination of denial, mockery, and resignation so darkly comical that it's deserving of an Oscar."

In short, according to Tegmark, AGI is a very real threat, and human society isn't doing nearly enough to stop it — or, at the very least, isn't ensuring that AGI will be properly aligned with human values and safety.

And just like in McKay's film, humanity has two choices: begin to make serious moves to counter the threat — or, if things go the way of the film, watch our species perish.

Tegmark's claim is pretty provocative, especially considering that many experts either doubt that AGI will ever actually materialize, or argue that it's still a very long way off, if it's coming at all. Tegmark does address this disconnect in his essay, though his counterargument isn't entirely convincing.

"I'm often told that AGI and superintelligence won't happen because it's impossible: human-level intelligence is something mysterious that can only exist in brains," Tegmark writes. "Such carbon chauvinism ignores a core insight from the AI revolution: that intelligence is all about information processing, and it doesn't matter whether the information is processed by carbon atoms in brains or by silicon atoms in computers."

Tegmark goes as far as to claim that superintelligence "isn't a long-term issue," but is even "more short-term than e.g. climate change and most people's retirement planning." To support his claim, the researcher pointed to a recent Microsoft study arguing that OpenAI's large language model GPT-4 is already showing "sparks" of AGI, as well as a recent talk given by deep learning researcher Yoshua Bengio.

While the Microsoft study isn't peer-reviewed and arguably reads more like marketing material, Bengio's warning is much more compelling. His call to action is grounded in what we don't know about the machine learning systems that already exist, rather than in big claims about tech that doesn't yet exist.

To that end, the current crop of less sophisticated AIs already poses real risks, from misinformation-spreading synthetic content to AI-powered weaponry.

And as Tegmark further notes, the industry at large hasn't exactly done an amazing job so far of ensuring slow and safe development; in his view, we shouldn't have taught AI how to code, connected it to the internet, or given it a public API.

Ultimately, whether and when AGI will come to fruition remains unclear.

While there's certainly a financial incentive for the field to keep moving quickly, a lot of experts agree that we should slow down the development of more advanced AIs, regardless of whether AGI is right around the corner or still a distant prospect.

And in the meantime, Tegmark argues that we should agree there's a very real threat in front of us before it's too late.

"Although humanity is racing toward a cliff, we're not there yet, and there's still time for us to slow down, change course and avoid falling off – and instead enjoying the amazing benefits that safe, aligned AI has to offer," Tegmark writes. "This requires agreeing that the cliff actually exists and falling off of it benefits nobody."

"Just look up!" he added.

More on AI: Elon Musk Says He's Building a "Maximum Truth-Seeking AI"