We might be heading down the same path.

BOTtle Neck

If we're alone in the universe, astrophysicist Michael Garrett says it might be because alien civilizations faced the same existential problem we're only just beginning to reckon with: powerful AI.

The advent of an artificial superintelligence (ASI), Garrett proposes in a new paper published in the journal Acta Astronautica, could be preventing the long-term survival of alien civilizations — and perhaps impeding their evolution into space-faring, multi-planetary empires.

It's a hypothesis that might even help answer the Fermi Paradox, which asks why we still haven't detected alien civilizations when our indescribably vast universe is abundant with habitable worlds.

"Could AI be the universe's 'great filter' — a threshold so hard to overcome that it prevents most life from evolving into space-faring civilizations?" Garrett, who is the Sir Bernard Lovell chair of Astrophysics at the University of Manchester, wrote in an essay for The Conversation.

Warmongers

According to Garrett, an ASI would not only be smarter than humans, but would "enhance its own capabilities at a speed that outpaces our own evolutionary timelines without AI."

And therein lie the "enormous" risks. If AI systems gain control of military capabilities, for example, the wars they wage could destroy our entire civilization.

"In this scenario, I estimate the typical longevity of a technological civilization might be less than 100 years," Garrett wrote.

"That's roughly the time between being able to receive and broadcast signals between the stars (1960), and the estimated emergence of ASI (2040) on Earth," he added. "This is alarmingly short when set against the cosmic timescale of billions of years."

Military Mishaps

To be sure, Garrett's proposal is just one potential "great filter" answer to the Fermi Paradox. It could also simply be that the universe is too vast, intelligence too rare, or the timescales too staggering for civilizations ever to encounter each other.

But that shouldn't downplay AI's risks, even if right now they still seem relatively tame. Questions abound over the legality of ingesting copyrighted materials, like books and artworks, to train generative AI models. And we're already confronting the technology's environmental impact, as the computers that power it consume staggering amounts of water and electricity.

Those aren't quite the spectacular precursors of a dramatic AI apocalypse, but that could quickly change. To address those risks, Garrett calls for strong regulation of AI's development, especially the technology's integration into military systems, as in Israel's reported use of AI to identify airstrike targets in Gaza.

"There is already evidence that humans will voluntarily relinquish significant power to increasingly capable systems," Garrett said, "because they can carry out useful tasks much more rapidly and effectively without human intervention."

"This means we already edge dangerously close to a precipice where autonomous weapons operate beyond ethical boundaries and sidestep international law."

More on aliens and AI: UK Royal Astronomer Says Alien Life Might Be Mega-Weird AI

