“Fixed to This World”
Lord Martin Rees, Astronomer Royal and Emeritus Professor of Cosmology and Astrophysics at the University of Cambridge, believes that machines could surpass humans within a few hundred years, ushering in eons of machine domination. He also cautions that while we will certainly learn more about the origins of biological life in the coming decades, we should recognize that any alien intelligence we encounter may be electronic.
“Just because there’s life elsewhere doesn’t mean that there is intelligent life,” Lord Rees told The Conversation. “My guess is that if we do detect an alien intelligence, it will be nothing like us. It will be some sort of electronic entity.”
Rees thinks there is a serious risk of a global-scale setback occurring this century, citing the misuse of technology, bioterrorism, population growth, and increasing connectivity as factors that make humans more vulnerable than ever before. While human activities may pose the greatest risk to our survival, the ability of machines to outlast us may be a decisive factor in how life in the universe unfolds.
“If we look into the future, then it’s quite likely that within a few centuries, machines will have taken over—and they will then have billions of years ahead of them,” he explains. “In other words, the period of time occupied by organic intelligence is just a thin sliver between early life and the long era of the machines.”
In contrast to the delicate, specific needs of human life, electronic intelligent life is well-suited to space travel and equipped to outlast many global threats that could exterminate humans.
“[We] are likely to be fixed to this world. We will be able to look deeper and deeper into space, but traveling to worlds beyond our solar system will be a post-human enterprise,” predicts Rees. “The journey times are just too great for mortal minds and bodies. If you’re immortal, however, these distances become far less daunting. That journey will be made by robots, not us.”
Surviving Our Progress
Rees isn’t alone in his ideas. Several notable thinkers, such as Stephen Hawking, agree that artificial intelligence (AI) has the potential to wipe out human civilization. Others, such as Subbarao Kambhampati, president of the Association for the Advancement of Artificial Intelligence, see malicious hacking of AI systems as the greatest threat we face. However, at least as many thinkers disagree, and even Hawking noted the potential benefits of AI.
As we train and educate AIs, shaping them in our own image, we may imbue them with the capacity for emotional attachments that could deter them from wanting to hurt us. There is also evidence that the Singularity might not be a single moment in time but a gradual process that is already under way—meaning we are already adapting alongside AI.
But what if Rees is correct and humans are on track to self-annihilate? If we wipe ourselves out and AI is advanced enough to survive without us, then his predictions will have come true: biological life will have been a relative blip on the historical landscape, and electronic intelligent life will go on to master the universe—but not because AI turned on humans.
Ultimately, the idea of electronic life being uniquely well-suited to survive and thrive throughout the universe isn’t that far-fetched. The question is, will we survive alongside it?