Susan Schneider is a fellow at the Institute for Ethics and Emerging Technologies (IEET). She is also an associate professor of philosophy at the University of Connecticut, and her expertise includes the philosophy of cognitive science, particularly with regard to the plausibility of computational theories of mind and theoretical issues in artificial intelligence (AI).
In short, Schneider has a keen understanding of the intersection between science and philosophy. As such, she also has a unique perspective on AI, offering a fresh (but quite alarming) view on how artificial intelligence could forever alter humanity’s existence. In an article published by the IEET, she shares that perspective, talking about potential flaws in the way we view AI and suggesting a possible connection between AI and extraterrestrial life.
The bridge Schneider uses to make this connection is the idea of "postbiological" life. In the article, she explains that "postbiological" refers either to the eventual form of existence humanity will take or to the AI-emergent lifeforms that would replace our existence altogether. In other words, it could be something like superintelligent humans enhanced through nanotechnology, or it could be an artificially intelligent supercomputer.
Whatever form postbiological life takes, Schneider posits that the transition we’re currently experiencing is one that may have happened previously on other planets:
The technological developments we are witnessing today may have all happened before, elsewhere in the universe. The transition from biological to synthetic intelligence may be a general pattern, instantiated over and over, throughout the cosmos. The universe’s greatest intelligences may be postbiological, having grown out of civilizations that were once biological.
In light of that, Schneider asks the following: “Suppose that intelligent life out there is postbiological. What should we make of this?”
There isn’t any guarantee that we will be able to “control” AI on Earth once it becomes superintelligent, even with multi-million-dollar efforts devoted to AI safety. “Some of the finest minds in computer science are working on this problem,” Schneider writes. “They will hopefully create safe systems, but many worry that the control problem is insurmountable.”
If artificially intelligent postbiological life exists elsewhere in our universe, it’s a major cause for concern for a number of reasons. “[Postbiological extraterrestrial life] may have goals that conflict with those of biological life, have at its disposal vastly superior intellectual abilities, and be far more durable than biological life,” Schneider argues. These lifeforms also might not place the same value on biological intelligence that we do, and they may not even be conscious in the same manner that we are.
Schneider draws a comparison between how we feel about killing a chimp versus eating an apple. Both are technically living organisms, but because we have consciousness, we place a higher value on other species that have it as well. If superintelligent, postbiological extraterrestrials don’t have consciousness, can we expect them to understand us? More importantly, would they value us at all? Food for thought for any proponents of active SETI.