In Brief
Physicist Max Tegmark is optimistic about the future of artificial intelligence and its limitless potential. However, he believes people have a limited view of what AI truly is, and that not enough is being done to ensure we're safe from it.
While there are people on both sides of the conversation regarding artificial intelligence and its impact on the modern world, there is one person who is unequivocally enthusiastic about it: physicist Max Tegmark.
Tegmark, a professor of physics at the Massachusetts Institute of Technology (MIT), sees the development and utilization of AI as the next step in not just our lives, but life in general. His new book, Life 3.0: Being Human in the Age of Artificial Intelligence, explores how AI will change aspects of war, poverty, law, society as a whole, and more.
Speaking with The Verge, Tegmark elaborated on what “Life 3.0” means, explaining how he views life to be “any process that can retain its complexity and reproduce.” Life 1.0 was bacteria; Life 2.0 is humanity. Life 3.0, however, is life without evolutionary limitations; a form of life that’s capable of shaping and improving its software (mind) and hardware (body). Or: artificial intelligence.
“Put another way, if we create AI which is at least as smart as us, then it can not only design its own software to make itself learn new things, but it can also swap in upgraded memory to remember a million times more stuff, or get more computing power,” said Tegmark.
What are the Limitations?
Before we can get to a future enhanced by AI, people’s perceptions about it have to change, specifically what they think it is and what it can do. When most people think about intelligence, they’re probably thinking about an organic creature’s intelligence, like that of a dog or a housefly; when they think of AI, they’re imagining Skynet from the Terminator franchise, or some other version of an AI working against humanity.
In reality, AI can become so much more. Its potential, like our own intelligence, is almost limitless. Encouragingly, nothing in the laws of physics makes it impossible to design something smarter than ourselves, or reserves intelligence for creatures of flesh and blood.
“I had a lot of fun in the book thinking about what are the ultimate limits of the laws of physics on how smart you can be,” said Tegmark. “The short answer is that it’s sky-high, millions and millions and millions of times above where we are now. We ain’t seen nothing yet. There’s a huge potential for our universe to wake up much more, which I think is an inspiring thought, coming from a cosmology background.”
Living With AI
It’s inevitable that AI will become more advanced and have a greater influence on our lives. Look no further than the work being done by companies like Apple and Microsoft, both of which already have AI in the forms of digital assistants like Siri and Cortana. In June, Apple unveiled HomePod, a Siri-powered smart speaker meant to change how we interact with our homes. Microsoft, meanwhile, hopes to make AI more human by improving its assistants’ ability to hold natural conversations.
Conversely, there are those who have expressed concerns and warnings about AI’s impact on the world. SpaceX CEO Elon Musk called AI one of the biggest threats to civilization, urging governors to put regulations into place. While a few spoke out against the CEO’s claims, saying his focus is on worst-case scenarios, he has continued to push for regulations. Earlier in August, Musk, along with 115 other AI experts, called on the United Nations to act before it’s too late.
Tegmark notes that conversations about AI safety are sorely lacking, especially from governments, which have invested billions of dollars in making AI more powerful.
“No governments of the world have said that AI safety research should be an integral part of their computer science funding,” he said. “It’s like, why would you fund building nuclear reactors without funding nuclear reactor safety? Now we’re funding AI research with no budget in sight for AI safety.”
Musk isn’t the only one taking steps to ensure we’re safe, though. Google’s AI Fight Club was created to train AI systems to better prevent cyberattacks, and the Partnership on AI, formed in 2016, continues to develop standards and ethics around the development of AI.
Now more than ever, there need to be real conversations about artificial intelligence that focus on its benefits and risks. AI is going to change the world more than it already has, and we need to be prepared before it’s too late.