In Brief
  • In a video posted by Big Think, AI expert Michael Vassar shares his belief that artificial super-intelligence will wipe out humanity if we do not approach AI with significant caution.
  • Vassar warns that we must find ways to quickly promote analytically sound discoveries from those who lack academic prestige if we want to stay one step ahead of AI.

Hearing All Voices

In 2012, Michael Vassar became the chief science officer of MetaMed Research, which he co-founded, and prior to that, he served as the president of the Machine Intelligence Research Institute. Clearly, he knows a thing or two about artificial intelligence (AI), and now, he has come out with a stark warning for humanity when it comes to the development of artificial super-intelligence.

In a video posted by Big Think, Vassar states, “If greater-than-human artificial general intelligence is invented without due caution, it is all but certain that the human species will be extinct in very short order.” Essentially, he is warning that an unchecked AI could eradicate humanity in the future.

Vassar’s views are based on the writings of Nick Bostrom, specifically those found in his book “Superintelligence.” Bostrom’s ideas have been around for decades, but they are only now gaining traction, thanks to his association with prestigious institutions. Vassar sees this lack of early attention, and not AI itself, as the biggest threat to humanity. He argues that we need to find a way to promote “analytically sound” discoveries from those who lack the prestige currently necessary for their ideas to be heard.

The Threat of AI

[Infographic: Types of AI]

Many tech giants have spoken extensively about their fears regarding the development of AI. Elon Musk believes that an AI attack on the internet is “only a matter of time,” while Stephen Hawking has called the creation of AI “the best or worst thing to happen to humanity.”

Bryan Johnson’s company Kernel is currently working on a neuroprosthesis that can mimic, repair, and improve human cognition. If it comes to fruition, that tech could be a solid defense against the worst-case scenario of an AI going completely rogue. If we are able to upgrade our brains to match the capabilities expected of AI, we may be able to at least stay on par with the machines.
