Skype co-founder Jaan Tallinn is on a mission to ensure an artificial intelligence doesn’t destroy humanity.
According to a fascinating Popular Science story, the programmer discovered AI researcher Eliezer Yudkowsky’s essay “Staring into the Singularity” in 2007, two years after cashing in his Skype shares following the startup’s sale to eBay.
Since then, he’s dedicated more than $1 million toward preventing super-smart AIs from replacing humans as Earth’s dominant species — or simply destroying humanity altogether.
Based on PopSci‘s reporting, Tallinn isn’t afraid to put big money behind organizations attempting to prevent an AI takeover.
So far, he’s given more than $600,000 to the Machine Intelligence Research Institute, the nonprofit where Yudkowsky is a research fellow. He’s also given $310,000 to the University of Oxford’s Future of Humanity Institute, which PopSci quotes him as calling “the most interesting place in the universe.”
He’s even invested $200,000 to outright co-found his own institution, the Cambridge Centre for the Study of Existential Risk.
Preventing an AI takeover won’t be easy — we have to worry not only about deliberately nefarious uses of AI, but also about the tech’s unintended consequences.
Still, Tallinn insists that we have no choice but to try to push AI in a direction that benefits humanity — and closes off the possibility of destroying it.
“We have to think a few steps ahead,” he told PopSci. “Creating an AI that doesn’t share our interests would be a horrible mistake.”
READ MORE: Can AI escape our control and destroy us? [Popular Science]