"The Machines Will Win"

Late Friday night, Elon Musk reignited the debate over AI safety with a tweet. The tongue-in-cheek post showed a gambling addiction ad reading "In the end the machines will win," a line not so subtly applicable to more than just gambling machines. On a more serious note, Musk said that the danger posed by AI is a greater risk than the threat posed by North Korea.

In an accompanying tweet, Musk elaborated on the need for regulation in the development of artificially intelligent systems. This echoes his remarks earlier this month: "I think anything that represents a risk to the public deserves at least insight from the government because one of the mandates of the government is the public well-being."

Judging from the comments on the tweets, most people agree with Musk's assessment, albeit with varying degrees of snark. One user, Daniel Pedraza, stressed the need for adaptability in any regulatory effort: "[We] need a framework that's adaptable - no single fixed set of rules, laws, or principles that will be good for governing AI. [The] field is changing and adapting continually and any fixed set of rules that are incorporated risk being ineffective quite quickly."

Many experts are leery of developing AI too quickly. The possible threats it could pose may sound like science fiction, but they could ultimately prove to be valid concerns.

Hold the Skynet

Experts like Stephen Hawking have long warned about the potential for AI to destroy humanity. In a 2014 interview, the renowned physicist stated that "the development of artificial intelligence could spell the end of the human race." He also sees the proliferation of automation as a detrimental force on the middle class. Michael Vassar, chief science officer of MetaMed Research, put it even more starkly: "If greater-than-human artificial general intelligence is invented without due caution, it is all but certain that the human species will be extinct in very short order."

Within the scientific community, at least, it is clear that unfettered development of AI may not be in humanity's best interest. Efforts are already underway to formulate rules to ensure the development of "ethically aligned" AI. The Institute of Electrical and Electronics Engineers has presented the first draft of guidelines that it hopes will steer developers in the right direction.

"The development of artificial intelligence could spell the end of the human race." — Stephen Hawking

The biggest names in tech are also coming together to self-regulate before government steps in. Researchers and scientists from Google, Amazon, Microsoft, IBM, and Facebook have already initiated discussions to ensure that AI remains a benefit to humanity rather than a threat.

Artificial intelligence has a long way to go before it is advanced enough to pose such a threat, but progress is moving forward by leaps and bounds. Futurist Ray Kurzweil predicts that computers will be smarter than humans by 2045, a threshold known as the Singularity, though he does not think this is anything to fear. Perhaps tech companies' self-policing will be enough to ensure those fears are unfounded, or perhaps the government's hand will ultimately be needed. Either way, it's not too early to begin having these conversations. In the meantime, try not to worry too much. Unless, of course, you're a competitive gamer.