There are fears that tend to come up when people talk about futuristic artificial intelligence: say, one that could teach itself and become more advanced than anything we humans might be able to comprehend. In the wrong hands, perhaps even on its own, such an advanced algorithm might dominate the world's governments and militaries, impose Orwellian levels of surveillance, manipulation, and social control over societies, and perhaps even command entire battlefields of autonomous lethal weapons such as military drones.

But some artificial intelligence experts don't think those fears are well-founded. In fact, they argue, a highly advanced artificial intelligence could be better at managing the world than humans have been. The fears themselves are the real danger, because they may hold us back from making that potential a reality.

“Maybe not achieving AI is the danger for humanity,” Tomas Mikolov, a research scientist at Facebook AI, said at the Joint Multi-Conference on Human-Level Artificial Intelligence, organized by GoodAI, in Prague on Saturday.

As a species, Mikolov explained, humans are pretty terrible at making choices that are good for us in the long term. People have carved away rainforests and other ecosystems to harvest raw materials, unaware of (or uninterested in) how they were contributing to the slow, possibly irreversible degradation of the planet as a whole.

But a sophisticated artificial intelligence system might be able to protect humanity from its own shortsightedness.

“We as humans are very bad at making predictions of what will happen in some distant timeline, maybe 20 to 30 years from now,” Mikolov added. “Maybe making AI that is much smarter than our own, in some sort of symbiotic relationship, can help us avoid some future disasters.”

Granted, Mikolov may be in the minority in thinking a superior AI entity would be benevolent. Throughout the conference, many other speakers voiced the more common fears, mostly about AI being used for dangerous purposes or misused by malicious human actors. And we shouldn't laugh off or downplay those concerns.

We don't know for sure whether it will ever be possible to create artificial general intelligence, often considered the holy grail of the field: a sophisticated AI capable of doing pretty much any cognitive task humans can, maybe even doing it better.

The future of advanced artificial intelligence is promising, but it comes with a lot of ethical questions, and we probably don't yet know all the questions we'll have to answer.

But most of the panelists at the HLAI conference agreed that we need to decide on the rules before we need them. The time to create international agreements, ethics boards, and regulatory bodies across governments, private companies, and academia? It's now. Putting these institutions and protocols in place would reduce the odds that a hostile government, unwitting researcher, or even a cackling mad scientist could unleash a malicious AI system or otherwise weaponize advanced algorithms. And if something nasty did get out there, these safeguards would ensure we'd have ways to handle it.

With these rules and safeguards in place, we will be much more likely to usher in a future in which advanced AI systems live harmoniously with us, or perhaps even save us from ourselves.
