
One Chart That Explains How to Keep Artificial Intelligence Safe

It’s no secret that many technologists and futurists are worried that the advent of “strong artificial intelligence,” machine intelligence that can replicate and improve itself, might pose an existential risk to the future of humanity.

Alexey Turchin and the Institute for Ethics and Emerging Technologies are worried about strong AI as well, and they want to make sure researchers build the right safeguards into it. Turchin has put together a chart that summarizes nearly every approach to creating safe AI that we’re currently aware of.

Here’s the chart:

[Chart: Turchin’s roadmap of AI safety solutions]

Generally, Turchin has divided the solutions into two categories: simple and complex. The simple solutions can be implemented today (“don’t create AI,” or “implement short rulesets like Asimov’s three laws”), but they’re generally less robust than the complex solutions, which we may not yet have the technology to pull off.
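To make the “simple” category concrete, here is a purely illustrative sketch, not taken from Turchin’s roadmap, of what a hardcoded Asimov-style ruleset might look like as a pre-action filter. Every name in it is hypothetical.

```python
# Purely illustrative sketch (not from Turchin's roadmap): a hardcoded,
# Asimov-style ruleset applied as a pre-action filter. All names are hypothetical.
from dataclasses import dataclass


@dataclass
class ProposedAction:
    description: str
    harms_human: bool = False
    disobeys_human_order: bool = False
    endangers_self: bool = False


# Each rule is a label plus a check that must pass for the action to be allowed.
RULES = [
    ("Rule 1: may not injure a human being", lambda a: not a.harms_human),
    ("Rule 2: must obey orders given by humans", lambda a: not a.disobeys_human_order),
    ("Rule 3: must protect its own existence", lambda a: not a.endangers_self),
]


def is_permitted(action: ProposedAction) -> bool:
    """Block any proposed action that violates any rule in the hardcoded list."""
    return all(check(action) for _, check in RULES)


if __name__ == "__main__":
    print(is_permitted(ProposedAction("fetch coffee")))                          # True
    print(is_permitted(ProposedAction("shove a pedestrian", harms_human=True)))  # False
```

Even in this toy form the limitation is visible: simple boolean checks can’t express conflicts between the rules (say, an order that can only be obeyed by harming someone), which is part of why such rulesets sit in the weaker, “simple” half of the chart.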

Turchin has offered a $50 bounty to anyone who can suggest a new way of “X-risk prevention that is not already mentioned in this roadmap.”
