Artificial Intelligence

OpenAI Wants to Make Safe AI, but That May Be an Impossible Task

There may be no way to protect ourselves from AI.

Jolene Creighton | March 15, 2018

True artificial intelligence is on its way, and we aren’t ready for it. Just as our forefathers had trouble visualizing everything from the modern car to the birth of the computer, it’s difficult for most people to imagine how much truly intelligent technology could change our lives as soon as the next decade — and how much we stand to lose if AI goes out of our control.

Fortunately, there’s a league of individuals working to ensure that the birth of artificial intelligence isn’t the death of humanity. From Max Tegmark’s Future of Life Institute to the Harvard Kennedy School of Government’s Future Society, the world’s most renowned experts are joining forces to tackle one of the most disruptive technological advancements (and greatest threats) humanity will ever face.

Perhaps the most famous organization to be born from this existential threat is OpenAI. It’s backed by some of the most respected names in the industry: Elon Musk, the SpaceX billionaire who co-founded OpenAI but departed its board this year to avoid conflicts of interest with Tesla; Sam Altman, the president of Y Combinator; and Peter Thiel, of PayPal fame, to name a few. If anyone has a chance at securing the future of humanity, it’s OpenAI.

But there’s a problem. When it comes to creating safe AI and regulating this technology, these great minds have little clue what they’re doing. They don’t even know where to begin.

The Dawn of a New Battle

While traveling in Dubai, I met with Michael Page, the Policy and Ethics Advisor at OpenAI. Beneath the glittering skyscrapers of the self-proclaimed “city of the future,” he told me of the uncertainty that he faces. He spoke of the questions that don’t have answers, and the fantastically high price we’ll pay if we don’t find them.

The conversation began when I asked Page about his role at OpenAI. He responded that his job is to “look at the long-term policy implications of advanced AI.” If you think that this seems a little intangible and poorly defined, you aren’t the only one. I asked Page what that means, practically speaking. He was frank in his answer: “I’m still trying to figure that out.” 


Page attempted to paint a clearer picture of the current state of affairs by noting that, since true artificial intelligence doesn’t actually exist yet, his job is a little more difficult than most.

He noted that, when policy experts consider how to protect the world from AI, they are really trying to predict the future. They are trying to, as he put it, “find the failure modes … find if there are courses that we could take today that might put us in a position that we can’t get out of.” In short, these policy experts are trying to safeguard the world of tomorrow by anticipating issues and acting today. The problem is that they may be faced with an impossible task.

Page is fully aware of this uncomfortable possibility, and readily admits it. “I want to figure out what can we do today, if anything. It could be that the future is so uncertain there’s nothing we can do,” he said.

Our problems don’t stop there. It’s also possible that we’ll figure out what we need to do in order to protect ourselves from AI’s threats, and realize that we simply can’t do it. “It could be that, although we can predict the future, there’s not much we can do because the technology is too immature,” Page said.

This lack of clarity isn’t really surprising, given how young this field is. We are still at the beginning, and so all we have are predictions and questions. Page and his colleagues are still trying to articulate the problem they’re trying to solve, figure out which skills we need to bring to the table, and determine which policy makers need to be involved.

As such, when asked for a concrete prediction of where humanity and AI will together be in a year, or in five years, Page didn’t offer false hope: “I have no idea,” he said.

However, Page and OpenAI aren’t working toward these solutions alone, and he hopes answers may be forthcoming: “Hopefully, in a year, I’ll have an answer. Hopefully, in five years, there will be thousands of people thinking about this,” Page said.

Well then, perhaps it’s about time we all put our thinking caps on.
