Three years ago, OpenAI cofounder and former chief scientist Ilya Sutskever raised eyebrows when he declared that the era's most advanced neural networks might have already become "slightly conscious."

That flair for hype is on full display at Sutskever's new venture, another AI outfit sporting the unsubtle name of "Safe Superintelligence."

And if you were fretting that OpenAI's business model was suffering from Alice-in-Wonderland logic, Sutskever's new project will have you all the way through the looking glass. As the Financial Times flags, the company just raised another $1 billion, adding to earlier backing from deep-pocketed investors Andreessen Horowitz and Sequoia Capital and bringing its valuation to $30 billion.

The wild thing? It's done all that, including attaining a valuation higher than Warner Bros., Nokia, or the Dow Chemical Company, without offering any product whatsoever. In fact, Sutskever has previously bragged that it never will offer a product until it drops a fully formed superintelligent AI at some unspecified point in the future.

"This company is special in that its first product will be the safe superintelligence, and it will not do anything else up until then," the erstwhile OpenAI-er told Bloomberg back when he first launched the operation. "It will be fully insulated from the outside pressures of having to deal with a large and complicated product and having to be stuck in a competitive rat race."

It's not unusual, of course, for venture capitalists to invest in companies that don't yet have products. But throwing billions at one whose sole purpose is to create something that might not even be achievable within our lifetimes is a stretch, even by VC standards.

Despite Sutskever's much-derided insistence to the contrary, there is little reason to believe that AI researchers are anywhere near creating artificial general intelligence (AGI), much less the type of system that surpasses human cognition. Timelines for reaching AGI are hotly debated, and some experts argue that this "singularity," as some call it, may never be achieved at all, let alone on a schedule that would make investors happy.

As the FT points out, Safe Superintelligence's valuation has ballooned from $5 billion to $30 billion since its launch last June. Over that period, the concept of AGI has loomed ever larger in the popular imagination as OpenAI CEO Sam Altman keeps teasing that his company is on the precipice of achieving it, even though he's done little to back up those claims. (It's worth recalling that Sutskever left OpenAI last summer after a failed coup to oust Altman.)

Safe Superintelligence's website offers no explanation of what will set Sutskever's company apart from others chasing similar goals. Instead, it speaks in familiar AI industry platitudes, boasting that it will "approach safety and capabilities in tandem, as technical problems to be solved through revolutionary engineering and scientific breakthroughs" and that it "plan[s] to advance capabilities as fast as possible while making sure our safety always remains ahead."

Maybe Sutskever's wildest predictions will come true, and he'll usher in a spectacularly powerful and perfectly risk-free superintelligence. But unless he can do it quickly, investors are sure to come knocking.

More on artificial "intelligence": OpenAI Researchers Find That Even the Best AI Is "Unable To Solve the Majority" of Coding Problems

