Let's just say, as a hypothetical, that someone does build Artificial General Intelligence (AGI) — human-level AI that would undoubtedly change everything.
The world economy would likely turn upside down overnight, while radical changes to social structures, political systems, and even international power dynamics would follow closely behind. What it means to be human would suddenly feel much less concrete, and meanwhile, those who build the system would rapidly accumulate financial and political power.
If just one piece of technology could do all of that, you might have some thoughts about which individual or group might be the ideal — as in, least world-ending — candidate to bring such a machine into existence.
Functioning as a non-profit would make sense; money would certainly be needed to build this or any other piece of world-changing machinery, sure, but once an organization needs to generate consistent revenue, both objectivity and safety can go out the window quite quickly. You'd also probably want the tech to be open-source, and you probably wouldn't want any legacy Big Tech companies to own a controlling stake.
And yet, all of those characteristics are true of OpenAI, which just re-upped its AGI ambitions in a lengthy blog post titled "Planning for AGI and beyond," penned by doomsday-prepping CEO Sam Altman.
OpenAI is for-profit, closed-source, and very much in bed with legacy tech mammoth Microsoft, which has billions invested in the buzzy AI leader. But unlike competitors that were for-profit from the start, OpenAI has very different roots.
Indeed, as Vice writer Chloe Xiang brilliantly put it, the current iteration of OpenAI is everything that the company once "promised not to be" — a pretty damn sleazy detail, especially considering that these are the folks who might just be the ones to bring AGI, if it's ever actually possible, into existence.
When the outfit launched back in 2015 — the brainchild of SpaceX founder and Tesla CEO Elon Musk, alleged vampire-lite Peter Thiel, and Y Combinator co-creator Jessica Livingston, among other major industry players — it was open-source (hence the name) as well as firmly anti-profit, arguing that a revenue-dependent model would compromise the integrity of the tech.
"OpenAI is a non-profit artificial intelligence research company. Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return," reads OpenAI's introductory statement, published back in 2015. "Since our research is free from financial obligations, we can better focus on a positive human impact."
That statement reads almost alien today, considering where the firm has ended up. Corporate and fiscally motivated, moving with the familiar — and familiarly flawed — "move fast and break things" Silicon Valley approach, it's ventured so far from its original goal that even co-founder Elon Musk has become a vocal opponent of the company.
And though it maintains that its goal for its tech is to "ensure that artificial general intelligence... benefits all of humanity," as Altman wrote in that latest blog post, corporate profit and the good of humanity don't always go hand-in-hand. (Honestly, if we were to indulge in a bit of psychoanalysis, it's starting to feel a bit like OpenAI is trying to convince itself that it means well, just as much as it might be trying to convince the public of the same.)
"There is a misalignment between what the company publicly espouses and how it operates behind closed doors," Karen Hao wrote for MIT Technology Review back in 2020, as noted by Xiang. "Over time, it has allowed a fierce competitiveness and mounting pressure for ever more funding to erode its founding ideals of transparency, openness, and collaboration."
And that, really, is what makes OpenAI so concerning, not to mention disappointing, as a leader in the field.
Like any other lucrative market, the tech industry is full of sleazy outfits, run predominantly by sleazy figures. But the reality that OpenAI did a total 180 to chase the pot o' gold while still spouting the same claims about looking out for humanity's best interests is troubling. The old OpenAI wouldn't have been the worst-case Dr. Frankenstein, but the company's current iteration — flip-floppy, high-speed, and generally untrustworthy — might just be the sleaziest option out there.
"We want AGI to empower humanity to maximally flourish in the universe," Altman wrote in his new blog post, published just last week. "We don't expect the future to be an unqualified utopia, but we want to maximize the good and minimize the bad, and for AGI to be an amplifier of humanity."
READ MORE: OpenAI Is Now Everything It Promised Not to Be: Corporate, Closed-Source, and For-Profit [Vice]