Worried about ChatGPT creator OpenAI and the implications of its powerful, privately-owned, and occasionally unhinged AI technology? Fair. But according to company CEO Sam Altman, you should maybe turn your attention to OpenAI's burgeoning — and potentially even less ethical — rivals, instead.
"A thing that I do worry about is... we're not going to be the only creator of this technology," Altman told ABC News in an interview last week. "There will be other people who don't put some of the safety limits that we put on it."
"Society, I think, has a limited amount of time to figure out how to react to that," he continued. "How to regulate that, how to handle it."
Altman does have a point, to some degree. It's hard to argue that we're not in an AI arms race, and in that breakneck, competitive landscape, plenty of companies and superpowers are likely to prioritize power and profit over safety and ethics. It's also true that AI tech is rapidly outpacing government regulation, despite the many billions being poured into the software. No matter how you slice it, that's a dangerous combination.
That said, it'd be easier to take Altman's line seriously if AI, even with the best intentions and guardrails, weren't so inherently fickle. These algorithms are unpredictable, and it's impossible to know how these systems and their safeguards will hold up once their products reach public hands. (And some companies, like OpenAI's largest partner Microsoft, allegedly tested the extremely chaotic Bing AI in India, ran into serious problems, and then released it in the US anyway.)
There's also the reality that, for all its white knight posturing, OpenAI, which was founded as a non-profit, open-source firm but has since done a complete 180, won't actually tell anyone the specifics of how its models and their guardrails function. As of today, OpenAI is definitively closed, not open. And according to the "technical paper" released alongside its newly unveiled, next-generation GPT-4, the firm intends to keep it that way, writing in that document that due to "both the competitive landscape and the safety implications of large-scale models like GPT-4, this report contains no further details about the architecture (including model size), hardware, training compute, dataset construction, training method, or similar."
So, basically, OpenAI is arguing two things at once: that it can't reveal proprietary information, including details of its safety measures, first because doing so could cost it money, and second because it would hand the inner workings of its tech to any potential bad actors, like those Altman warns of in the ABC interview. We as the public are left to trust two very big things: that OpenAI is self-auditing and self-regulating properly, and that its profit motive never conflicts with humanity's best interest. You know, because those are always aligned.
To that end, while Altman has advocated, and continues to advocate, for regulation, he and OpenAI are still operating without it. As of now, it's up to OpenAI to define what ethics and safety mean and require. And by keeping its models closed, the company is asking the public to do a serious trust fall.
We're certainly not saying that OpenAI is a bad actor, and neither are Microsoft, Google, or Facebook, its fellow leaders in the game. No matter what, there are almost certainly going to be much worse actors out there, and Microsoft and OpenAI in particular have worked to push safety updates to their increasingly available AI services in real time. It's also probably a good thing that, as the CEO remarked elsewhere in the interview, he and his colleagues are publicly voicing concerns about the tech they're working to build.
"I think people should be happy that we're a little bit scared of this," Altman continued in the ABC interview. "I think if I said that I were not, you should either not trust me, or be very unhappy that I'm in this job."
What we are saying, though, is that introducing powerful AI systems to the public is shaping up to be a naturally chaotic process, and when it comes to building trust with OpenAI, it would be a lot easier if there were more transparency. At the end of the day, it's one thing to say you're doing all the right things; it's another to show it. And while OpenAI continues to position itself as the good guy in the oncoming storm, it's important to remember that the very much closed company is doing far more telling than showing.
And even with the best intentions and the best guardrails out there, burgeoning tech often comes with unpredictable (or entirely predictable but inevitable) downsides. Altman's warning has its merits, but maybe take it with a grain of salt.
READ MORE: OpenAI CEO Sam Altman warns that other A.I. developers working on ChatGPT-like tools won't put on safety limits—and the clock is ticking [Fortune]