
OpenAI raised eyebrows this month at its annual DevDay event when it announced that it would move to allow “mature apps” on its platforms.
“To maximize freedom for our users, only sexual content involving minors is considered prohibited,” reads an updated company document about what will be allowed, suggesting wide latitude for developers to use the company’s platform to craft naughty experiences for users.
As observers quickly pointed out, it was a pretty astonishing reversal for the company. Just two months ago, its CEO Sam Altman had boasted on a podcast that OpenAI hadn’t “put a sexbot avatar in ChatGPT yet” — even though, he conceded at the time, doing so would be sure to boost engagement.
Adult-oriented content has always been a large online sector, but mainstream tech companies have tended to keep it at arm’s length. Engaging with it requires that a company take positions on complex questions about moderation, ethics and agency that will never make everybody happy — and that’s more true than ever in the world of AI, where the core premise is that platforms can provide a near-infinite range of potentially controversial outputs in response to users’ prompts.
Now, Altman seems to be learning that lesson in real time.
“Ok this tweet about upcoming changes to ChatGPT blew up on the erotica point much more than I thought it was going to!” he wrote in a lengthy response to the drama. “It was meant to be just one example of us allowing more user freedom for adults.”
Some of his new commitments were milquetoast, like promising that adult capabilities would be restricted to adult users. (Whether the company can actually prevent minors from signing up as adults remains an open question.)
“As we have said earlier, we are making a decision to prioritize safety over privacy and freedom for teenagers,” he continued. “And we are not loosening any policies related to mental health. This is a new and powerful technology, and we believe minors need significant protection.”
That last part was clearly addressing the storm of criticism OpenAI is facing over a wave of cases in which ChatGPT has driven users into severe mental health crises that have ended in involuntary commitment, suicide, and murder.
“We also care very much about the principle of treating adult users like adults,” he wrote in the post. “As AI becomes more important in people’s lives, allowing a lot of freedom for people to use AI in the ways that they want is an important part of our mission.”
It’s a strikingly forceful reply. Even as those tragedies result in lawsuits and legislation aimed at the AI industry, Altman is laying down a sweepingly libertarian vision for OpenAI: that as long as non-minor users aren’t doing anything outright harmful, the company is going to be hands-off about moderating their usage of its products.
“It doesn’t apply across the board of course: for example, we will still not allow things that cause harm to others, and we will treat users who are having mental health crises very different from users who are not,” he wrote. “Without being paternalistic we will attempt to help users achieve their long-term goals.”
It’ll be fascinating to see how this all plays out in practice. But if one thing’s clear, it’s that Altman wants the best of both worlds: maximum freedom to provide what users want, with as little responsibility as possible for the forms those uses take.
“But we are not the elected moral police of the world,” Altman wrote. “In the same way that society differentiates other appropriate boundaries (R-rated movies, for example) we want to do a similar thing here.”