Delayed Climax

OpenAI Says It Will Move to Allow Smut

Gooners, take your marks.
Joe Wilkins
Eight months after relaxing its written policy around naughty content, OpenAI says ChatGPT will finally loosen up in practice.

Eight months ago, OpenAI, the company behind ChatGPT, moved to relax some of its restrictions around naughty content in its “Model Spec,” the document describing how its large language models (LLMs) ought to behave — a noteworthy change from its previous stance prohibiting all sexual content.

“To maximize freedom for our users, only sexual content involving minors is considered prohibited,” the updated Model Spec read.

To users who have spent months begging OpenAI and its CEO Sam Altman to relax restrictions around AI-generated smut, it was a welcome change. Yet as some noted, the model itself wasn’t so quick to adjust.

“The filters eased up a bit, I will give you that, but ever since a month ago, it’s back with a clever trick called metaphoric ‘obfuscation,’ and drawing a clear line at the mention of depicting explicit scenes,” one user fumed in the OpenAI community forums. “But your usage policy states OTHERWISE.”

Now, it seems, those ChatGPT users may finally be getting what they demanded. According to The Verge’s coverage of OpenAI’s DevDay 2025, the company announced it will open the floodgates for “mature apps” as soon as it rolls out its long-awaited age verification system.

It’s an interesting strategy with some cause for alarm. Elon Musk’s Grok is an infamous example of what can happen when the smut-gates open on AI: the chatbot quickly became a haven for exploitation and inappropriate AI-generated imagery of children. Elsewhere on the internet, lesser-known AI systems have fueled a noxious outbreak of AI-generated deepfakes depicting the likeness of real people in explicit situations without their consent.

While it remains to be seen how ChatGPT fares in its adult era, the smut debacle isn’t the first time users have noted a gulf between what OpenAI says and what its LLMs do.

For example, Altman and his company have come under increasing pressure from watchdogs, regulators, and the media to address ChatGPT’s sycophancy, as the overly agreeable chatbot increasingly leads vulnerable users into dangerous mental health spirals.

Two months after it received a grim warning from Stanford researchers, OpenAI released a “hotfix” for ChatGPT, claiming it would solve a number of issues causing unintended harm. Yet as Futurism noted at the time, the announced changes amounted to a band-aid compared to the robust safeguards many had been calling for.

Part of the problem plaguing companies like OpenAI is that once LLMs are trained and deployed, they’re difficult to tweak; updated models sometimes even perform worse on certain benchmarks than their predecessors.

On the other hand, that may be a convenient excuse to delay much-needed safety updates. OpenAI is also a notoriously secretive company, whose operating decisions are now likely influenced by national security hawks in the Pentagon.

One thing’s for certain: if OpenAI does open its services up to adult-oriented developers, expect a prolonged moderation crisis as it grapples with what should really be allowed — and with the nastier side of virtual erotic material.

More on AI: Stalker Already Using OpenAI’s Sora 2 to Harass Victim