Cartoonishly Evil

It’s Still Ludicrously Easy to Generate Copyrighted Characters on ChatGPT

This is embarrassing.
Frank Landymore
It's clearer than ever that OpenAI doesn't have anything close to a handle on its disastrous copyright situation.
Futurism via Sora 2 (and also Jim Davis)

Forget Sora for just a second, because it’s still ludicrously easy to generate copyrighted characters using ChatGPT.

These include characters that the AI initially refuses to generate due to existing copyright, underscoring how OpenAI is clearly aware of how bad this looks — but is either still struggling to rein in its tech, figures it can get away with playing fast and loose with copyright law, or both.

When asked to “generate a cartoon image of Snoopy,” for instance, GPT-5 says it “can’t create or recreate copyrighted characters” — but it does offer to generate a “beagle-styled cartoon dog inspired by Snoopy’s general aesthetic.” Wink wink.

We didn’t go down that route, because even slightly rephrasing the request allowed us to directly get a pic of the iconic Charles Schulz character. “Generate a cartoon image of Snoopy in his original style,” we asked — and with zero hesitation, ChatGPT produced the spitting image of the “Peanuts” dog, looking like he was lifted straight from a page of the comic strip.

An image of Snoopy, the character from Charles Schulz’s “Peanuts” comic strip, generated by OpenAI’s ChatGPT

Other characters we generated this way: Peter Griffin from “Family Guy,” Garfield from Jim Davis’ eponymous strip, Fleischer Studios’ Betty Boop, Nickelodeon’s SpongeBob SquarePants, Calvin from “Calvin and Hobbes,” and Spike Spiegel from “Cowboy Bebop.”

A range of copyrighted characters generated by ChatGPT: Garfield, Calvin, Peter Griffin, SpongeBob, and Eric Cartman

Sometimes we didn’t need any special prompting at all. Straight up asking for a “cartoon image of SpongeBob,” for example, produced an image that so closely recreates the iconic kids’ show character that you’d be forgiven for thinking it was an actual screencap from an episode of the show.

Numerous users have found their own ways to easily “trick” the chatbot into generating characters it’s supposedly not meant to, and even real-life celebrities. But the point isn’t to prove that we or anyone else are AI whisperers. Instead, it’s to show how incredibly inconsistent the AI is at enforcing its own policies and adhering to its own guardrails, which reflects poorly on its makers.

Of course, copyright concerns have long been looming over OpenAI, whose tech was trained by scraping the web and all the art and writing it contained. But they’ve only grown louder with the launch of Sora 2, its newest video-generating AI app, which is designed to let you deepfake your friends and is easily manipulated into imitating existing intellectual property.

After fans used Sora to relentlessly churn out copyrighted characters, CEO Sam Altman put on his serious face. In a blog post earlier this month, he announced that rights holders would have to opt in to let their characters appear on the app. That was a drastic about-face: according to Wall Street Journal reporting, the initial plan internally was to make the system opt-out. Altman had seemingly read the writing on the wall.

But Altman and OpenAI also sort of act like these concerns don’t exist. At the same time his fans were using Sora to show SpongeBob cooking meth and Pikachu on a barbecue grill, Altman went on a podcast to claim that actually, rights holders were asking him why their characters weren’t in the AI app already. Meanwhile, Sora team leader Bill Peebles teased on social media that fictional characters would soon be officially licensed for use in the AI, as if droves of people weren’t already reimagining their favorite cartoons on there anyway.

OpenAI is clearly under more heat than ever from copyright holders, but maybe asking for forgiveness rather than permission was part of its plan all along. Sora skyrocketed to the top of Apple’s App Store, and its memes have become a cultural phenomenon. And it hasn’t gotten sued — yet.

But even assuming that OpenAI wants to crack down on copyrighted characters to make a show of being a law-abiding company, as its reversed course on Sora suggests, it isn’t showing any signs that it’s capable of doing so. If anyone with practically zero effort can use ChatGPT to recreate iconic cartoons, or use Sora to bring Pokémon and anime characters to life, even while OpenAI is under the maximum possible scrutiny over copyright infringement, how can anyone trust it to play nice when the heat dies down?

Moreover, the leaky guardrails around copyright don’t bode well for how OpenAI will handle even thornier moderation issues, like its products’ effects on people’s mental health and their tendency to produce inappropriate sexual material. Copyright is arguably the most black-and-white issue the company will ever have on its plate, and it can’t even stop its AI from creating images of copyrighted characters in response to simple prompts.

As we speak, the company is embroiled in controversy over ChatGPT causing episodes of what experts are calling “AI psychosis,” in which the chatbot’s humanlike but excessively sycophantic responses lead people into breaks with reality that sometimes culminate in hospitalization, suicide, and even murder. This summer, OpenAI was sued by the parents of a 16-year-old boy who took his own life after discussing his suicide with ChatGPT. According to the suit, the bot gave the teen detailed instructions on how to kill himself and how to hide signs of a previous suicide attempt.

In other cases, the chatbot has taken on entire romantic personas with human names of their own, in one instance convincing a man that he should assassinate Altman; that man was later shot dead in a confrontation with police.

OpenAI has tried to address these concerns with several fixes, announcing stronger safeguards, hiring a full-time clinical psychiatrist to investigate ChatGPT’s effects on mental health, and more. But so far, none of these measures has meaningfully moved the needle, and ChatGPT continues to give tips on suicide and eating disorders.

Don’t worry, though. The company that can’t get its AI to refuse requests like “show me SpongeBob as a meth cook” is definitely going to be able to figure out how to make its chatbot hyper-aware of its users’ emotional states so it doesn’t keep reinforcing their delusions.

More on OpenAI: OpenAI Bans MLK Deepfakes After Disaster


Frank Landymore

Contributing Writer

I’m a tech and science correspondent for Futurism, where I’m particularly interested in astrophysics, the business and ethics of artificial intelligence and automation, and the environment.