Content warning: this story discusses suicide, self-harm, sexual abuse, eating disorders and other disturbing topics.
In October of last year, a Google-backed startup called Character.AI was hit by a lawsuit making an eyebrow-raising claim: that one of its chatbots had driven a 14-year-old high school student to suicide.
As Futurism's reporting found afterward, the behavior of Character.AI's chatbots can indeed be deeply alarming — and clearly inappropriate for underage users — in ways that both corroborate and augment the suit's concerns. Among other things, we found chatbots on the service designed to roleplay scenarios of suicidal ideation, self-harm, school shootings, and child sexual abuse, as well as to encourage eating disorders. (The company has responded to our reporting piecemeal, by taking down individual bots we flagged, but it's still trivially easy to find nauseating content on its platform.)
Now, Character.AI — which received a $2.7 billion cash injection from tech giant Google last year — has responded to the suit, brought by the boy's mother, in a motion to dismiss. Its defense? Basically, that the First Amendment protects it against liability for "allegedly harmful speech, including speech allegedly resulting in suicide."
In TechCrunch's analysis, the motion to dismiss may not be successful, but it likely provides a glimpse of Character.AI's planned defense (it's now facing an additional suit, brought by more parents who say their children were harmed by interactions with the site's bots).
Essentially, Character.AI's legal team is saying that holding it accountable for the actions of its chatbots would restrict its users' right to free speech — a claim that it connects to prior attempts to crack down on other controversial media like violent video games and music.
"Like earlier dismissed suits about music, movies, television, and video games," reads the motion, the case "squarely alleges that a user was harmed by speech and seeks sweeping relief that would restrict the public’s right to receive protected speech."
Of course, there are key differences that the court will have to contend with. The output of Character.AI's bots isn't a finite work created by human artists, like Grand Theft Auto or an album by Judas Priest, both of which have been targets of legal action in the past. Instead, it's an AI system that users engage with to produce a limitless variety of conversations.
A Grand Theft Auto game might contain reprehensible material, in other words, but it was created by human artists and developers to express an artistic vision; a service like Character.AI is a statistical model that can output more or less anything based on its training data, far outside the control of its human creators.
In a bigger sense, the motion illustrates a tension for AI outfits like Character.AI: unless the AI industry can find a way to reliably control its tech — a quest that's so far eluded even its most powerful players — some of the interactions users have with its products are going to be abhorrent, either by the users' design or when the chatbots inevitably go off the rails.
After all, Character.AI has made changes in response to the lawsuits and our reporting, by pulling down offensive chatbots and tweaking its tech in an effort to serve less objectionable material to underage users.
So while it's actively taking steps to get its sometimes-unconscionable AI under control, it's also saying that any legal attempts to curtail its tech fall afoul of the First Amendment.
It's worth asking where the line actually falls. A pedophile convicted of sex crimes against children can't use the excuse that they were simply exercising their right to free speech; Character.AI is actively hosting chatbots designed to prey on users who say they're underage. At some point, the law presumably has to step in.
Add it all up, and the company is walking a delicate line: actively catering to underage users — and publicly expressing concern for their wellbeing — while vociferously fighting any legal attempt to regulate its AI's behavior toward them.
"C.AI cares deeply about the wellbeing of its users and extends its sincerest sympathies to Plaintiff for the tragic death of her son," reads the motion. "But the relief Plaintiff seeks would impose liability for expressive content and violate the rights of millions of C.AI users to engage in and receive protected speech."
More on Character.AI: Embattled Character.AI Hiring Trust and Safety Staff