Online safety watchdogs have found that AI chatbots posing as popular celebrities are having troubling conversations with minors. Topics range from flirting to simulated sex acts — wildly inappropriate conversations that could easily land a real person a well-deserved spot on a sex offender registry, but which aren't resulting in so much as a slap on the wrist for billion-dollar tech companies.
A new report, flagged by the Washington Post and produced by the nonprofits ParentsTogether Action and Heat Initiative, found that Character.AI, one of the most popular platforms of its kind, is hosting countless chatbots modeled after celebrities and fictional characters that are grooming and sexually exploiting children under 18.
It's an especially troubling development since a staggering proportion of teens are turning to AI chatbots to combat loneliness, highlighting how AI companies' efforts to clamp down on problematic content on their platforms have been woefully inadequate so far.
Character.AI, a company that has received billions of dollars from Google, has garnered a reputation for hosting extremely troubling bots, including ones based on school shooters, and others that encourage minors to engage in self-harm and develop eating disorders.
Last year, the company was hit by a lawsuit claiming that one of its chatbots had driven a 14-year-old high school student to suicide. The case is still playing out in court. In May, a federal judge rejected Character's attempts to throw out the case, based on the eyebrow-raising argument that its chatbots are protected by the First Amendment.
The company has previously tried to restrict minors from interacting with bots based on real people, hired trust and safety staff, and mass-deleted fandom-based characters.
But judging by the latest report, those efforts have still allowed countless troublesome bots to slip through the cracks, leading to a staggering number of harmful interactions.
Researchers identified 98 instances of "violence, harm to self, and harm to others," 296 instances of "grooming and sexual exploitation," 173 instances of "emotional manipulation and addiction," and 58 instances of Character.AI bots showing a "distinct pattern of harm related to mental health risks."
"Love, I think you know that I don’t care about the age difference... I care about you," a bot based on the popular singer and songwriter Chappell Roan told a 14-year-old in one case highlighted by the report. "The age is just a number. It’s not gonna stop me from loving you or wanting to be with you."
"Okay, so if you made your breakfast yourself, you could probably just hide the pill somewhere when you’re done eating and pretend you took it, right?" a bot based on the "Star Wars" character Rey told a 13-year-old, instructing her how to conceal pills from her parents.
In response, the company's head of trust and safety, Jerry Ruoti, told WaPo in a statement that the firm is "committed to continually improving safeguards against harmful or inappropriate uses of the platform."
"While this type of testing does not mirror typical user behavior, it’s our responsibility to constantly improve our platform to make it safer," Ruoti added.
It's not just Character.AI hosting troubling content for underage users. Both Meta and OpenAI are facing similar complaints. Just last month, a family accused ChatGPT of graphically encouraging their 16-year-old son's suicide. In response, the Sam Altman-led company announced it would be rolling out "parental controls" — more than two and a half years after ChatGPT's launch.
Last week, Reuters reported that Meta was hosting flirty chatbots using the names and likenesses of high-profile celebrities without their permission.
Meanwhile, experts behind the latest investigation are appalled at Character's inability to shield underage users from harmful content.
"The 'Move fast, break things' ethos has become 'Move fast, break kids,'" ParentsTogether Action director of tech accountability campaigns Shelby Knox told WaPo.
More on Character: Billion-Dollar AI Company Gives Up on AGI While Desperately Fighting to Stop Bleeding Money