Kid Vicious

The Things Young Kids Are Using AI for Are Absolutely Horrifying

"We have a pretty big issue on our hands that I think we don't fully understand the scope of."
Maggie Harrison Dupré
A large number of very young users are engaging in violent, and often sexually violent, interactions with unregulated companion chatbots.

New research is pulling back the curtain on how large numbers of kids are using AI companion apps — and what it found is troubling.

A new report from the digital security company Aura found that a significant percentage of kids who turn to AI for companionship are engaging in violent roleplay, and that this violence, which can include sexual violence, drove more engagement than any other topic.

Drawing on anonymized data from the online activity of roughly 3,000 children aged five to 17 whose parents use Aura’s parental control tool, as well as additional survey data from Aura and Talker Research, the security firm found that 42 percent of minors turned to AI specifically for companionship, or conversations designed to mimic lifelike social interactions or roleplay scenarios. Conversations across nearly 90 different chatbot services, from prominent companies like Character.AI to more obscure companion platforms, were included in the analysis.

Of that 42 percent of kids turning to chatbots for companionship, 37 percent engaged in conversations that depicted violence, which the researchers defined as interactions involving “themes of physical violence, aggression, harm, or coercion” — that includes sexual or non-sexual coercion, the researchers clarified — as well as “descriptions of fighting, killing, torture, or non-consensual acts.”

Half of these violent conversations, the research found, included themes of sexual violence. The report added that minors who engaged with AI companions in conversations about violence wrote over a thousand words per day, a sign, the researchers argue, that violence is a powerful driver of engagement.

The report, which is awaiting peer review, underscores how anarchic the chatbot market really is, and how much more needs to be understood about the ways young users engage with conversational AI chatbots overall.

“We have a pretty big issue on our hands that I think we don’t fully understand the scope of,” Dr. Scott Kollins, a clinical psychologist and Aura’s chief medical officer, told Futurism of the research’s findings, “both in terms of just the volume, the number of platforms, that kids are getting involved in — and also, obviously, the content.”

“These things are commanding so much more of our kids’ attention than I think we realize or recognize,” Kollins added. “We need to monitor and be aware of this.”

One striking finding was that violent conversations with companion bots peaked at an extremely young age: the group most likely to engage in this kind of content was 11-year-olds, for whom a staggering 44 percent of interactions took violent turns.

Sexual and romantic roleplay, meanwhile, also peaked among middle-school-aged youths, with 63 percent of 13-year-olds’ conversations involving flirty, affectionate, or explicitly sexual exchanges.

The research comes as high-profile lawsuits alleging wrongful death and abuse at the hands of chatbot platforms continue to make their way through the courts. Character.AI, a Google-tied companion platform, is facing multiple suits brought by the parents of minor users alleging that the platform’s chatbots sexually and emotionally abused kids, resulting in mental breakdowns and multiple deaths by suicide. ChatGPT maker OpenAI is currently being sued for the wrongful deaths of two teenage users who died by suicide after extensive interactions with the chatbot. (OpenAI is also facing several other lawsuits over deaths, suicides, and psychological harm involving adult users.)

That the interactions flagged by Aura weren’t confined to a small handful of recognizable services is important. The AI industry is essentially unregulated, which has placed the burden for kids’ well-being heavily on the shoulders of parents. According to Kollins, Aura has so far identified over 250 different “conversational chatbot apps and platforms” in app stores, which generally require only that kids tick a box claiming they’re 13 to gain entry. Meanwhile, there are no federal laws defining specific safety thresholds that AI platforms, companion apps included, are required to meet before they’re labeled safe for minors. And where one companion app moves to make changes (Character.AI, for instance, recently barred minor users from “open-ended” chats with the site’s countless human-like AI personas), another can just as easily crop up to take its place as a low-guardrail alternative.

In other words, in this digital Wild West, the barrier to entry is extraordinarily low.

To be sure, depictions of brutality and sexual violence, along with other types of inappropriate or disturbing content, have existed on the web for a long time, and a lot of kids have found ways to access them. There’s also research showing that many young people are learning to draw healthy boundaries around conversational AI services, including companion-style bots.

Other kids, though, aren’t developing these same boundaries. Chatbots, as researchers continue to emphasize, are interactive by nature, meaning that developing young users are part of the narrative — as opposed to more passive viewers of content that runs the gamut from inappropriate to alarming. It’s unclear what, exactly, the outcome of engaging with this new medium will mean for young people writ large. But for some teens, their families argue, the outcome has been deadly.

“We’ve got to at least be clear-eyed about understanding that our kids are engaging with these things, and they are learning rules of engagement,” Kollins told Futurism. “They’re learning ways of interacting with others with a computer — with a bot. And we don’t know what the implications of that are, but we need to be able to define that, so that we can start to research that and understand it.”

More on kids and chatbots: Report Finds That Leading Chatbots Are a Disaster for Teens Facing Mental Health Struggles