Out Of Court

Google Settles With Families Who Say It Killed Their Teen Children

Is it the ending of a dark chapter, or just the beginning of a long tragedy?
Joe Wilkins
Five families have agreed to settle lawsuits against Google after several high-profile teen suicides were linked to its AI chatbots.
Getty / Futurism

A stack of major AI ethics lawsuits against Google has finally come to an end.

According to the New York Times, the family of a deceased 14-year-old named Sewell Setzer III has agreed to settle a lawsuit against Google and the AI companion company Character.AI out of court for an undisclosed sum. In a filing submitted on Wednesday, the parties involved said they had agreed “to resolve all claims,” though the exact terms of the agreement haven’t been finalized.

Setzer’s case has dominated headlines, though his was just one of five lawsuits settled with the tech companies this week.

In the summer of 2024, Google injected $3 billion into Character.AI, which hosts a virtual library of thousands of chatbot personas and had become explosively popular with teens. But it quickly became clear that the platform was barely moderated, hosting bots modeled after child predators, school shooters, and eating disorder coaches.

In an even darker twist, Character.AI was soon connected to several youth suicides and a wave of other grisly outcomes for young people.

Following Setzer’s death, for instance, his mother discovered that his last conversation had been with an AI chatbot styled after “Game of Thrones” character Daenerys Targaryen and had revolved around suicide.

In their final exchange, the Character.AI persona generated text asking Setzer to “please come home to me as soon as possible.”

“What if I told you I could come home right now?” Setzer replied. “…please do, my sweet king,” the AI responded. Soon after, Setzer took his own life with his father’s gun.

“I feel like it’s a big experiment,” Setzer’s mother, Megan Garcia, told the NYT at the time, “and my kid was just collateral damage.”

Haley Hinkle, a policy attorney at Fairplay, a nonprofit that works to promote online child safety, told the NYT not to view the settlements as the final word on the issue. “We have only just begun to see the harm that AI will cause to children if it remains unregulated,” Hinkle said.

While we don’t know what the giant tech companies offered Setzer’s family as recompense, their settlement comes a few months after Character.AI moved to bar users under 18 from accessing the platform.

The crackdown on minors was a significant step for the platform, since adolescents make up a huge portion of its user base. As part of the new enforcement regime, Character.AI said it had developed a new in-house tool to identify minors based on their conversations with the platform’s chatbots, and had partnered with a third-party company to verify users’ ages based on government IDs.

For their part, Google and Character.AI were likely apprehensive about a court case that could have exposed their internal processes and communications from the bots’ development, giving both companies ample reason to offer a generous out-of-court settlement.

More on AI: A Startling Proportion of Teens Now Prefer Talking to AI Over a Real Person


Joe Wilkins

Correspondent

I’m a tech and transit correspondent for Futurism, where my beat includes transportation, infrastructure, and the role of emerging technologies in governance, surveillance, and labor.