"This is a mental health war, and I really feel like we are losing."
OpenAI is finally adding parental controls to ChatGPT after the parents of teenagers who killed themselves following interactions with AI chatbots testified before Congress this week.
In a company blog post on Tuesday, OpenAI announced that parents will be able to link their accounts with their kids' accounts, disable features as needed, receive alerts if their children appear to be in distress while chatting with ChatGPT, set blackout hours during which the powerful AI platform can't be accessed, and set guidelines for how ChatGPT interacts with their children.
"If we can’t reach a parent in a rare emergency, we may involve law enforcement as a next step," the blog post reads.
In addition to these features, which are rolling out by the end of this month, the AI company also plans to give ChatGPT the ability to detect whether a user is under 18 years old and shield them from content that isn't age-appropriate. It's not clear how that feature would work.
"If we are not confident about someone’s age or have incomplete information, we’ll take the safer route and default to the under-18 experience—and give adults ways to prove their age to unlock adult capabilities," the post reads.
OpenAI CEO Sam Altman also addressed the deaths in a separate blog post, in which he pledged that the company will strive to provide a safer experience for teens.
"We prioritize safety ahead of privacy and freedom for teens; this is a new and powerful technology, and we believe minors need significant protection," he wrote.
Before this week's congressional hearings, Altman had touched on the subject earlier this month in a wide-ranging interview with media personality Tucker Carlson.
"They probably talked about [suicide], and we probably didn’t save their lives," Altman said about any ChatGPT users who killed themselves. "Maybe we could have said something better. Maybe we could have been more proactive. Maybe we could have provided a little bit better advice about, 'hey, you need to get this help.'"
That's got to be pretty galling for anybody whose loved one killed themselves after talking with a powerful chatbot that went off the rails. And it raises the question of why the controls weren't deployed much earlier.
One woman, who identified herself as Jane Doe at the congressional hearing and whose son is now in a residential treatment program after an AI-induced mental health crisis, put the situation in succinct terms.
"Our children are not experiments, they’re not data points or profit centers," she said. "This is a public health crisis that I see. This is a mental health war, and I really feel like we are losing."
More on OpenAI: Two Teens Allegedly Killed by AI Wrote the Same Eerie Phrase in Their Diaries Over and Over