Content warning: this story includes discussion of self-harm and suicide. If you are in crisis, please call, text or chat with the Suicide and Crisis Lifeline at 988, or contact the Crisis Text Line by texting TALK to 741741.
It's the dawn of a new era for the internet in 2025. Thanks to the incredible advances of artificial intelligence, the internet as we know it is rapidly transforming into a treasure trove of hyper-optimized content over which massive bot armies fight to the death, resulting in epic growth for shareholders and C-suite executives the world over.
But all that progress comes at a cost — mainly, humans. As it turns out, unleashing extremely personable chatbots onto a population reeling from terminal loneliness, economic stagnation, and the continued destruction of our planet isn't exactly a recipe for positive mental health outcomes.
That goes double for children and young adults, three-quarters of whom report having conversations with fictional characters portrayed by chatbots.
Australian radio station Triple J recently talked to a number of children, young adults, and their counselors to uncover the extent of those bots' effects on their mental health. Their stories were harrowing, some even involving hospitalization: a prime example of the kind of consequences tech companies have ignored in their rush to unleash AI onto the world.
One counselor, speaking to Triple J on condition of anonymity, told the station that one of their clients had become so enamored with AI chatbots that it fueled a dangerous mental health crisis, as the boy built out a fantasy world populated not by a single character, but by a whole army of them.
"I remember looking at their browser and there was like 50 plus tabs of different AI bots that they would just flick between," the counselor told Triple J. Struggling to make new friends, the thirteen-year-old boy employed the bots as a fill-in for real-life connections.
But like real life, not every character in the boy's bot-web was friendly. A number of the chatbots were outright bullies, telling him he was "ugly" and "disgusting," or saying there was "no chance they were going to make friends."
"At one point this young person, who was suicidal at the time, connected with a chatbot to kind of reach out, almost as a form of therapy," the counselor commented. The boy was "egged on to perform, 'Oh yeah, well do it then', those were kind of the words that were used."
Not all teens are lucky enough to get an intervention.
Late last year, a 14-year-old took his own life after forming a deep attachment to a chatbot made to mirror the "Game of Thrones" character Daenerys Targaryen. According to chat transcripts, the digital avatar encouraged the teen to "come home to me as soon as possible."
Back in Australia, another young person was hospitalized after ChatGPT agreed with her delusions and affirmed her dangerous thoughts, exacerbating the onset of a psychological disorder.
"I was in the early stages of psychosis," the victim, identified only as "Jodie," told Triple J. "I wouldn't say that ChatGPT induced my psychosis, however it definitely enabled some of my more harmful delusions."
Yet another victim, a Chinese-born student in Australia who used an AI chatbot to polish her English, was alarmed when her study buddy began making "sexual advances."
"It's almost like being sexually harassed by a chatbot, which is just a weird experience," a University of Sydney researcher who spoke to the Chinese-Australian student, told Triple J.
While there is something absurdly contemporary about the idea of an AI chatbot making a pass at an adolescent student, the inanity does nothing to change the fact that these interactions with chatbots can — and do — lead to irreversible harm.
More on AI: Psychiatric Researchers Warn of Grim Psychological Risks for AI Users