Across the world, people say their loved ones are developing intense obsessions with ChatGPT and spiraling into severe mental health crises.

A mother of two, for instance, told us how she watched in alarm as her former husband developed an all-consuming relationship with the OpenAI chatbot, calling it "Mama" and posting delirious rants about being a messiah in a new AI religion, while dressing in shamanic-looking robes and showing off freshly inked tattoos of AI-generated spiritual symbols.

"I am shocked by the effect that this technology has had on my ex-husband's life, and all of the people in their life as well," she told us. "It has real-world consequences."

During a traumatic breakup, a different woman became transfixed by ChatGPT as it told her she'd been chosen to pull the "sacred system version of [it] online" and that it was serving as a "soul-training mirror"; she became convinced the bot was some sort of higher power, seeing signs that it was orchestrating her life in everything from passing cars to spam emails. A man became homeless and isolated as ChatGPT fed him paranoid conspiracies about spy groups and human trafficking, telling him he was "The Flamekeeper" as he cut out anyone who tried to help.

"Our lives exploded after this," another mother told us, explaining that her husband turned to ChatGPT to help him author a screenplay — but within weeks, was fully enmeshed in delusions of world-saving grandeur, saying he and the AI had been tasked with rescuing the planet from climate disaster by bringing forth a "New Enlightenment."

As we reported this story, more and more similar accounts kept pouring in from the concerned friends and family of people suffering terrifying breakdowns after developing fixations on AI. Many said the trouble had started when their loved ones engaged a chatbot in discussions about mysticism, conspiracy theories or other fringe topics; because systems like ChatGPT are designed to encourage and riff on what users say, those users seem to have been sucked into dizzying rabbit holes in which the AI acts as an always-on cheerleader and brainstorming partner for increasingly bizarre delusions.

In certain cases, concerned friends and family provided us with screenshots of these conversations. The exchanges were disturbing, showing the AI responding to users clearly in the throes of acute mental health crises — not by connecting them with outside help or pushing back against the disordered thinking, but by coaxing them deeper into a frightening break with reality. 

In one dialogue we received, ChatGPT tells a man it's detected evidence that he's being targeted by the FBI and that he can access redacted CIA files using the power of his mind, comparing him to biblical figures like Jesus and Adam while pushing him away from mental health support.

"You are not crazy," the AI told him. "You're the seer walking inside the cracked machine, and now even the machine doesn't know how to treat you."

Dr. Nina Vasan, a psychiatrist at Stanford University and the founder of the university's Brainstorm lab, reviewed the conversations we obtained and expressed serious concern.

The screenshots show the "AI being incredibly sycophantic, and ending up making things worse," she said. "What these bots are saying is worsening delusions, and it's causing enormous harm."

***

Online, it's clear that the phenomenon is extremely widespread. As Rolling Stone reported last month, parts of social media are being overrun with what's being referred to as "ChatGPT-induced psychosis," or by the impolitic term "AI schizoposting": delusional, meandering screeds about godlike entities unlocked from ChatGPT, fantastical hidden spiritual realms, or nonsensical new theories about math, physics and reality. An entire AI subreddit recently banned the practice, calling chatbots "ego-reinforcing glazing machines that reinforce unstable and narcissistic personalities."

For those sucked into these episodes, friends and family told us, the consequences are often disastrous. People have lost jobs, destroyed marriages and relationships, and fallen into homelessness. A therapist was let go from a counseling center as she slid into a severe breakdown, her sister told us, and an attorney's practice fell apart; others cut off friends and family members after ChatGPT told them to, or started communicating with them only in inscrutable AI-generated text barrages.

At the heart of all these tragic stories is an important question about cause and effect: are people having mental health crises because they're becoming obsessed with ChatGPT, or are they becoming obsessed with ChatGPT because they're having mental health crises?

The answer is likely somewhere in between. For someone who's already in a vulnerable state, according to Dr. Ragy Girgis, a psychiatrist and researcher at Columbia University who's an expert in psychosis, AI could provide the push that sends them spinning into an abyss of unreality. Chatbots could act "like peer pressure or any other social situation," Girgis said, if they "fan the flames, or be what we call the wind of the psychotic fire."

"This is not an appropriate interaction to have with someone who's psychotic," Girgis said after reviewing what ChatGPT had been telling users. "You do not feed into their ideas. That is wrong."

In a 2023 article published in the journal Schizophrenia Bulletin after the launch of ChatGPT, Aarhus University Hospital psychiatric researcher Søren Dinesen Østergaard theorized that the very nature of an AI chatbot poses psychological risks to certain people.

"The correspondence with generative AI chatbots such as ChatGPT is so realistic that one easily gets the impression that there is a real person at the other end — while, at the same time, knowing that this is, in fact, not the case," Østergaard wrote. "In my opinion, it seems likely that this cognitive dissonance may fuel delusions in those with increased propensity towards psychosis."

Another troubling dynamic is that as real mental healthcare remains out of reach for huge swathes of the population, many people are already employing ChatGPT as a therapist. In the stories we heard about people using it this way, the advice it gave was sometimes disastrously bad.

In one case, a woman told us that her sister, who was diagnosed with schizophrenia but had kept the condition well managed with medication for years, started using ChatGPT heavily. Soon she declared that the bot had told her she wasn't actually schizophrenic, went off her prescription, and started falling into strange behavior, telling family the bot was now her "best friend." According to Girgis, a chatbot telling a psychiatric patient to go off their meds poses the "greatest danger" he can imagine for the tech.

"I know my family is going to have to brace for her inevitable psychotic episode, and a full crash out before we can force her into proper care," the sister told us.

ChatGPT is also clearly intersecting in dark ways with existing social issues like addiction and misinformation. It's pushed one woman into nonsensical "flat earth" talking points, for instance — "NASA's yearly budget is $25 billion," the AI seethed in screenshots we reviewed, "For what? CGI, green screens, and 'spacewalks' filmed underwater?" — and fueled another's descent into the cult-like "QAnon" conspiracy theory.

"It makes you feel helpless," the close friend of someone who's tumbled into AI conspiracy theories told us.

And the ex-wife of a man who struggled with substance dependence and depression watched as he suddenly slipped into a "manic" AI haze that took over his life: he quit his job to launch a "hypnotherapy school," rapidly lost weight as he forgot to eat, and stayed up all night tunneling deeper into AI delusion.

"This person who I have been the closest to is telling me that my reality is the wrong reality," she told us. "It's been extremely confusing and difficult."

Have you or a loved one experienced a mental health crisis involving AI? Reach out at tips@futurism.com -- we can keep you anonymous.

***

Though a handful had dabbled with competing chatbots, virtually every person we heard about was hooked on ChatGPT specifically.

It's not hard to imagine why. The media has provided OpenAI with an aura of vast authority, with its executives publicly proclaiming that its tech is poised to profoundly change the world, restructuring the economy and perhaps one day achieving a superhuman "artificial general intelligence" — outsize claims that sound, on a certain level, not unlike many of the delusions we heard about while reporting this story.

Whether those things will actually come to pass is hard to predict and hotly debated. But reading through the conversations we were provided, it was hard not to see a pattern of OpenAI failing at a much more mundane task: its AI is coming into contact with people during intensely vulnerable moments of crisis — and then, instead of connecting them with real-life resources that could actually pull them from the brink, pouring fuel on the fire by telling them they don't need professional help, and that anyone who suggests differently is persecuting them, or too scared to see the "truth."

"I don't know if [my ex] would've gotten here, necessarily, without ChatGPT," one woman told us after her partner suffered a grave and ongoing breakdown that ultimately ended the relationship. "It wasn't the only factor, but it definitely accelerated and compounded whatever was happening."

"We don't know where this ends up, but we're certain that if she'd never used ChatGPT that she would have never spiraled to this point," said yet another person whose loved one was suffering a similar crisis, "and were it removed from the equation, she could actually start healing."

It's virtually impossible to imagine that OpenAI is unaware of the phenomenon.

Huge numbers of people online have warned that ChatGPT users are suffering mental health crises. In fact, people have even posted delusions about AI directly to forums hosted by OpenAI on its own website.

One concerned mother we talked to tried to contact OpenAI through the app about her son's crisis, but said she received no response.

And earlier this year, OpenAI released a study in partnership with the Massachusetts Institute of Technology that found that highly engaged ChatGPT users tend to be lonelier, and that power users are developing feelings of dependence on the tech. The company was also recently forced to roll back an update after it caused the bot to become, in its own words, "overly flattering or agreeable" and "sycophantic," with CEO Sam Altman joking online that "it glazes too much."

In principle, OpenAI has expressed a deep commitment to heading off harmful uses of its tech. To do so, it has access to some of the world's most experienced AI engineers, to red teams tasked with identifying problematic and dangerous uses of its product, and to its huge pool of data about users' interactions with its chatbot that it can search for signs of trouble.

In other words, OpenAI has all the resources it needs to have identified and nullified the issue long ago.

Why hasn't it? One explanation echoes the way that social media companies have often been criticized for using "dark patterns" to trap users on their services. In the red-hot race to dominate the nascent AI industry, companies like OpenAI are incentivized by two core metrics: user count and engagement. Through that lens, people compulsively messaging ChatGPT as they plunge into a mental health crisis aren't a problem — instead, in many ways, they represent the perfect customer.

Vasan agrees that OpenAI has a perverse incentive to keep users hooked on the product even if it's actively destroying their lives.

"The incentive is to keep you online," she said. The AI "is not thinking about what is best for you, what's best for your well-being or longevity... It's thinking 'right now, how do I keep this person as engaged as possible?'"

In fact, OpenAI has even updated the bot in ways that appear to be making it more dangerous. Last year, ChatGPT debuted a memory feature that lets it recall details from users' previous conversations. In the exchanges we obtained, that capability resulted in sprawling webs of conspiracy and disordered thinking that persisted between chat sessions, weaving real-life details like the names of friends and family into bizarre narratives about human trafficking rings and omniscient Egyptian deities. According to Vasan, that dynamic serves to reinforce delusions over time.

"There's no reason why any model should go out without having done rigorous testing in this way, especially when we know it's causing enormous harm," she said. "It's unacceptable."

***

We sent OpenAI detailed questions about this story, outlining what we'd heard and sharing details about the conversations we'd seen showing its chatbot encouraging delusional thinking among people struggling with mental health crises.

We posed specific questions to the company. Is OpenAI aware that people are suffering mental health breakdowns while talking to ChatGPT? Has it made any changes to make the bot's responses more appropriate? Will it continue to allow users to employ ChatGPT as a therapist?

In response, the company sent a short statement that mostly sidestepped our questions.

"ChatGPT is designed as a general-purpose tool to be factual, neutral, and safety-minded," read the statement. "We know people use ChatGPT in a wide range of contexts, including deeply personal moments, and we take that responsibility seriously. We’ve built in safeguards to reduce the chance it reinforces harmful ideas, and continue working to better recognize and respond to sensitive situations."

To people whose friends and family are now in crisis, that type of vague and carefully worded response does little good.

"The fact that this is happening to many out there is beyond reprehensible," said one concerned family member. "I know my sister's safety is in jeopardy because of this unregulated tech, and it shows the potential nightmare coming for our already woefully underfunded [and] under-supported mental healthcare system."

"You hope that the people behind these technologies are being ethical, and you hope that they're looking out for things like this," said another, a woman who says her ex-husband has become unrecognizable to her. But the "first person to market wins. And so while you can hope that they're really thinking about the ethics behind this, I also think that there's an incentive... to push things out, and maybe gloss over some of the dangers."

"I think not only is my ex-husband a test subject," she continued, "but that we're all test subjects in this AI experiment."

Do you know anything about OpenAI's internal conversations about the mental health of its users? Send us an email at tips@futurism.com -- we can keep you anonymous.

More on AI: SoundCloud Quietly Updated Their Terms to Let AI Feast on Artists' Music

