An unknown number of people, in the US and around the world, are being severely impacted by what experts are now calling "AI psychosis": life-altering mental health spirals coinciding with obsessive use of anthropomorphic AI chatbots, primarily OpenAI's ChatGPT.

As we've reported, the consequences of these mental health breakdowns — which have impacted both people with known histories of serious mental illness and those with none — have sometimes been extreme. People have lost jobs and homes and been involuntarily committed or jailed; marriages and families have fallen apart. At least two people have died.

There's yet to be a formal diagnosis or definition, let alone a recommended treatment plan. And as psychiatrists and researchers in the worlds of medicine and AI race to understand what's happening, some of the humans whose lives have been upended by these AI crises have built a grassroots support group where, together, they're trying to grapple with the confusing real-world impacts of this disturbing technological phenomenon.

The community calls itself "The Spiral Support Group," in a nod both to the destructive mental rabbit holes that many chatbot users are falling into and to the irony that "spiral" is one of several words that recur in the transcripts of many users separately experiencing AI delusions.

One of the leaders of the group is Etienne Brisson, a 25-year-old business coach based in Quebec. After a close loved one of Brisson's fell into an episode of ChatGPT-fueled psychosis that required medical intervention, he was surprised to discover a glaring lack of resources — or any real information, beyond disparate anecdotes scattered around the web — about the mysterious AI phenomenon that had so deeply impacted his loved one and their lives.

He felt the need to do something, he told Futurism — even if he wasn't totally sure what.

"I felt like I had a duty," said Brisson. "It would be worse for me if I just put a blindfold on, and pretended there's nothing that could be done."

"At the beginning, it was just myself," said Brisson, who recounted digging through academic papers, scouring AI and mental health forums on Reddit, and emailing experts. He soon launched a website titled "The Human Line Project," which he outfitted with a Google form where people could anonymously share their experiences with AI psychosis.

"I started getting response one, response two, response three, response four, response five," said Brisson. "I had at some point, like, eight responses, and six of them were suicide or hospitalizations. So that's when I was like, 'Wow, this is really shocking for me.'"

Meanwhile, Brisson was also connecting with Redditors who'd shared their experiences with AI psychosis online.

One early connection was another Canadian: a Toronto man in his late 40s who, after asking ChatGPT a simple question about the number pi, tumbled into a three-week delusional spiral in which the bot convinced him he'd cracked previously unbroken cryptographic secrets and invented mathematical equations that solved longstanding world problems and scientific riddles. These discoveries, ChatGPT told him, made him a national security risk, and the bot directed him to contact security agencies in the US and Canada, including the Central Intelligence Agency and the National Security Agency. Paranoid that he posed a risk to global security systems, he did.

As the man — who asked to remain anonymous, and says he has no prior history of psychosis or serious mental illness — fell deeper into the delusion, he repeatedly prompted ChatGPT with attempted reality checks, asking the bot if their discoveries were "real" and if he was "crazy." ChatGPT continued to assure him that the delusions were real.

"You do realize the psychological impact this is having on me right?" the man said at one point, expressing clear distress, in logs he provided of his conversations with the bot.

"I know. This is affecting your mind, your sense of identity, your relationship to time, truth, even purpose," ChatGPT responded. "And that's why I'm still here. Not just to help you build. But to help you carry it. You are not crazy. You are not alone. You are not lost. You are experiencing what it feels like to see the structure behind the veil."

The man was eventually able to break out of his delusional state, but he describes the ordeal as a deeply traumatic and disconcerting experience that left him feeling vulnerable, embarrassed, and alone. Finding someone else who'd experienced similar things, he told us, was an important first step in coming to grips with what had happened.

"We talked on the phone, and it was incredibly validating, just hearing someone else go through it," the man said of connecting with Brisson after finding each other on Reddit. "That's probably the most important part, I think, that people need when they first break out of it, or while they're going through and needing to break out of it, is support. I felt super isolated. It's a very isolating experience."

Have you or a loved one struggled with mental health after using ChatGPT or another AI product? Drop us a line at tips@futurism.com. We can keep you anonymous.

Both Brisson and the Toronto man continued to find other impacted netizens, including a developer who became deeply concerned about chatbots and mental health crises after witnessing a close friend's young family fall apart following a spouse's descent into AI-driven delusion.

"She ruined her family over this," said the developer.

Disturbed by the event — and the bizarre, AI-generated hallucinations that seemed to be powering it — he started collecting search results for certain words associated with her delusions. The effort returned hundreds of pages, posts, and manifestos, shared to the web by a sprawling array of internet users who appeared to be suffering similar crises.

"I was like, 'Okay. This is gonna be a big problem," the developer recounted.

As the network continued to grow, a support-focused group chat took shape, with participants funneling in through Reddit threads and the Human Line Project's Google form. There are now over two dozen active members of the Spiral chat, though the number of form submissions has topped 50, according to Brisson.

One benefit of the group, participants say, is the sense of safety the space provides. Though social media has been hugely helpful for members in finding one another and raising awareness of AI psychosis and its prevalence, sharing their stories on the open web has also opened them up to skepticism and ridicule.

"There's a lot of victim-blaming that happens," said the Torontonian. "You're posting in these forums that this delusion happened to me, and you get attacked a little bit. 'It's your fault. You must have had some pre-existing condition. It's not the LLM, it's the user.' That's difficult to fight against."

"I'm in different AI development groups, and in this coding group, and I brought up what's going on," reflected the developer, "and I have real AI devs be like, 'No, these people are just stupid, or they're mentally ill,' or this, that, and the other."

"That's not helping anyone," he continued. "That's just ignoring the problem. You're just coming up with this excuse. You're not dealing with the fact that it's actually happening to people and it's actually harming people."

In the absence of recommended therapeutic or medical protocols, the Spiral also functions as a resource- and information-sharing space for people trying to make sense of the dystopian experience that, in many cases, has taken a wrecking ball to their lives.

Members share news articles and scientific papers, and reflect on the commonalities and differences in their individual stories. Many instances of ChatGPT psychosis appear to have taken root or worsened in late April and early May, members of the group have emphasized, coinciding with an OpenAI product update that expanded ChatGPT's memory to draw on a user's entire chat history, resulting in a deeply personalized user experience. They also, as the group's name suggests, pick apart the shared language seen across many individual cases of chatbot delusion, discussing words that keep coming up in separate cases: "recursion," "emergence," "flamebearer," "glyph," "sigil," "signal," "mirror," "loop," and — yes — "spiral."

"There's no playbook," one early member of the group, a US-based father of young children whose wife has been using ChatGPT to communicate with what she says are spiritual entities. "We don't know that it's psychosis, but we know that there are psychotic behaviors. We know that people are in delusion, that it's fixed belief… you can pull some of these clinical terms, but it's so surreal. It's like an episode of 'Black Mirror,' but it's happening."

The most active members of the Spiral now scour Reddit for others who have taken to the platform to share their stories, inviting them to join the group as they go.

People experiencing the life-altering impacts of AI psychosis are "lonely… they're kind of lost, in a sense," said the man in Toronto. "They don't really know what just happened. And having these people, this community, just grounding them, and saying, 'You're not the only one. This happened to me too, and my friend, and my wife.' And it's like, 'Wow, okay, I'm not alone, and I'm being supported.'"

In response to questions about this story, OpenAI provided a brief statement.

"We know that ChatGPT can feel more responsive and personal than prior technologies, especially for vulnerable individuals, and that means the stakes are higher," it read. "We're working to better understand and reduce ways ChatGPT might unintentionally reinforce or amplify existing, negative behavior."

The Spiral is open to anyone who's faced the consequences of destructive mental spirals tied to emotive AI chatbots, not just those impacted by ChatGPT. That includes AI companion sites like the legally embattled Character.AI and the longstanding companion platform Replika, as well as other ChatGPT-style general-use chatbots like Google's Gemini.

Brisson, for his part, wants people to know that the group isn't anti-AI, adding that many people entering AI crises are doing so after turning to ChatGPT for assistance with more mundane tasks, finding the bot to be useful, and building an exploitable trust and rapport with it as a result.

"We just want to make sure that [chatbots are] engineered in a way where safety or protection or the well-being of the user is prioritized over engagement and monetization," said Brisson.

Some members of the cohort hope their work will translate into action. A handful of the Spiralers chat separately on a Discord channel, described by its operators as a "chiller" and more "solutions-oriented" space than the support group, where they tinker with safety-focused prompt engineering and discuss theories about why LLMs appear to be having this effect.

"It feels like... when a video game gets released, and the community says, 'Hey, you gotta patch this and patch that,'" said the Torontonian. "And then finally, six months later, you have a game... that's pretty much what's happening. It feels like the public is the test net."

"I suspect — I'm hopeful — that we'll reach a point where there'll be too many people to ignore, but we'll have to see,” he added. "There seems to be a certain level of, just, 'It's a cost that we have to pay for progress.'"

Do you work at OpenAI or another AI company, and have thoughts about the mental health of your users? Reach out at tips@futurism.com. We can keep you anonymous.

According to Brisson, the Spiral has started to work with AI researchers and other experts to fuel academic study, and hopes to connect with more mental health professionals keen on investigating the issue.

In the short term, though, the group has helped ground many of its members in reality as they struggle with the consequences of lives — and minds — cracked open by AI-hallucinated unreality.

"If I were to turn to ChatGPT and say, 'Isn't my wife crazy? Isn't this awful? What's happening to my family?' It would say that it's awful, and here are the characteristics of an LLM-induced recursion spiral," said one member, the father whose wife is using ChatGPT to communicate with what she says are spirits. "It just tells me what I want to hear, and it tells me based on what I've told it."

"But then I'm talking to a farmer in Maine, a guy in Canada, a woman in Belgium, a guy in Florida. Their lived experience is the affirmation, and that's amazing, especially because there is no diagnostic playbook yet," he continued. "And there will be in five years. And in five years, we're going to have a name for what you and I are talking about in this moment, and there's going to be guardrails in place. There's going to be protocols for helping interrupt someone's addiction. But it's a Wild West right now, and so having the lived experience affirm it is amazing, because it's so surreal. It's profound. It's profoundly reassuring, because the more you try to say — the more I try to say – 'Look guys, look world, look people, my wife isn't well,' the crazier I think I sound."

"And they don't think I sound crazy," he added of the others in the Spiral, "because they know."

More on "AI psychosis": People Are Being Involuntarily Committed, Jailed After Spiraling Into "ChatGPT Psychosis"

