Without even looking at medical data, it's pretty clear that "artificial intelligence" — a vast umbrella term for various technologies over the years, but currently dominated by the data-hungry neural networks powering chatbots and image generators — can have life-altering effects on the human brain.
We're not even three years out from the release of the first commercially available LLM chatbot, and AI users have already been driven to paranoid breaks from reality, religious mania, and even suicide. A recent survey of over 1,000 teens found that 31 percent of them felt talking to ChatGPT was as satisfying as, or more satisfying than, talking to their real-life friends.
While more research is needed on mental health issues stemming from AI, an international team of 12 researchers has issued a grim warning in a new psychiatric survey about just how bad AI-induced psychosis could become.
To start, the researchers outline a handful of emerging "themes" in cases of AI psychosis: the "messianic mission," in which a person thinks they've uncovered a hidden truth about the world; the "god-like AI," in which a user becomes convinced their chatbot is a sentient deity; and the "romantic" or "attachment-based delusion," which occurs when a user interprets their LLM's ability to mimic a human conversational partner as genuine love.
In all three cases, the yet-to-be-peer-reviewed study notes, the trajectory is similar: the user's relationship with the LLM spirals "from a benign practical use to a pathological and/or consuming fixation." The authors say this "slip" into delusion is a crucial point to study, as the risk of an otherwise healthy person falling into AI-induced mania isn't always obvious.
"Often AI use begins with assistance for mundane or everyday tasks, which builds trust and familiarity," they write. "In due course an individual explores more personal, emotional or philosophical queries. It is likely at this point that the AI's design to maximize engagement and validation captures the user, creating a 'slippery slope' effect... which in turn drives greater engagement."
This effect is only magnified for users at risk of developing, or already living with, psychotic illness. Because LLMs aren't really "artificially intelligent" but statistical language algorithms, they aren't capable of "distinguishing prompts expressing delusional beliefs from roleplay, artistic, spiritual or speculative expression."
The paper notes that AI psychosis isn't an inevitable outcome of interacting with a chatbot, and that developers have some control over, and therefore some responsibility for, their LLM's output. Still, the authors caution that, "given the pace of change and the trajectory so far," our tendency to anthropomorphize these AI chatbots is "likely to be inevitable."
From that point, the researchers posit, our "most urgent responsibility" should be to develop safeguards that protect the wide range of possible users — and their flawed understandings about AI — "even in the face of persistent illusion and simulation."
Doing that, though, will ultimately mean a sharp pivot to designing systems around practical uses rather than engagement, something big tech has never been particularly interested in.
More on AI: Father Disgusted to Find His Murdered Daughter Was Brought Back as an AI