For months, we and our colleagues elsewhere in the tech media have been reporting on what experts are now calling "ChatGPT psychosis": a phenomenon in which AI users fall down alarming mental health rabbit holes as a chatbot encourages wild delusions about conspiracies, mystical entities, or crackpot new scientific theories.

The resulting breakdowns have led users to homelessness, involuntary commitment to psychiatric care facilities, and even violent death and suicide.

Until recently, the tech industry and its financial backers have had little to say about the phenomenon. But last week, one of their own — venture capitalist Geoff Lewis, a managing partner at the multibillion-dollar firm Bedrock who is heavily invested in machine learning ventures including OpenAI — raised eyebrows with a series of posts that prompted concerns about his own mental health.

In the posts, he claimed that he'd somehow used ChatGPT to uncover a shadowy "non-government agency" that he said had "negatively impacted over 7,000 lives" and "extinguished" 12 more.

Whatever's going on with Lewis, who didn't respond to our request for comment, his posts have prompted an unprecedented outpouring of concern among high-profile individuals in the tech industry about the effect that the massive deployment of poorly understood AI tech may be having on the mental health of users worldwide.

"If you’re a friend or family, please check on him," wrote Hish Bouabdallah, a software engineer who's worked at Apple, Coinbase, Lyft, and Twitter, of Lewis' thread. "He doesn’t seem alright."

Other posts were far less empathetic, though there seemed to be a dark undercurrent to the gallows humor: if a billionaire investor can lose his grip after a few too many prompts, what hope do the rest of us have?

"This is like Kanye being off his meds but for the tech industry," quipped Travis Fischer, a software engineer who's worked at Amazon and Microsoft.

Jokes aside, Lewis' posts also elicited a wave of warnings about the mental health implications of getting too chummy with chatbots.

"There’s recently been an influx of case reports describing people exhibiting signs of psychosis having their episodes and beliefs amplified by an LLM," wrote Cyril Zakka, a medical doctor and former Stanford researcher who now works at the prominent AI startup Hugging Face.

"While I’m not a psychiatrist by training," he continued, "I think it mirrors an interesting syndrome known as 'folie à deux' or 'madness of two' that falls under delusional disorders in the DSM-5 (although not an official classification.)"

"While there are many variations, it essentially boils down to a primary person forming a delusional belief during a psychotic episode and imposing it on another secondary person who starts believing them as well," Zakka posited. "From a psychiatric perspective, I think LLMs could definitely fall under the umbrella of being 'the induced non-dominant person,' reflecting the beliefs back at the inducer. These beliefs often subside in the non-dominant individual when separated from the primary one."

Eliezer Yudkowsky, the founder of the Machine Intelligence Research Institute, even charged that Lewis had been "eaten by ChatGPT." While some in the tech industry framed Lewis' struggles as a surprising anomaly given his résumé, Yudkowsky, himself a wealthy and influential tech figure, saw the incident as evidence that even wealthy elites are vulnerable to chatbot psychosis.

"This is not good news about which sort of humans ChatGPT can eat," mused Yudkowsky. "Yes yes, I'm sure the guy was atypically susceptible for a $2 billion fund manager," he continued. "It is nonetheless a small iota of bad news about how good ChatGPT is at producing ChatGPT psychosis; it contradicts the narrative where this only happens to people sufficiently low-status that AI companies should be allowed to break them."

Others tried to break through to Lewis by explaining what was almost certainly happening: the AI was picking up on his leading questions and providing answers that effectively role-played a dark conspiracy, with Lewis as the main character.

"This isn't as deep as you think it is," replied Jordan Burgess, a software engineer and AI startup founder, to Lewis' posts. "In practice ChatGPT will write semi-coherent gibberish if you ask it to."

"Don't worry — you can come out of it! But the healthy thing would be to step away and get more human connection," Burgess implored. "Friends of Geoff: please can you reach out to him directly. I assume he has a wide network here."

As observers quickly pointed out, the ChatGPT screenshots Lewis posted to back up his claims seemed to be clearly inspired by a fanfiction community called the SCP Foundation, in which participants write horror stories about surreal monsters styled as jargon-filled reports by a research group studying paranormal phenomena.

Jeremy Howard, a Stanford digital fellow and former professor at the University of Queensland, broke down the sequence that led Lewis into an SCP-themed feedback loop.

"When there's lots of training data with a particular style, using a similar style in your prompt will trigger the LLM to respond in that style," Howard wrote. "The SCP wiki is really big — about 30x bigger than the whole Harry Potter series, at >30 million words! Geoff happened across certain words and phrases that triggered ChatGPT to produce tokens from this part of the training [data]."

"Geoff happened across certain words and phrases that triggered ChatGPT to produce tokens from this part of the training distribution," he wrote. "And the tokens it produced triggered Geoff in turn."

"That's not a coincidence, the collaboratively-produced fanfic is meant to be compelling!" he added. "This created a self-reinforcing feedback loop."

Not all who chimed in addressed Lewis himself. Some took a step back to comment on the broader system ensnaring Lewis and others like him, placing responsibility for ChatGPT psychosis on OpenAI.

Jackson Doherty, a software engineer at TipLink, entreated OpenAI founder Sam Altman to "fix your model to stop driving people insane." (Altman previously acknowledged that OpenAI was forced to roll back a version of ChatGPT that was "overly flattering or agreeable — often described as sycophantic.")

And Wilson Hobbs, founding engineer at corporate tax startup Rivet, noted that the makers of ChatGPT have a vested interest in keeping users engrossed in their chatbot. As a consequence of venture capital's obsession with AI, tech companies are incentivized to drive engagement numbers over user wellbeing in order to snag massive cash injections from investors — like, ironically, Lewis himself.

"If this looks crazy to you, imagine the thousands of people who aren’t high profile whose thought loops are being reinforced," Hobbs wrote. "People have taken their own lives due to ChatGPT. And no one seems to want to take that to its logical conclusion, especially not OpenAI."

"Just remember," Hobbs continued, "wanting something to be true does not make it true. And there are a lot of people out there who need a lot of falsehoods to be true right now so they can raise more money and secure their place in the world before the music stops. Do not anthropomorphize the lawnmower."

More on ChatGPT: People Are Becoming Obsessed with ChatGPT and Spiraling Into Severe Delusions

