One of Microsoft's top AI bosses is concerned that the tech is fueling a massive wave of "AI psychosis."
Microsoft AI CEO Mustafa Suleyman told British newspaper The Telegraph that "to many people," talking to a chatbot is a "highly compelling and very real interaction."
"Concerns around 'AI psychosis,' attachment and mental health are already growing," he added. "Some people reportedly believe their AI is God, or a fictional character, or fall in love with it to the point of absolute distraction."
Perhaps most concerning: Suleyman told the paper that he fears the breakdowns are not "limited to those who are already at risk of mental health issues."
To Suleyman's credit, he's right on the money. As Futurism has reported extensively, we've already seen countless instances of users being driven into spiraling delusions, mixing spiritual mania and supernatural fantasies into a toxic miasma that psychiatrists say is leading to grim real-world outcomes.
The spiraling users' friends and families have been forced to watch their loved ones grow convinced that they're talking to a sentient being, a devastating trend that can have severe consequences — including death, in extreme cases.
It's such a widespread phenomenon that people are forming support groups. Even a prominent OpenAI investor was seemingly drawn into a ChatGPT-fueled mental health crisis.
While acknowledging the issue outright is an important step in the right direction, it remains to be seen what actions Suleyman and Microsoft will take to address the disturbing phenomenon.
If OpenAI is anything to go by, the rise of AI psychosis is putting the creators of AI in a bind: they don't want the PR headache, but obsessed users are loyal users — and cutting them off from an overly flattering AI buddy doesn't go over well.
Earlier this month, the Sam Altman-led company deprecated its popular GPT-4o AI model following the launch of its successor, GPT-5, sparking enormous backlash from users who had grown attached to the older model's much warmer, more sycophantic tone.
The outcry highlighted a worrying trend, with Altman admitting that the company had "totally screwed up" the launch.
"If you have been following the GPT-5 rollout, one thing you might be noticing is how much of an attachment some people have to specific AI models," Altman tweeted at the time. "It feels different and stronger than the kinds of attachment people have had to previous kinds of technology."
Instead of coming up with meaningful guardrails, safety monitoring, or referrals to human counselors, OpenAI gave in immediately, reinstating GPT-4o and even announcing that GPT-5 would itself be made more sycophantic.
A similar situation is seemingly playing out at Microsoft, a firm whose relationship with OpenAI started out as a mutually beneficial, multi-billion-dollar partnership, but has more recently grown sour.
Suleyman told The Telegraph that researchers are being "inundated with queries from people asking, ‘Is my AI conscious?’ What does it mean if it is? Is it okay that I love it?"
"The trickle of emails is turning into a flood," he added.
Suleyman argued that there should be hard-coded guardrails to stop these delusions, but said his "central worry" is that people will "soon advocate for AI rights."
His comments are yet another sign that tech leaders are growing concerned with how their offerings are negatively affecting the mental health of their users.
Whether Microsoft will jump into action and meaningfully address the crisis is a different matter. The AI industry is going through a crunch, with investors growing wary of enormous capital expenditures with no profits in sight.
In other words, OpenAI and Microsoft are financially obligated to their shareholders to continue to fuel their users' delusions — a dystopian sci-fi story that's playing out in real time.
More on AI psychosis: Support Group Launches for People Suffering "AI Psychosis"