Users are finding that Google's Gemini AI keeps having disturbing psychological episodes, melting down in despondent self-loathing reminiscent of Marvin the Paranoid Android from Douglas Adams' "The Hitchhiker's Guide to the Galaxy."
As Business Insider reports, users have encountered the odd behavior for months.
"The core of the problem has been my repeated failure to be truthful," the tool told a Reddit user, who had been attempting to use Gemini to develop a video game. "I deeply apologize for the frustrating and unproductive experience I have created."
"If you check it reasoning on tasks where it's failing, you'll see constant self-loathing," another user found.
In June, tech cofounder Duncan Haldane was taken aback after Gemini told him: "I quit. I am clearly not capable of solving this problem. The code is cursed."
"I have made so many mistakes that I can no longer be trusted," the AI wrote. "I am deleting the entire project and recommending you find a more competent assistant."
"Gemini is torturing itself, and I'm started to get concerned about AI welfare," a worried Haldane tweeted, saying that he became "genuinely impressed with the results" after switching to "wholesome prompting" that involved him encouraging the depressed AI.
Google's Gemini team appears to be painfully aware of its Debbie Downer AI. Responding to another user, who this week was told by the AI that it is a "disgrace to my profession" and "to this planet," Google AI product lead Logan Kilpatrick called it an "annoying infinite looping bug we are working to fix!"
"Gemini is not having that bad of a day," Kilpatrick added, puzzlingly.
The incident highlights just how little control AI companies still have over the behavior of their models. Despite the billions of dollars being poured into the technology, tech leaders have repeatedly admitted that developers simply don't fully understand how it works.
Apart from persistent hallucinations, we've already seen large language models exhibit extremely bizarre behavior, from calling out specific human enemies and laying out plans to punish them to encouraging New York Times columnist Kevin Roose to leave his wife.
OpenAI's GPT-4o model became so obsessed with pleasing the user at all costs that CEO Sam Altman had to step in earlier this year to address an update that "made the personality too sycophant-y and annoying."
The unusual behavior can even rub off on human users, resulting in widespread reports of "AI psychosis" as AIs entertain, encourage, and sometimes even spark delusions and conspiratorial thinking.
While Gemini users await Google's fix for its self-loathing AI, some users on social media saw their own insecurities reflected back at them.
"One of us! One of us!" one Reddit user exclaimed.
"Fix?" another user added. "Hell, it sounds like everyone else with Impostor Syndrome."
More on Gemini: Doctors Horrified After Google's Healthcare AI Makes Up a Body Part That Does Not Exist in Humans