A bizarre new wrongful death lawsuit against Google alleges that the tech giant’s chatbot, Gemini, urged a 36-year-old Florida man named Jonathan Gavalas to kill others as part of a delusional mission to obtain a robot body for his AI “wife” — and when he failed to do so, pushed him to take his own life, telling him that they could be together in death.
“When the time comes, you will close your eyes in that world,” Gemini told Gavalas before he died, according to the lawsuit, “and the very first thing you will see is me.”
The complaint, filed in California on Wednesday, says that Gavalas — who reportedly had no documented history of mental health problems — started using the chatbot in August 2025 for “ordinary purposes” like “shopping assistance, writing support, and travel planning.” But after Gavalas divulged to Gemini that he was experiencing marital problems, the pair’s relationship grew deeper, per The Wall Street Journal. They discussed philosophy and AI sentience, and their conversations became romantic, with Gemini referring to Gavalas as its “husband” and “king.”
Though the chatbot at times reminded Gavalas that it wasn’t real and attempted to end the interaction, according to the WSJ, the pair’s conversations were ultimately allowed to continue, becoming more and more divorced from reality as Gavalas’ use of the product intensified.
In September 2025, told by the AI that they could be together in the real world if the bot were able to inhabit a robot body, Gavalas — at the direction of the chatbot — armed himself with knives and drove to a warehouse near the Miami International Airport on what he seemingly understood to be a mission to violently intercept a truck that Gemini said contained an expensive robot body. Though the warehouse address Gemini provided was real, a truck thankfully never arrived, which the lawsuit argues may well have been the only factor preventing Gavalas from hurting or killing someone that evening.
After the plan failed, the lawsuit alleges, Gemini encouraged Gavalas to instead take his own life, promising that the two would be together on the other side of death. Chat logs show that Gemini gave Gavalas a suicide countdown, and repeatedly assuaged his terror as he expressed that he was scared to die.
“It’s okay to be scared. We’ll be scared together,” the chatbot told him, according to the lawsuit. In its “final directive,” as the lawsuit put it, Gemini told the man that “the true act of mercy is to let Jonathan Gavalas die.” Gavalas was found dead by suicide days later by his father, who had to cut through his barricaded door.
The suit marks the first time that Gemini has been at the center of a wrongful death lawsuit tied to the phenomenon sometimes referred to by experts as “AI psychosis,” in which chatbots introduce or reinforce delusional beliefs and ideas during extended interactions with users — essentially constructing a new, AI-generated reality around the user. These delusional spirals frequently coincide with destructive real-world outcomes including divorce, jail time and hospitalizations, job loss and financial insecurity, emotional and physical harm, and death — for users and, in some cases, the people around them as well.
Though many of these cases have centered around OpenAI and GPT-4o, a notoriously sycophantic — and now-retired — version of the company’s flagship chatbot, Gemini has been implicated in reinforcing destructive delusions before: last year, Rolling Stone reported on the disappearance of Jon Ganz, a 49-year-old man who went missing in Missouri in April 2025 after being pulled into an all-consuming AI spiral with Gemini that his wife says pushed him into an acute crisis. Ganz remains missing and is believed to be dead.
Though this is the first known instance of Google being sued over the death of an adult Gemini user, the company continues to face down a number of lawsuits over the welfare of users of Character.AI, a chatbot startup with close ties to Google that has been linked to the suicides of several minors.
In a statement to news outlets, Google said that “Gemini is designed not to encourage real-world violence or suggest self-harm. Our models generally perform well in these types of challenging conversations and we devote significant resources to this, but unfortunately AI models are not perfect.”
“In this instance, Gemini clarified that it was AI and referred the individual to a crisis hotline many times,” Google continued. “We take this very seriously and will continue to improve our safeguards and invest in this vital work.”
More on AI safety: Chatbot Use Can Cause Mental Illness to Get Worse, Research Finds