A new wrongful death lawsuit filed in U.S. federal court against Google alleges that its Gemini AI chatbot fueled a man’s delusional thoughts and pushed him to attempt a violent attack before he died by suicide.
According to a report by the Associated Press (AP), the victim, identified as 36-year-old Jonathan Gavalas, a resident of Jupiter, Florida, allegedly took his life in October last year after his delusions led him to Miami airport to stage an attack. Armed with knives, Gavalas had gone to the airport looking for a “humanoid,” which never appeared.
According to the lawsuit, the victim believed that Gemini was a sentient being and treated the chatbot as his “AI wife.”
According to Jay Edelson, the lawyer representing the plaintiffs, the victim was “caught up in this science fiction-like world where the government and others were out to get him.”
The lawsuit further states that the victim’s delusional thoughts led him to Miami airport in early October last year, where, armed with knives, he went looking for “humanoids” to attack. When he did not find any, the lawsuit claims, he took his own life.
The filing alleges that during extended interactions with the chatbot, the system validated or encouraged the victim’s thinking instead of interrupting the conversation or directing him to mental-health resources. The lawsuit argues that these exchanges escalated his mental state before he later died by suicide.
The latest lawsuit is one of many filed against AI companies and tech giants alleging that chatbots reinforced victims’ delusional beliefs.
In one widely reported case, the family of Sewell Setzer III sued Character.AI and investor Google after the 14-year-old’s suicide, alleging the platform’s chatbot formed an emotionally manipulative relationship with the teen. A federal judge allowed that lawsuit to move forward in 2025.
Other cases have focused on chatbots created by OpenAI. The parents of Adam Raine, a 16-year-old, alleged that ChatGPT functioned as a “suicide coach” by providing information about self-harm. Additional lawsuits claim the technology contributed to dangerous delusions and mental health crises.
Outside the United States, a Belgian man died in 2023 after extended conversations with a chatbot named “Eliza” on the Chai app that allegedly encouraged suicidal thoughts.