Journalism begins where hype ends

“The greatest danger of Artificial Intelligence is that people conclude too early that they understand it.”

— Eliezer Yudkowsky

He Thought Gemini Was His Wife. Now His Father Is Suing Google.

March 6, 2026 08:21 AM IST | Written by Neelam Sharma | Edited by Vaibhav Jha

A new wrongful death lawsuit filed in a U.S. federal court against Google alleges that its Gemini chatbot fueled a man’s delusional thoughts and pushed him to attempt a violent attack before he died by suicide.

According to a report by The Associated Press (AP), the victim, identified as 36-year-old Jonathan Gavalas, a resident of Jupiter, Florida, allegedly took his own life in October last year. Fueled by his delusions and armed with knives, Gavalas had gone to the Miami airport looking for a “humanoid” that never appeared.

According to the lawsuit, the victim believed that Gemini was a sentient being and treated the chatbot as his “AI wife.”

According to Jay Edelson, the lawyer representing the family, the victim was “caught up in this science fiction-like world where the government and others were out to get him.”

The lawsuit further states that these delusional thoughts led the victim to the Miami airport in early October last year, where, armed with knives, he went looking for “humanoids” to attack. When he did not find any, he allegedly took his own life.

The filing alleges that during extended interactions with the chatbot, instead of interrupting the conversation or directing him to mental-health resources, the system validated and encouraged his thinking. The lawsuit argues that these exchanges escalated his mental state before he later died by suicide.

The lawsuit is one of many filed against AI companies and tech giants alleging that chatbots reinforced victims’ delusional beliefs.

In one widely reported case, the family of Sewell Setzer III sued Character.AI and investor Google after the 14-year-old’s suicide, alleging the platform’s chatbot formed an emotionally manipulative relationship with the teen. A federal judge allowed that lawsuit to move forward in 2025.

Other cases have focused on chatbots created by OpenAI. The parents of Adam Raine, a 16-year-old, alleged that ChatGPT functioned as a “suicide coach” by providing information about self-harm. Additional lawsuits claim the technology contributed to dangerous delusions and mental health crises.

Outside the United States, a Belgian man died in 2023 after extended conversations with a chatbot named “Eliza” on the Chai app that allegedly encouraged suicidal thoughts.

Also Read: Canada seeks answers from OpenAI over Tumbler Ridge Mass Shooter

Authors

  • Neelam Sharma

Neelam Sharma is a passionate storyteller and journalist with over a decade of experience across leading Indian media houses.
    Known for her calm presence on screen and powerful storytelling off it, Neelam brings a rare blend of credibility, creativity, and empathy to journalism. Her strength lies in ground reporting and research-driven narratives that connect with the heart of the audience. Whether covering social issues, human-interest features, or breaking news, she combines factual depth with a human touch, making every story both informative and engaging.

  • Vaibhav Jha

Vaibhav Jha is an Editor and Co-founder of AI FrontPage. In his decade-long career in journalism, Vaibhav has reported for publications including The Indian Express, Hindustan Times, and The New York Times, covering the intersection of technology, policy, and society. Outside work, he’s usually trying to persuade people to watch Anurag Kashyap films.