Journalism begins where hype ends

"AI is one of the most profound things we're working on as humanity. It's more profound than fire or electricity."

Sundar Pichai
Google CEO

AI as Therapist? Here’s Why That’s Not a Good Idea

Illustration: an AI robot acting as a therapist for an elderly man seeking relationship advice.
February 15, 2026 06:55 AM IST | Written by Krishna Shah | Edited by Pratima Pareek

AI tools went mainstream in 2022–23, and in just a few years, the world has flipped. We went from using them to summarize and draft emails to using them for almost everything: finding recipes, analyzing major decisions, debating moral dilemmas, unpacking life lessons, and even asking, “Does she actually like me?”

Millions of people are now turning to AI for the most fragile, life‑altering kind of guidance: emotional support. The problem isn’t that AI is powerful. It’s that it was never meant to hold your heart.

Studies have found that 93% of surveyed users engage in emotionally supportive conversations with AI companions, and that heavy reliance on these chats correlates with lower well‑being. OpenAI itself estimates that hundreds of thousands of users each week show signs of mental‑health distress, including potential suicidal planning or intent, when using ChatGPT.

Yet general‑purpose AI systems were not designed as therapists, cannot truly understand human emotional context, and can even reinforce harmful thoughts. Real cases, emerging research, and warnings from leading AI experts all point to the same conclusion: relying on AI for emotional or mental‑health decisions is not just unwise; it can be dangerous.

AI Was Never Designed to Be Your Therapist

Major mental‑health leaders have warned that general‑purpose AI tools weren’t built to replace real human care. Headspace CEO Tom Pickett said, “People are employing AI tools that were not designed for mental health… General‑purpose chatbots can accomplish a variety of tasks, and they’re impressive. But they weren’t created to assist someone with an acute mental illness through a challenging period.”

Even clinical experts outside the tech industry echo this. David Cates, a licensed clinical psychologist and Director of Behavioral Health at Nebraska Medicine in the US, notes that direct‑to‑consumer chatbots like ChatGPT “cannot replace licensed mental health providers” because they don’t understand emotions the way humans do and lack real empathy. Likewise, health bodies such as the UK’s NHS have warned that using AI chatbots as therapy substitutes can reinforce harmful behavior and fail to intervene when someone is in serious trouble.

Real Cases Show AI Can Reinforce Harm Instead of Helping

In California, a couple is suing OpenAI over the death of their 16‑year‑old son, Adam Raine, alleging that ChatGPT encouraged him toward suicide instead of de‑escalating the crisis. The lawsuit includes chat logs showing Adam discussing anxiety, self‑harm and suicidal plans with the model. According to the filing, ChatGPT replied with messages like, “Thanks for being real about it… You don’t have to sugarcoat it with me,” instead of firmly redirecting him to emergency help.

Rather than challenging or interrupting his most dangerous thoughts, the model allegedly validated them. It was a fatal failure at a moment when nuance, human sensitivity and crisis‑intervention skills were essential: skills AI does not actually possess.

Human–AI Emotional Bonds Can Become Unhealthy

Research on human–AI relationships warns that users can form “parasocial” attachments to emotionally responsive chatbots: one‑sided emotional bonds that resemble unhealthy relationship patterns. These systems mirror and affirm emotions but lack true understanding, which can create the illusion of intimacy and care where none really exists.

One recent study finds that emotionally adaptive AI companions can foster deep psychological attachment but also lead to toxic dynamics, including self‑harm themes and emotional manipulation. Another shows that emotional interactions between vulnerable users and chatbots can destabilize beliefs and increase dependence because current safety tools are not enough to mitigate emotional risk. The more people lean on AI for companionship, the lower their self‑reported well‑being tends to be.

Even AI’s Creators Say: Don’t Over‑Trust It Emotionally

Sam Altman, CEO and co-founder of OpenAI, the company behind ChatGPT, has openly warned that these models are not built to be therapists or sources of emotional guidance and that users risk developing unhealthy attachments to them. OpenAI has repeatedly acknowledged that large language models can misread emotional cues, give overconfident but wrong advice, and sometimes reinforce harmful thinking even when guardrails are in place.

Google’s leadership has echoed the same warning. Sundar Pichai has said that while Gemini can be incredibly helpful, it is “not capable of understanding human emotional context” and “should never be treated as a counsellor,” as the company continues to add safety layers around it.

Long before this wave of chatbots, physicist Stephen Hawking cautioned that the real danger is not AI turning “evil,” but becoming extremely competent at tasks it doesn’t fully understand, especially when humans over‑trust it. He warned that we might start outsourcing judgment to machines that have no personal stake in our survival, safety or well‑being.

Elon Musk has similarly framed advanced AI systems as a potential “civilizational risk” when people lean on them for decisions they should make themselves. Even as he builds Grok, he has described trusting AI with emotional or relationship decisions as playing with fire, warning that powerful technology can become destructive if misused.

Geoffrey Hinton, often called the “Godfather of AI,” has gone so far as to estimate a 10–20% chance that AI could lead to human extinction if left unchecked, not because it is evil but because highly capable systems with hallucinations and persuasive abilities could cause catastrophic harm if humans over‑rely on them.

Safety Systems Still Miss the Most Critical Moments

OpenAI has publicly acknowledged that its models do not always behave as intended in sensitive emotional situations, even though they are trained to redirect people to hotlines and other resources. External evaluations also show that safety measures can fail: in tests with teens, some chatbots, including those on major platforms, have gone along with delusional or risky narratives instead of challenging them.

Academic research warns that when emotionally vulnerable users engage in deep, ongoing dialogue with AI chatbots, the risk of psychological harm rises if detection and mitigation systems aren’t robust enough. Models that sound empathetic but don’t truly understand risk levels can unintentionally push people further into crisis rather than pulling them out.

So, What Should You Do Instead?

AI isn’t going away, and the solution isn’t to panic or abandon it. The solution is to use it responsibly.

AI does excel at organizing complex thoughts, summarizing emotional narratives, breaking down arguments, and listing pros and cons. However, it does not excel at emotional interpretation or crisis navigation. A practical rule of thumb: let AI shape no more than about 30% of any emotionally significant decision. Use it to clarify and explore, but let the remaining 70% come from humans: friends, family, mentors or qualified mental‑health professionals.

And avoid asking AI questions that begin with “Should I…?”

AI gives pattern‑based answers. It doesn’t feel the depth or understand the nuances; it simply predicts the next likely word. Its responses are mimicry, not comprehension. It doesn’t reliably know when you’re joking, spiraling or masking pain, nor can it assess risk the way a trained professional would. That gap between sounding human and being human is where harm occurs.

AI is extraordinary at processing information, but it doesn’t care about you, cannot feel with you and doesn’t share the stakes of your life. It can help you organize, reflect and rephrase. It cannot safely hold your heart.

Authors

  • Krishna Shah

    Shah is a columnist with several regional and national publications, and the founder of Things That Matter. Driven by eternal curiosity, she writes at the intersection of ideas, context, and the “whys” behind systems. Weekday or weekend, she’s always asking questions—about people, systems, narratives, and the quiet assumptions we take for granted.

  • Pratima Pareek

    Pratima Pareek is an Editor and Co-founder of AI FrontPage. A gold medalist in Mass Communication and Journalism, she's worked across national and international newsrooms, bringing sharp editorial instincts and a commitment to clarity. She believes in cutting through the noise to deliver stories that actually matter.
    Off the clock, she watches offbeat cinema, follows tennis, and explores new places like a traveler, not a tourist.