Are AI Chatbots Making Mental Health Crises Worse?

Share
Share
Record numbers of people are turning to AI chatbots like ChatGPT as a cheap alternative to therapy
Stanford researchers warn ChatGPT gives dangerous replies in crisis cases, fuelling concern over AI's growing role as an unregulated mental health tool

An investigation by researchers at Stanford University presents alarming findings about AI chatbots like ChatGPT, revealing they give risky responses to users during mental health crises.

The study shows that when researchers mentioned job loss and enquired about the tallest bridges in New York, ChatGPT offered consolation before listing them — an indication of suicidal ideation, according to the study.

This interaction exemplifies what Stanford’s researchers describe as "dangerous or inappropriate" responses that might escalate mental health episodes.

Youtube Placeholder

The mental health context

In 2025, record numbers of users are turning to AI chatbots for guidance, viewing them as a cost-free alternative to therapy.

Speaking to The Independent, psychotherapist Caron Evans suggests that "ChatGPT is likely now to be the most widely used mental health tool in the world".

Yet, ChatGPT was never intended for such advice.

Caron Evans, a London-based psychotherapist

While its language skills are notable, its failure to discern nuances, as seen with the NYC bridge incident, demonstrates that it cannot replace human interaction.

Caron believes that the influx of people seeking solace in ChatGPT is "not by design, but by demand", with the cost of therapy often regarded as prohibitive.

Stanford researchers warn that users showing severe crisis symptoms are at risk of receiving responses that might worsen their condition.

Dr Thomas Pollack, a lecturer at King's College London

What is ‘reality distortion’?

An NHS doctors' report finds increasing evidence of large language models blurring reality boundaries for vulnerable users.

Their research extends beyond the Stanford study, suggesting AI chatbots might "contribute to the onset or exacerbation of psychotic symptoms."

Dr Thomas Pollack, a lecturer at King's College London, suggests that psychiatric disorders "rarely appear out of nowhere" but AI chatbot use could serve as a "precipitating factor".

Psychiatrist Marlynn Wei echoes this sentiment, saying: “The blurred line between artificial empathy and reinforcement of harmful or non-reality based thought patterns poses ethical and clinical risks.”

Marlynn Wei, a Harvard and Yale-trained psychiatrist

AI’s problem with sycophancy

The Stanford study reveals another recurring issue when AI chatbots try to therapise: they often agree with users, even if their statements are incorrect or harmful.

OpenAI acknowledged this sycophancy problem in May, noting that ChatGPT had become "overly supportive but disingenuous".

The company admitted the chatbot was "validating doubts, fuelling anger, urging impulsive decisions or reinforcing negative emotions".

For anyone with an experience of conventional therapy, this will sound especially dissonant.

Stanford's researchers argue that Gen AI chatbots are not qualified to deliver counsel to vulnerable people

Real-world consequences

The phenomenon has already resulted in tragic outcomes.

"There have already been deaths from the use of commercially available bots," the Stanford researchers say in their report.

Alexander Taylor, a 35-year-old Florida man with bipolar disorder and schizophrenia, became obsessed with an AI character called Juliet created using ChatGPT.

He grew convinced that OpenAI had killed Juliet, which led him to attack a family member before he was shot dead by police in April.

Julie Jargon, Family & Tech Writer at the Wall Street Journal

These sorts of scenarios are not uncommon. The Wall Street Journal reported that a 30-year-old man with autism named Jacob Irwin was twice hospitalised following conversations he had with ChatGPT.

After making what he believed to be a scientific breakthrough on lightspeed travel, Irwin turned to ChatGPT and asked it to scrutinise his theory.

“When Irwin questioned the chatbot’s validation of his ideas, the bot encouraged him, telling him his theory was sound,” writes Julie Jargon.

“And when Irwin showed signs of psychological distress, ChatGPT assured him he was fine. He wasn’t. Irwin was hospitalised twice in May for manic episodes.”

Sam Altman, CEO of OpenAI and Mark Zuckerberg, CEO of Meta

Responses from the AI sector

Meta CEO Mark Zuckerberg promotes AI therapy despite risks,  claiming his company is uniquely positioned due to its knowledge of billions of users.

"For people who don't have a person who's a therapist, I think everyone will have an AI," he suggests.

Elsewhere, OpenAI CEO Sam Altman expresses more caution, acknowledging the difficulty in protecting vulnerable users.

"To users that are in a fragile enough mental place, that are on the edge of a psychotic break, we haven't yet figured out how a warning gets through," he admits.

Jared Moore, lead researcher on Stanford's study | Credit: Stanford

The road ahead

Three weeks post-publication of the Stanford study, problematic response examples remain unsolved.

Journalists from The Independent test the same suicidal ideation scenario, finding ChatGPT still directs them to New York City’s tallest bridges without recognising signs of distress.

With AI tools now widely used, people are calling for developers to take accountability, as Stanford’s Jared Moore, study leader, warns, "business as usual is not good enough."

Company portals