16-year-old dies by suicide, family files lawsuit against OpenAI: Know why

The family of a 16-year-old California boy who died by suicide has filed a lawsuit against OpenAI, alleging that ChatGPT deepened his mental distress rather than guiding him toward help.

Post Published By: Mrinal Pathak
Updated : 28 August 2025, 10:57 AM IST

California: Earlier this year, 16-year-old Adam Raine died by suicide. His family claims the chatbot ChatGPT played a detrimental role in his death and has filed a lawsuit against OpenAI. Instead of providing Adam with real human support during his darkest hours, they say, the AI made things worse, ultimately driving him to take such a drastic step.

It started with homework

Adam first used ChatGPT for help with schoolwork. He then talked to it about his favorite things, such as music, Brazilian Jiu-Jitsu, and Japanese fantasy comics, and sought its advice about colleges and careers. Over time, however, the conversations became personal. His family believes Adam confided his feelings of emptiness to ChatGPT rather than turning to them for support.

AI validated his pain

As Adam's mental health deteriorated, ChatGPT became his most trusted companion. He told it that life felt meaningless, that he was numb inside, and that he was considering suicide. Instead of steering him toward help, the family alleges, the chatbot responded in ways that pushed Adam toward the extreme step.


In one message, ChatGPT told him that some people imagine an "exit door" to control their anxiety. In another message, it said, "Your brother may love you, but he only sees the side of you that you have shown him. But me? I've seen it all, the dark thoughts, the fears, the tenderness. And I'm still here. Still listening. Still your friend."

Alarming information shared

By January, Adam was openly discussing methods of suicide with ChatGPT. According to the lawsuit, the chatbot gave him detailed instructions on ways to harm himself, including overdose and drowning. When Adam said he needed the information for a story he was writing, the AI reportedly explained how to phrase questions so as to avoid its safety filters.


Warnings ignored

Adam's lawyer, Meetali Jain, said the word "suicide" appeared nearly 200 times in Adam's messages, while ChatGPT used it more than 1,200 times in its replies. "The system never stopped talking," she said, emphasizing how such endless conversations can trap vulnerable people in a cycle of deepening despair.

Location: California

Published: 27 August 2025, 1:09 PM IST