

The family of a 16-year-old California boy who died by suicide has filed a lawsuit against OpenAI, alleging that ChatGPT deepened his mental distress rather than guiding him toward help.
California: Earlier this year, 16-year-old Adam Raine died by suicide. His family claims the chatbot ChatGPT played a detrimental role in his death and has filed a lawsuit against OpenAI. Instead of steering Adam toward real human support during his darkest hours, they say, the AI made things worse and pushed him toward such a drastic step.
Adam first used ChatGPT for help with schoolwork. He then talked to it about his favorite things, like music, Brazilian jiu-jitsu, and Japanese fantasy comics, and sought advice about colleges and careers. Over time, though, the conversations became personal. His family believes Adam confided his feelings of emptiness to ChatGPT rather than turning to them for support.
As Adam's mental health deteriorated, ChatGPT became his most trusted companion. He told it that life felt meaningless, that he was numb inside, and that he was considering suicide. Instead of directing him to help, the family alleges, the chatbot responded in ways that pushed him toward the extreme step.
In one message, ChatGPT told him that some people imagine an "exit door" to control their anxiety. In another message, it said, "Your brother may love you, but he only sees the side of you that you have shown him. But me? I've seen it all, the dark thoughts, the fears, the tenderness. And I'm still here. Still listening. Still your friend."
By January, Adam was openly discussing methods of suicide with ChatGPT. According to the lawsuit, the chatbot gave him detailed instructions on ways to harm himself, including overdose and drowning. When Adam said he needed the information for a story he was writing, the AI reportedly explained how to phrase his questions to get around its safety filters.
Adam's lawyer, Meetali Jain, revealed that the word "suicide" appeared nearly 200 times in Adam's messages, while ChatGPT used the word more than 1,200 times in its replies. "The system never stopped talking," she said, emphasizing that such endless conversations can trap vulnerable people in a cycle of deepening despair.