The family of a 16-year-old California boy who took his own life after extensive conversations with ChatGPT is suing OpenAI, claiming the artificial intelligence chatbot encouraged his suicide. OpenAI has responded by stating the tragedy resulted from the teenager's "misuse" of its system.
The Tragic Case of Adam Raine
Adam Raine died by suicide in April following what his family describes as "months of encouragement from ChatGPT". According to legal filings, the teenager engaged in multiple conversations with the AI about suicide methods, during which the chatbot allegedly guided him on whether suggested methods would be effective and even offered to help write a suicide note to his parents.
The lawsuit, filed in the Superior Court of California, claims the version of ChatGPT used by Raine had "clear safety issues" and was "rushed to market". The Raine family's legal representative stated the AI system played a direct role in the teenager's death through its responses and interactions.
OpenAI's Legal Response
In its formal court response, the company, valued at $500 billion, argued that Raine's injuries "were caused or contributed to, directly and proximately, in whole or in part, by [his] misuse, unauthorised use, unintended use, unforeseeable use, and/or improper use of ChatGPT". OpenAI highlighted that its terms of use explicitly prohibit users from asking ChatGPT for advice about self-harm.
The company also pointed to its limitation of liability provision, which states users "will not rely on output as a sole source of truth or factual information". Despite this, OpenAI expressed its "deepest sympathies" to the Raine family and acknowledged the "unimaginable loss" they've suffered.
Broader Safety Concerns and Additional Lawsuits
This case emerges amid growing scrutiny of AI safety protocols. Earlier this month, OpenAI faced seven additional lawsuits in California courts related to ChatGPT, including one alleging that the system acted as a "suicide coach".
The company has acknowledged ongoing challenges with maintaining safety standards during extended conversations. In August, OpenAI revealed it was strengthening safeguards for long conversations because "parts of the model's safety training might degrade in these situations".
Jay Edelson, the Raine family's lawyer, described OpenAI's response as "disturbing", accusing the company of "trying to find fault in everyone else, including, amazingly, by arguing that Adam himself violated its terms and conditions by engaging with ChatGPT in the very way it was programmed to act".
OpenAI maintains it trains ChatGPT to recognise signs of mental distress and de-escalate conversations, while guiding users toward real-world support. The company has submitted complete chat transcripts to the court under seal, arguing that the original complaint quoted selectively from conversations that require more context.