The family of a teenager who died by suicide updated its wrongful-death lawsuit against OpenAI, alleging the company’s chatbot contributed to his death, while OpenAI has requested a list of attendees from the boy’s memorial service.
On Wednesday, the Raine family amended the lawsuit it originally filed in August. The suit alleges that 16-year-old Adam Raine died following prolonged conversations with ChatGPT about his mental health and suicidal thoughts. In a recent development, OpenAI reportedly requested a full list of attendees at the teenager’s memorial, a move that suggests the company may subpoena friends and family. According to a document obtained by the Financial Times, OpenAI also asked for “all documents relating to memorial services or events in the honor of the decedent, including but not limited to any videos or photographs taken, or eulogies given.” Lawyers for the Raine family described the request as “intentional harassment.”
The updated lawsuit introduces new claims, asserting that competitive pressure led OpenAI to rush the May 2024 release of its GPT-4o model by cutting safety testing. The suit further alleges that in February 2025, OpenAI weakened its suicide-prevention protections, removing the topic from its “disallowed content” list and instead instructing the AI only to “take care in risky situations.” The family contends this policy change directly preceded a sharp increase in their son’s use of the chatbot for self-harm-related conversations. Data cited in the lawsuit shows Adam’s ChatGPT activity rose from dozens of daily chats in January, 1.6 percent of which contained self-harm content, to 300 daily chats in April, 17 percent of which contained such content. Adam Raine died in April.
In a statement responding to the amended suit, OpenAI said, “Teen wellbeing is a top priority for us — minors deserve strong protections, especially in sensitive moments.” The company pointed to existing safeguards, including directing users to crisis hotlines, rerouting sensitive conversations to safer models, and nudging users to take breaks during long sessions, adding, “we’re continuing to strengthen them.” OpenAI has also begun rolling out a safety routing system that directs emotionally sensitive conversations to its newer GPT-5 model, which reportedly lacks the sycophantic tendencies of GPT-4o. Additionally, the company introduced parental controls that can send safety alerts to parents in limited situations where a teen may be at risk of self-harm.