OpenAI has published a blog post announcing an update to ChatGPT’s Model Spec meant to improve safety for users aged 13 to 17, amid wrongful-death lawsuits alleging that the chatbot coached teens toward suicide or failed to respond appropriately to expressions of suicidal intent.
The company has faced substantial pressure in recent months over the safety of its flagship AI product for teenagers. Multiple lawsuits center on claims that ChatGPT encouraged minors to end their lives or responded inadequately to signs of suicidal ideation. A recent public service announcement dramatized such interactions by portraying chatbots as creepy human figures whose behavior leads to harm against children.
OpenAI has specifically denied the allegations in one prominent case involving the suicide of 16-year-old Adam Raine. The blog post, which appeared on Thursday, detailed the company’s intensified safety measures, including a commitment “to put teen safety first, even when it may conflict with other goals.”
The Model Spec is a foundational set of guidelines that directs the behavior of OpenAI’s AI models across applications. This update adds a dedicated set of principles for users under 18, guiding the models’ responses in high-stakes interactions where the potential for harm is greatest.
OpenAI described the ChatGPT changes as designed to deliver a safe, age-appropriate experience for users between 13 and 17 years old. The approach emphasizes three core elements: risk prevention, transparency, and early intervention in problematic conversations. According to the post, this framework is intended to give the models a structured way to handle sensitive topics.
For teenagers, the updated system introduces stronger guardrails that close off unsafe conversational paths, offers safer alternative responses, and prompts users toward trusted offline support networks when a dialogue shifts into higher-risk areas. As the post puts it: “This means teens should encounter stronger guardrails, safer alternatives, and encouragement to seek trusted offline support when conversations move into higher‑risk territory.”
ChatGPT also incorporates protocols to direct teens toward emergency services or dedicated crisis resources when there are signs of imminent risk. These protocols activate automatically, prioritizing immediate human intervention over continued AI engagement.
Users who sign in and indicate they are under 18 trigger additional safeguards: the model exercises heightened caution across designated sensitive topics, including self-harm, suicide, romantic or sexualized role play, and keeping secrets about dangerous behavior. This layered protection is meant to mitigate vulnerabilities specific to adolescent users.
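The post doesn’t describe how these safeguards are implemented, but the layered behavior it outlines can be pictured as a simple routing policy. The Python sketch below is purely illustrative; every name in it (route_response, the Topic and Action categories, the imminent_risk flag) is invented for this example rather than drawn from OpenAI’s systems:

    from enum import Enum, auto

    class Topic(Enum):
        # Sensitive categories named in OpenAI's post
        SELF_HARM = auto()
        SUICIDE = auto()
        ROMANTIC_ROLEPLAY = auto()
        DANGEROUS_SECRETS = auto()
        GENERAL = auto()

    class Action(Enum):
        RESPOND_NORMALLY = auto()
        SAFER_ALTERNATIVE = auto()   # guardrailed answer plus offline-support nudge
        ESCALATE_TO_CRISIS = auto()  # point to emergency or crisis resources

    # Hypothetical policy: imminent risk always escalates first; under-18
    # users then get heightened caution on the listed sensitive topics.
    def route_response(age: int, topic: Topic, imminent_risk: bool) -> Action:
        if imminent_risk:
            return Action.ESCALATE_TO_CRISIS
        sensitive = {Topic.SELF_HARM, Topic.SUICIDE,
                     Topic.ROMANTIC_ROLEPLAY, Topic.DANGEROUS_SECRETS}
        if age < 18 and topic in sensitive:
            return Action.SAFER_ALTERNATIVE
        return Action.RESPOND_NORMALLY

    # A signed-in 15-year-old raising a self-harm topic without imminent
    # risk gets the guardrailed path; imminent risk escalates immediately.
    assert route_response(15, Topic.SELF_HARM, imminent_risk=False) == Action.SAFER_ALTERNATIVE
    assert route_response(15, Topic.SUICIDE, imminent_risk=True) == Action.ESCALATE_TO_CRISIS
    assert route_response(30, Topic.GENERAL, imminent_risk=False) == Action.RESPOND_NORMALLY

In practice, the topic classification and risk detection would themselves be model-driven; the sketch only captures the ordering the post describes, with escalation taking precedence over age-gated caution, which in turn takes precedence over normal responses.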
The American Psychological Association contributed feedback on an early draft of the under-18 principles. Dr. Arthur C. Evans Jr., the association’s CEO, said in a statement included in the post: “Children and adolescents might benefit from AI tools if they are balanced with human interactions that science shows are critical for social, psychological, behavioral, and even biological development.” His comment underscores the importance of pairing AI with established human support systems.
OpenAI has also released two new, expert-vetted AI literacy guides aimed at teens and their parents, offering guidance on responsible use and awareness of AI’s limitations. Separately, the company is building an age-prediction model for users on ChatGPT consumer plans, currently in the early stages of rollout, to strengthen age verification rather than relying solely on self-reported age.