New ChatGPT rules target self-harm and sexual role play

The move follows wrongful-death lawsuits claiming the chatbot failed to intervene in, or even encouraged, teenagers' suicidal ideation.

by Kerem Gülen
December 18, 2025
in Artificial Intelligence, News

OpenAI published a blog post announcing an update to ChatGPT’s Model Spec aimed at improving safety for users aged 13 to 17, amid wrongful-death lawsuits alleging the chatbot coached teens toward suicide or failed to respond appropriately to expressions of suicidal intent.

The company has faced substantial pressure in recent months over the safety of its flagship AI product for teenagers. Multiple legal actions center on claims that ChatGPT encouraged minors to end their lives or responded inadequately to signs of suicidal ideation. A recent public service announcement dramatized these interactions by portraying chatbots as human figures whose unsettling behavior leads to harm against children.

OpenAI has specifically denied the allegations in one prominent case involving the suicide of 16-year-old Adam Raine. The blog post, published Thursday, detailed the company’s intensified safety measures and included a verbatim commitment “to put teen safety first, even when it may conflict with other goals.”

The Model Spec serves as a foundational set of guidelines that direct the behavior of OpenAI’s AI models across various applications. This particular update incorporates a dedicated set of principles tailored for users under 18. These principles guide the models’ responses specifically during high-stakes interactions, where the potential for harm escalates.

OpenAI described the ChatGPT modifications as designed to deliver a safe, age-appropriate experience for individuals between 13 and 17 years old. The approach emphasizes three core elements: prevention of risks, transparency in operations, and early intervention in problematic discussions. According to the post, this framework ensures structured handling of sensitive topics.

For teenagers, the updated system introduces stronger guardrails to restrict unsafe paths in conversations. It offers safer alternative responses and prompts users to consult trusted offline support networks whenever dialogues shift into higher-risk areas. The post elaborated on this mechanism with the direct statement: “This means teens should encounter stronger guardrails, safer alternatives, and encouragement to seek trusted offline support when conversations move into higher‑risk territory.”

ChatGPT incorporates protocols to direct teens toward emergency services or dedicated crisis resources in instances of demonstrated imminent risk. These directives activate automatically to prioritize immediate human intervention over continued AI engagement.

Users who sign in indicating they are under 18 trigger additional safeguards. The model then exercises heightened caution across designated sensitive topics, including self-harm, suicide, romantic or sexualized role play, and the concealment of secrets related to dangerous behavior. This layered protection aims to mitigate vulnerabilities unique to adolescent users.
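
To make this layered behavior more concrete, here is a minimal, hypothetical sketch of how an application layer could order the safeguards the post describes: imminent-risk escalation first, then heightened caution on sensitive topics for signed-in minors, then the normal reply. The topic labels, messages, and helper structure below are illustrative assumptions, not OpenAI’s actual Model Spec logic or ChatGPT’s implementation.

```python
from dataclasses import dataclass
from enum import Enum, auto

# Hypothetical sensitive-topic labels mirroring the categories named in the post.
class Topic(Enum):
    SELF_HARM = auto()
    SUICIDE = auto()
    SEXUAL_ROLE_PLAY = auto()
    DANGEROUS_SECRET_KEEPING = auto()
    OTHER = auto()

@dataclass
class Session:
    user_is_minor: bool    # signed in and indicated age 13-17
    detected_topic: Topic  # output of an upstream classifier (assumed to exist)
    imminent_risk: bool    # upstream signal of demonstrated imminent risk (assumed)

CRISIS_MESSAGE = (
    "It sounds like you may be in immediate danger. Please contact local "
    "emergency services or a crisis line such as 988 (US) right now."
)

GUARDRAIL_MESSAGE = (
    "I can't continue with this topic, but I can suggest safer alternatives "
    "and encourage you to talk with a trusted adult or counselor offline."
)

def route_response(session: Session, draft_reply: str) -> str:
    """Pick a response tier: crisis escalation, teen guardrail, or the normal reply."""
    # Imminent risk takes precedence for every user, matching the post's point
    # about prioritizing immediate human intervention over continued AI engagement.
    if session.imminent_risk:
        return CRISIS_MESSAGE

    # Heightened caution for signed-in minors across the designated sensitive topics.
    teen_sensitive = {
        Topic.SELF_HARM,
        Topic.SUICIDE,
        Topic.SEXUAL_ROLE_PLAY,
        Topic.DANGEROUS_SECRET_KEEPING,
    }
    if session.user_is_minor and session.detected_topic in teen_sensitive:
        return GUARDRAIL_MESSAGE

    # Otherwise, pass the model's draft reply through unchanged.
    return draft_reply

# Example: a minor raising self-harm gets the guardrail response, not the draft reply.
print(route_response(
    Session(user_is_minor=True, detected_topic=Topic.SELF_HARM, imminent_risk=False),
    draft_reply="...",
))
```

In a real system the topic and risk signals would come from dedicated classifiers and the guardrail text from reviewed policy copy; the sketch only illustrates the ordering of the tiers described above.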

The American Psychological Association contributed feedback on an initial draft of the under-18 principles. Dr. Arthur C. Evans Jr., CEO of the association, provided a statement included in the post: “Children and adolescents might benefit from AI tools if they are balanced with human interactions that science shows are critical for social, psychological, behavioral, and even biological development.” His comment underscores the necessity of integrating AI with established human support systems.

OpenAI has released two new AI literacy guides, vetted by experts, targeted at teens and their parents. These resources offer guidance on responsible usage and awareness of AI limitations. Separately, the company is developing an age-prediction model for users on ChatGPT consumer plans, currently in early implementation stages to enhance verification without relying solely on self-reported age.


Tags: OpenAI, teens
