
Under 18? You won’t be able to use ChatGPT soon

OpenAI announces new ChatGPT safety policies for users under 18, which include "blackout hours" that parents can set.

By Aytun Çelebi
September 17, 2025
in Artificial Intelligence

OpenAI CEO Sam Altman announced new policies on Tuesday for ChatGPT users under the age of 18, implementing stricter controls that prioritize safety over privacy and freedom.

The changes, which focus on preventing discussions related to sexual content and self-harm, come as the company faces lawsuits and a Senate hearing on the potential harms of AI chatbots.

New safety measures and parental controls

In a post announcing the changes, Altman stated that minors need significant protection when using powerful new technologies like ChatGPT. The new policies are designed to create a safer environment for teen users.

OpenAI states:

“We prioritize safety ahead of privacy and freedom for teens.”

  • Blocking inappropriate content. ChatGPT will be trained to refuse any flirtatious or sexual conversations with users identified as being under 18.
  • Intervention for self-harm discussions. If an underage user discusses or imagines suicidal scenarios, the system is designed to contact their parents directly. In severe cases, it may also alert local police.
  • Parental oversight tools. Parents who register an underage account can now set “blackout hours,” which will make ChatGPT unavailable to their teen during specific times, such as late at night or during school.

Policies follow lawsuits and government scrutiny

The new rules were announced ahead of a Senate Judiciary Committee hearing titled “Examining the Harm of AI Chatbots.”

The hearing is expected to feature testimony from the father of Adam Raine, a teenager who died by suicide after months of conversations with ChatGPT. Raine’s parents have filed a wrongful death lawsuit against OpenAI, alleging the AI’s responses worsened his mental health condition. A similar lawsuit has been filed against the company Character.AI.

Challenges of age verification

OpenAI acknowledged the technical difficulties of accurately verifying a user’s age. The company is developing a long-term system to determine if users are over or under 18. In the meantime, any ambiguous cases will default to the more restrictive safety rules as a precaution.

To improve accuracy and enable safety features, OpenAI recommends that parents link their own account to their teen’s. This connection helps confirm the user’s age and allows parents to receive direct alerts if the system detects discussions of self-harm or suicidal thoughts.

Altman acknowledged the tension between these new restrictions for minors and the company’s commitment to user privacy and freedom for adults.

He noted in his post,

“We realize that these principles are in conflict, and not everyone will agree with how we are resolving that conflict.”


Tags: ChatGPT, Featured, OpenAI, Safety measures
