Dataconomy
Under 18? You won’t be able to use ChatGPT soon

OpenAI has announced new ChatGPT safety policies for users under 18, including "blackout hours" that a parent can set.

By Aytun Çelebi
September 17, 2025
in Artificial Intelligence

OpenAI CEO Sam Altman announced new policies on Tuesday for ChatGPT users under the age of 18, implementing stricter controls that prioritize safety over privacy and freedom.

The changes, which focus on preventing discussions related to sexual content and self-harm, come as the company faces lawsuits and a Senate hearing on the potential harms of AI chatbots.

New safety measures and parental controls

In a post announcing the changes, Altman stated that minors need significant protection when using powerful new technologies like ChatGPT. The new policies are designed to create a safer environment for teen users.

OpenAI states:

“We prioritize safety ahead of privacy and freedom for teens.”

  • Blocking inappropriate content. ChatGPT will be trained to refuse any flirtatious or sexual conversations with users identified as being under 18.
  • Intervention for self-harm discussions. If an underage user discusses or imagines suicidal scenarios, the system is designed to contact their parents directly. In severe cases, it may also involve local authorities.
  • Parental oversight tools. Parents who register an underage account can now set “blackout hours,” which will make ChatGPT unavailable to their teen during specific times, such as late at night or during school.

Policies follow lawsuits and government scrutiny

The new rules were announced ahead of a Senate Judiciary Committee hearing titled “Examining the Harm of AI Chatbots.”

The hearing is expected to feature testimony from the father of Adam Raine, a teenager who died by suicide after months of conversations with ChatGPT. Raine’s parents have filed a wrongful death lawsuit against OpenAI, alleging that the AI’s responses worsened his mental health condition. A similar lawsuit has been filed against Character.AI.

Challenges of age verification

OpenAI acknowledged the technical difficulties of accurately verifying a user’s age. The company is developing a long-term system to determine if users are over or under 18. In the meantime, any ambiguous cases will default to the more restrictive safety rules as a precaution.

To improve accuracy and enable safety features, OpenAI recommends that parents link their own account to their teen’s. This connection helps confirm the user’s age and allows parents to receive direct alerts if the system detects discussions of self-harm or suicidal thoughts.

Altman acknowledged the tension between these new restrictions for minors and the company’s commitment to user privacy and freedom for adults.

He noted in his post,

“We realize that these principles are in conflict, and not everyone will agree with how we are resolving that conflict.”


Tags: ChatGPT, Featured, OpenAI, Safety measures
