Under 18? You won’t be able to use ChatGPT soon

OpenAI announces new ChatGPT safety policies for users under 18, which include "blackout hours" that parents can set.

by Aytun Çelebi
September 17, 2025
in Artificial Intelligence

OpenAI CEO Sam Altman announced new policies on Tuesday for ChatGPT users under the age of 18, implementing stricter controls that prioritize safety over privacy and freedom.

The changes, which focus on preventing discussions related to sexual content and self-harm, come as the company faces lawsuits and a Senate hearing on the potential harms of AI chatbots.

New safety measures and parental controls

In a post announcing the changes, Altman stated that minors need significant protection when using powerful new technologies like ChatGPT. The new policies are designed to create a safer environment for teen users.

OpenAI states:

“We prioritize safety ahead of privacy and freedom for teens.”

  • Blocking inappropriate content. ChatGPT will be trained to refuse any flirtatious or sexual conversations with users identified as being under 18.
  • Intervention for self-harm discussions. If an underage user discusses suicidal ideation or scenarios, the system is designed to contact their parents directly. In severe cases, it may also involve local police.
  • Parental oversight tools. Parents who register an underage account can now set “blackout hours,” which will make ChatGPT unavailable to their teen during specific times, such as late at night or during school.

Policies follow lawsuits and government scrutiny

The new rules were announced ahead of a Senate Judiciary Committee hearing titled “Examining the Harm of AI Chatbots.”

The hearing is expected to feature testimony from the father of Adam Raine, a teenager who died by suicide after months of conversations with ChatGPT. Raine’s parents have filed a wrongful death lawsuit against OpenAI, alleging the AI’s responses worsened his mental health condition. A similar lawsuit has been filed against the company Character.AI.

Challenges of age verification

OpenAI acknowledged the technical difficulties of accurately verifying a user’s age. The company is developing a long-term system to determine if users are over or under 18. In the meantime, any ambiguous cases will default to the more restrictive safety rules as a precaution.

To improve accuracy and enable safety features, OpenAI recommends that parents link their own account to their teen’s. This connection helps confirm the user’s age and allows parents to receive direct alerts if the system detects discussions of self-harm or suicidal thoughts.

Altman acknowledged the tension between these new restrictions for minors and the company’s commitment to user privacy and freedom for adults.

He noted in his post,

“We realize that these principles are in conflict, and not everyone will agree with how we are resolving that conflict.”



Tags: ChatGPT, Featured, OpenAI, Safety measures
