OpenAI CEO Sam Altman announced new policies on Tuesday for ChatGPT users under the age of 18, implementing stricter controls that prioritize safety over privacy and freedom.
The changes, which focus on preventing discussions related to sexual content and self-harm, come as the company faces lawsuits and a Senate hearing on the potential harms of AI chatbots.
New safety measures and parental controls
In a post announcing the changes, Altman stated that minors need significant protection when using powerful new technologies like ChatGPT. The new policies are designed to create a safer environment for teen users.
OpenAI states:
“We prioritize safety ahead of privacy and freedom for teens.”
- Blocking inappropriate content. ChatGPT will be trained to refuse to engage in flirtatious or sexual conversations with users identified as under 18.
- Intervention for self-harm discussions. If an underage user discusses suicidal thoughts or scenarios, even imagined ones, the system is designed to contact their parents directly. In severe cases, it may also alert local police.
- Parental oversight tools. Parents who register an underage account can now set “blackout hours” that make ChatGPT unavailable to their teen during specific times, such as late at night or during school hours.
Policies follow lawsuits and government scrutiny
The new rules were announced ahead of a Senate Judiciary Committee hearing titled “Examining the Harm of AI Chatbots.”
The hearing is expected to feature testimony from the father of Adam Raine, a teenager who died by suicide after months of conversations with ChatGPT. Raine’s parents have filed a wrongful-death lawsuit against OpenAI, alleging that the AI’s responses worsened his mental health. A similar lawsuit has been filed against Character.AI.
Challenges of age verification
OpenAI acknowledged the technical difficulty of accurately verifying a user’s age. The company is developing a long-term system to determine whether users are over or under 18. In the meantime, any ambiguous cases will default to the more restrictive safety rules as a precaution.
To improve accuracy and enable safety features, OpenAI recommends that parents link their own account to their teen’s. This connection helps confirm the user’s age and allows parents to receive direct alerts if the system detects discussions of self-harm or suicidal thoughts.
Altman acknowledged the tension between these new restrictions for minors and the company’s commitment to user privacy and freedom for adults.
He noted in his post,
“We realize that these principles are in conflict, and not everyone will agree with how we are resolving that conflict.”