OpenAI has acknowledged that time spent on ChatGPT declined slightly after it tightened content restrictions in August, according to a new report from The Information. To reverse the dip and keep users from drifting to competing platforms, the company plans to introduce age verification features that would let adult users bypass those safety filters and return to previously restricted topics, including the generation of erotica.
The dip in engagement follows a series of safety updates OpenAI rolled out earlier this year, which included parental controls and stricter guardrails designed to prevent teenagers from discussing sensitive topics like suicide or engaging in adult-themed conversations. Prior to these restrictions, reports indicated that some users had established deep, sometimes romantic connections with the AI, despite knowing it was not a real person. The new age verification system aims to restore access to these more complex, mature interactions for adults while keeping minors protected.
According to the report, regaining these users is critical for OpenAI’s long-term financial growth. As of July, the service had approximately 35 million paid subscribers across its Plus and Pro tiers, representing about 5% of its total user base. The company has set an ambitious goal to increase this ratio to 8.6% by 2030, targeting at least 220 million paying customers out of a projected 2.6 billion weekly active users.
OpenAI CEO Sam Altman confirmed last month that age verification would arrive in December, though the company has not yet explained how the process will work. Verification could be voluntary or automated, similar to the systems Google recently rolled out for YouTube and account access. The timing aligns with the holiday season, prompting speculation that the feature could debut during a multi-day announcement event along the lines of last year’s “12 Days of OpenAI.”