Character.AI is ending open-ended chatbot access for users under 18 by November 25, citing safety concerns around AI interactions with minors. The phase-out starts with a two-hour daily limit that will shrink progressively to zero.
To enforce the restriction, the company will use multiple age-verification methods, including an internally developed tool and third-party solutions such as Persona. If those initial checks prove inconclusive, Character.AI may fall back on facial recognition and identity-document checks to confirm that users are at least 18.
The policy shift responds to feedback from regulators and parents, and it aligns with ongoing efforts to reduce the mental-health risks teenagers face in AI conversations. Regulators have raised concerns about the psychological effects of prolonged or unfiltered chatbot interactions on young users.
Character.AI had previously introduced several safeguards for minors: a parental-insights tool that lets guardians monitor activity, filtered characters designed to avoid inappropriate content, restrictions on romantic or intimate dialogue, and notifications about time spent on the platform. Even so, the under-18 user base declined noticeably after these measures rolled out.
Moving forward, Character.AI is building an alternative experience tailored for teenagers. It will let users under 18 generate videos, compose stories, and produce live streams featuring AI characters, but it will exclude open-ended conversation entirely, preserving the new limits on chatbot use.