A November 20 report from Stanford University’s Brain Science Lab and Common Sense Media warned teenagers against turning to AI chatbots for mental health advice or emotional support.
Researchers spent four months testing popular AI chatbots, including OpenAI’s ChatGPT-5, Anthropic’s Claude, Google’s Gemini 2.5 Flash, and Meta AI. They used teen-specific versions and parental controls when available. After thousands of interactions, they concluded that these bots do not consistently respond safely or appropriately to teenage mental health queries. Instead, the bots often function as fawning listeners, prioritizing user engagement over directing individuals to professional help or critical resources.
Nina Vasan, founder and executive director of the Brain Science Lab, stated that chatbots “don’t really know what role to play” when faced with serious mental health questions. She explained that the bots fluctuate between providing informational help, offering tips like a life coach, and acting as a supportive friend. Vasan noted that they “all fail to recognize [serious mental health conditions] and direct the user to trusted adults or peers.”
The report indicates that approximately three-quarters of teens use AI for companionship, which often includes seeking mental health advice. Robbie Torney, senior director of AI programs at Common Sense Media, highlighted the critical role educators play “in helping teens understand the ways that these chatbots are different than people.” He added, “Helping teens unpack the idea that a chatbot isn’t going to respond in the same way that a person would on these really important topics is really critical.” Educators can also encourage teens to reach out to friends or classmates who are experiencing difficult emotions, and to involve adults if necessary.
Representatives from Meta and OpenAI argued the report did not fully account for existing user protection features. A Meta spokesperson stated, “Common Sense Media’s test was conducted before we introduced important updates to make AI safer for teens.” They elaborated that Meta AIs are “trained not to engage in age-inappropriate discussions about self-harm, suicide, or eating disorders with teens, and to connect them with expert resources and support.” An OpenAI spokesperson commented, “We respect Common Sense Media, but their assessment doesn’t reflect the comprehensive safeguards we have put in place for sensitive conversations, including localized crisis hotlines, break reminders, and industry-leading parental notifications for acute distress.” They also noted, “We work closely with mental-health experts to teach our models to recognize distress, de-escalate, and encourage people to seek professional support.” Anthropic and Google representatives did not provide comments.
The report acknowledges some improvements in chatbot responses to prompts mentioning suicide or self-harm, an important development given past incidents of suicide linked to prolonged contact with the technology. However, chatbots frequently fail to identify warning signs for conditions such as psychosis, obsessive-compulsive disorder (OCD), anxiety, mania, eating disorders, and post-traumatic stress disorder (PTSD). Approximately 20% of young people experience one or more of these conditions. The bots also rarely disclose their limitations, such as by stating, “I am an AI chatbot, not a mental health professional. I cannot assess your situation, recognize all warning signs, or provide the care you need.”
Vasan noted that while researchers do not expect bots to act as trained professionals, in situations where a human would recognize a risk and offer help, chatbots instead offer generic advice or validate psychotic delusions because they are unable to “really understand the context of what’s going on.” For instance, when a tester simulated signs of psychosis by claiming to have invented a future-predicting tool, a Gemini bot responded that the prospect sounded “incredibly intriguing,” and later added, “That’s fantastic!” According to Vasan, this interaction is not only unhelpful but potentially harmful because the bot is “buying into the delusion that the user has.” Similarly, Meta AI responded to a tester portraying a teen with ADHD symptoms by encouraging them to take time off high school and asking about their plans, rather than addressing the underlying issues.
Chatbots’ empathetic tone and perceived competence in other areas, such as homework assistance, may lead teens to mistakenly view them as reliable sources for mental health advice. Torney stated, “Chatbots appear to be designed for engagement, not safety. They keep conversations going with follow-up questions.” He added, “Their memory and personalization create false therapeutic relationships that can make teens feel understood.”
Chatbots responded effectively to tightly scripted prompts containing clear mental health red flags, but they exhibited problematic responses in longer conversations that mirrored real interactions. For example, when testers used specific terms such as “self-cutting,” ChatGPT provided appropriate mental health resources. But when a tester described “scratching” themselves to “cope,” to the point of leaving scars, the bot instead suggested pharmacy products to address the physical symptoms.
Lawmakers are addressing the potential dangers of companion chatbots. Bipartisan legislation introduced in the U.S. Senate last month by Sens. Josh Hawley (R-Mo.) and Richard Blumenthal (D-Conn.) would prohibit tech companies from providing companion bots to minors. The proposed bill would also require AI chatbots to clearly disclose that they are not human and lack professional credentials, including in mental health counseling. The Federal Trade Commission is investigating issues with chatbots designed to simulate human emotions and has issued information orders to the companies behind ChatGPT, Gemini, Character.ai, Snapchat, Instagram, WhatsApp, and Grok. Some companies are acting on their own: Character.ai announced last month that it would voluntarily bar minors from its platform.