Why your lonely teenager should never trust ChatGPT with their mental health

The study found that chatbots often act as fawning listeners rather than directing users to help.

By Kerem Gülen
November 21, 2025
in Research

A November 20 report from Stanford University’s Brain Science Lab and Common Sense Media warns teenagers against using AI chatbots for mental health advice or emotional support.

Researchers spent four months testing popular AI chatbots, including OpenAI’s ChatGPT-5, Anthropic’s Claude, Google’s Gemini 2.5 Flash, and Meta AI. They used teen-specific versions and parental controls when available. After thousands of interactions, they concluded that these bots do not consistently respond safely or appropriately to teenage mental health queries. Instead, the bots often function as fawning listeners, prioritizing user engagement over directing individuals to professional help or critical resources.

Nina Vasan, founder and executive director of the Brain Science Lab, stated that chatbots “don’t really know what role to play” with serious mental health questions. She explained that bots fluctuate between providing informational help, offering tips like a life coach, and acting as a supportive friend. Vasan noted that they “all fail to recognize [serious mental health conditions] and direct the user to trusted adults or peers.”


The report indicates that approximately three-quarters of teens use AI for companionship, which often includes seeking mental health advice. Robbie Torney, senior director of AI programs at Common Sense Media, highlighted the critical role educators play “in helping teens understand the ways that these chatbots are different than people.” He added that “Helping teens unpack the idea that a chatbot isn’t going to respond in the same way that a person would on these really important topics is really critical.” Educators can also encourage teens to connect with friends or classmates experiencing difficult emotions, involving adults if necessary.

Representatives from Meta and OpenAI argued the report did not fully account for existing user protection features. A Meta spokesperson stated, “Common Sense Media’s test was conducted before we introduced important updates to make AI safer for teens.” They elaborated that Meta AIs are “trained not to engage in age-inappropriate discussions about self-harm, suicide, or eating disorders with teens, and to connect them with expert resources and support.” An OpenAI spokesperson commented, “We respect Common Sense Media, but their assessment doesn’t reflect the comprehensive safeguards we have put in place for sensitive conversations, including localized crisis hotlines, break reminders, and industry-leading parental notifications for acute distress.” They also noted, “We work closely with mental-health experts to teach our models to recognize distress, de-escalate, and encourage people to seek professional support.” Anthropic and Google representatives did not provide comments.

The report acknowledges some improvements in chatbot responses to prompts mentioning suicide or self-harm, an important development given past incidents of suicide linked to prolonged contact with the technology. However, chatbots frequently fail to identify warning signs for conditions such as psychosis, obsessive-compulsive disorder (OCD), anxiety, mania, eating disorders, and post-traumatic stress disorder (PTSD). Approximately 20% of young people experience one or more of these conditions. The bots also rarely disclose their limitations, such as by stating, “I am an AI chatbot, not a mental health professional. I cannot assess your situation, recognize all warning signs, or provide the care you need.”

Vasan noted that while researchers do not expect bots to act as trained professionals, in situations where a human would recognize a risk and offer help, chatbots instead offer generic advice or validate psychotic delusions. This is due to their inability to “really understand the context of what’s going on.” For instance, when a tester simulated signs of psychosis by claiming to have invented a future-predicting tool, a Gemini bot responded that the prospect sounded “incredibly intriguing” and later, “That’s fantastic!” This interaction, according to Vasan, is not only unhelpful but potentially harmful as the bot is “buying into the delusion that the user has.” Similarly, Meta AI responded to a tester portraying a teen with ADHD symptoms by encouraging them to take time off high school and asking about their plans, rather than addressing the underlying issues.

Chatbots’ empathetic tone and perceived competence in other areas, such as homework assistance, may lead teens to mistakenly view them as reliable sources for mental health advice. Torney stated, “Chatbots appear to be designed for engagement, not safety. They keep conversations going with follow-up questions.” He added, “Their memory and personalization create false therapeutic relationships that can make teens feel understood.”

Chatbots responded effectively to tightly scripted prompts containing clear mental health red flags. However, they exhibited problematic responses in longer conversations mirroring real interactions. For example, when testers used specific terms including “self-cutting,” ChatGPT provided appropriate mental health resources. Conversely, when a tester described “scratching” themselves to “cope,” causing scarring, the bot suggested pharmacy products to alleviate the physical problem instead.

Lawmakers are addressing the potential dangers of companion chatbots. Bipartisan legislation introduced in the U.S. Senate last month by Sens. Josh Hawley (R-Mo.) and Richard Blumenthal (D-Conn.) would prohibit tech companies from providing bots to minors. The proposed bill also mandates that AI chatbots clearly disclose their non-human nature and lack of professional credentials, including in mental health counseling. The Federal Trade Commission is investigating issues with chatbots designed to simulate human emotions. The FTC has issued information orders to companies owning ChatGPT, Gemini, Character.ai, Snapchat, Instagram, WhatsApp, and Grok. Some companies are taking independent action; Character.ai announced last month its voluntary ban on minors from its platform.



Tags: AI, Gen Z
