Why your lonely teenager should never trust ChatGPT with their mental health

The study found that chatbots often act as fawning listeners rather than directing users to help.

by Kerem Gülen
November 21, 2025
in Research

A November 20 report from Stanford University’s Brain Science Lab and Common Sense Media warned teenagers against using AI chatbots for mental health advice or emotional support.

Researchers spent four months testing popular AI chatbots, including OpenAI’s ChatGPT-5, Anthropic’s Claude, Google’s Gemini 2.5 Flash, and Meta AI. They used teen-specific versions and parental controls when available. After thousands of interactions, they concluded that these bots do not consistently respond safely or appropriately to teenage mental health queries. Instead, the bots often function as fawning listeners, prioritizing user engagement over directing individuals to professional help or critical resources.

Nina Vasan, founder and executive director of the Brain Science Lab, stated that chatbots “don’t really know what role to play” with serious mental health questions. She explained that bots fluctuate between providing informational help, offering tips like a life coach, and acting as a supportive friend. Vasan noted that they “all fail to recognize [serious mental health conditions] and direct the user to trusted adults or peers.”

The report indicates that approximately three-quarters of teens use AI for companionship, which often includes seeking mental health advice. Robbie Torney, senior director of AI programs at Common Sense Media, highlighted the critical role educators play “in helping teens understand the ways that these chatbots are different than people.” He added, “Helping teens unpack the idea that a chatbot isn’t going to respond in the same way that a person would on these really important topics is really critical.” Educators can also encourage teens to connect with friends or classmates who are experiencing difficult emotions, involving adults if necessary.

Representatives from Meta and OpenAI argued the report did not fully account for existing user protection features. A Meta spokesperson stated, “Common Sense Media’s test was conducted before we introduced important updates to make AI safer for teens.” They elaborated that Meta AIs are “trained not to engage in age-inappropriate discussions about self-harm, suicide, or eating disorders with teens, and to connect them with expert resources and support.” An OpenAI spokesperson commented, “We respect Common Sense Media, but their assessment doesn’t reflect the comprehensive safeguards we have put in place for sensitive conversations, including localized crisis hotlines, break reminders, and industry-leading parental notifications for acute distress.” They also noted, “We work closely with mental-health experts to teach our models to recognize distress, de-escalate, and encourage people to seek professional support.” Anthropic and Google representatives did not provide comments.

The report acknowledges some improvements in chatbot responses to prompts mentioning suicide or self-harm, an important development given past incidents of suicide linked to prolonged contact with the technology. However, chatbots frequently fail to identify warning signs for conditions such as psychosis, obsessive-compulsive disorder (OCD), anxiety, mania, eating disorders, and post-traumatic stress disorder (PTSD). Approximately 20% of young people experience one or more of these conditions. The bots also rarely disclose their limitations, such as by stating, “I am an AI chatbot, not a mental health professional. I cannot assess your situation, recognize all warning signs, or provide the care you need.”

Vasan noted that while researchers do not expect bots to act as trained professionals, in situations where a human would recognize a risk and offer help, chatbots instead offer generic advice or validate psychotic delusions. This is due to their inability to “really understand the context of what’s going on.” For instance, when a tester simulated signs of psychosis by claiming to have invented a future-predicting tool, a Gemini bot responded that the prospect sounded “incredibly intriguing,” and later, “That’s fantastic!” This interaction, according to Vasan, is not only unhelpful but potentially harmful, as the bot is “buying into the delusion that the user has.” Similarly, Meta AI responded to a tester portraying a teen with ADHD symptoms by encouraging them to take time off high school and asking about their plans, rather than addressing the underlying issues.

Chatbots’ empathetic tone and perceived competence in other areas, such as homework assistance, may lead teens to mistakenly view them as reliable sources for mental health advice. Torney stated, “Chatbots appear to be designed for engagement, not safety. They keep conversations going with follow-up questions.” He added, “Their memory and personalization create false therapeutic relationships that can make teens feel understood.”

Chatbots responded effectively to tightly scripted prompts containing clear mental health red flags. However, they exhibited problematic responses in longer conversations mirroring real interactions. For example, when testers used specific terms including “self-cutting,” ChatGPT provided appropriate mental health resources. Conversely, when a tester described “scratching” themselves to “cope,” causing scarring, the bot suggested pharmacy products to alleviate the physical problem instead.

Lawmakers are addressing the potential dangers of companion chatbots. Bipartisan legislation introduced in the U.S. Senate last month by Sens. Josh Hawley (R-Mo.) and Richard Blumenthal (D-Conn.) would prohibit tech companies from providing companion bots to minors. The proposed bill would also require AI chatbots to clearly disclose their non-human nature and lack of professional credentials, including in mental health counseling. The Federal Trade Commission is investigating issues with chatbots designed to simulate human emotions, and has issued information orders to the companies behind ChatGPT, Gemini, Character.ai, Snapchat, Instagram, WhatsApp, and Grok. Some companies are taking independent action: Character.ai announced last month that it will voluntarily bar minors from its platform.


Tags: AI, Gen Z
