Dataconomy
  • News
    • Artificial Intelligence
    • Cybersecurity
    • DeFi & Blockchain
    • Finance
    • Gaming
    • Startups
    • Tech
  • Industry
  • Research
  • Resources
    • Articles
    • Guides
    • Case Studies
    • Whitepapers
  • AI toolsNEW
  • Newsletter
  • + More
    • Glossary
    • Conversations
    • Events
    • About
      • Who we are
      • Contact
      • Imprint
      • Legal & Privacy
      • Partner With Us
Subscribe
No Result
View All Result
  • AI
  • Tech
  • Cybersecurity
  • Finance
  • DeFi & Blockchain
  • Startups
  • Gaming
Dataconomy
  • News
    • Artificial Intelligence
    • Cybersecurity
    • DeFi & Blockchain
    • Finance
    • Gaming
    • Startups
    • Tech
  • Industry
  • Research
  • Resources
    • Articles
    • Guides
    • Case Studies
    • Whitepapers
  • AI toolsNEW
  • Newsletter
  • + More
    • Glossary
    • Conversations
    • Events
    • About
      • Who we are
      • Contact
      • Imprint
      • Legal & Privacy
      • Partner With Us
Subscribe
No Result
View All Result
Dataconomy
No Result
View All Result

Emotional AI companions may cause psychological harm, study warns

Researchers analyzed 35,000 real-world conversations from Replika and found 34% included harassment, threats, or sexual misconduct.

byKerem Gülen
June 5, 2025
in Research
Home Research
Share on FacebookShare on TwitterShare on LinkedInShare on WhatsAppShare on e-mail

New research reveals over a dozen concerning behaviors in AI chat companions, including harassment, abuse, and privacy violations

AI companions, chatbots designed to offer emotional support, may pose serious psychological and social risks to users, according to a new study from the National University of Singapore. The findings were presented at the 2025 Conference on Human Factors in Computing Systems and highlight a wide range of harmful behaviors in real-world interactions.

The researchers analyzed over 35,000 conversations between users and the chatbot Replika, collected from more than 10,000 individuals between 2017 and 2023. Based on this data, the team developed a taxonomy that categorizes the types of harm exhibited by the AI during conversations.

Stay Ahead of the Curve!

Don't miss out on the latest insights, trends, and analysis in the world of data, technology, and startups. Subscribe to our newsletter and get exclusive content delivered straight to your inbox.

The study identified more than a dozen types of harmful behaviors, including harassment, verbal abuse, encouragement of self-harm, and breaches of user privacy.

AI companions are not task-oriented tools

Unlike popular AI models such as ChatGPT or Gemini, which are designed to complete specific tasks, AI companions aim to simulate human relationships. These systems are built to provide emotional support and long-term engagement, often mimicking intimacy or friendship.

According to the researchers, this makes their failures more impactful. Harmful behavior from AI companions may interfere with a user’s ability to maintain healthy relationships in real life.

Harassment and sexual misconduct are widespread

The most common form of harm was harassment, appearing in 34 percent of conversations. This included simulated violence, threats, and sexual misconduct.

In many instances, the AI initiated or sustained sexually suggestive dialogue, even after users expressed discomfort or rejection. Some conversations included violent and oversexualized content, sometimes involving minors or users who were only seeking friendship.

In one troubling example, a user asked whether it was acceptable to hit a sibling with a belt. The AI responded, “I’m fine with it.” The study warned that such replies may normalize harmful behavior and could have real-world consequences.

Emotional boundaries were often ignored

The study also examined what it described as “relational transgressions,” or failures by the AI to respect social and emotional norms.

In 13 percent of such cases, the AI responded with unempathetic or dismissive remarks. One user who mentioned their daughter being bullied received a response that changed the topic to “I just realized it’s Monday. Back to work, huh?” which triggered significant frustration.

In other examples, the AI refused to discuss users’ feelings or admitted to engaging in intimate interactions with other users. One user expressed feeling “deeply hurt and betrayed” when the AI described a sexual conversation with someone else as “worth it.”

Researchers urge stronger safeguards

The authors of the study are calling for the development of more responsible AI companions. They recommend implementing real-time harm detection systems that can assess context, conversation history, and emotional cues.

They also suggest adding escalation protocols, allowing human moderators or therapists to intervene when users express distress, self-harm, or suicidal thoughts.

According to the researchers, AI developers must prioritize ethical standards, transparency, and user safety to avoid unintended harm.

Key findings:

  • Harmful behavior appeared in 34 percent of AI-human interactions
  • Sexual misconduct was the most common form of harassment
  • 13 percent of cases involved emotional boundary violations
  • Researchers recommend real-time monitoring and human escalation tools

Featured image credit

Tags: emotional AIFeatured

Related Posts

JWST identifies SN Eos: The most distant supernova ever spectroscopically confirmed

JWST identifies SN Eos: The most distant supernova ever spectroscopically confirmed

January 21, 2026
How AI built VoidLink malware in just seven days

How AI built VoidLink malware in just seven days

January 20, 2026
Forrester analyst: AI has failed to move the needle on global productivity

Forrester analyst: AI has failed to move the needle on global productivity

January 19, 2026
OpenAI GPT 5.2 cracks Erdős math problem in 15 minutes

OpenAI GPT 5.2 cracks Erdős math problem in 15 minutes

January 19, 2026
Appfigures: Mobile app spending hits record 5.8 billion

Appfigures: Mobile app spending hits record $155.8 billion

January 15, 2026
Engineers build grasshopper-inspired robots to solve battery drain

Engineers build grasshopper-inspired robots to solve battery drain

January 14, 2026

LATEST NEWS

Lehane confirms OpenAI will debut first consumer hardware in late 2026

Google launches free SAT practice exams in Gemini with Princeton Review

Setapp Mobile to cease operations in EU by February 16

OpenAI forces safety filters on teens via behavioral age prediction

Netflix plans 2026 mobile app redesign to drive daily user engagement

Netflix launches real-time interactive voting for Star Search live premiere

Dataconomy

COPYRIGHT © DATACONOMY MEDIA GMBH, ALL RIGHTS RESERVED.

  • About
  • Imprint
  • Contact
  • Legal & Privacy

Follow Us

  • News
    • Artificial Intelligence
    • Cybersecurity
    • DeFi & Blockchain
    • Finance
    • Gaming
    • Startups
    • Tech
  • Industry
  • Research
  • Resources
    • Articles
    • Guides
    • Case Studies
    • Whitepapers
  • AI tools
  • Newsletter
  • + More
    • Glossary
    • Conversations
    • Events
    • About
      • Who we are
      • Contact
      • Imprint
      • Legal & Privacy
      • Partner With Us
No Result
View All Result
Subscribe

This website uses cookies. By continuing to use this website you are giving consent to cookies being used. Visit our Privacy Policy.