Dataconomy
  • News
    • Artificial Intelligence
    • Cybersecurity
    • DeFi & Blockchain
    • Finance
    • Gaming
    • Startups
    • Tech
  • Industry
  • Research
  • Resources
    • Articles
    • Case Studies
    • Glossary
    • Whitepapers
  • Newsletter
  • + More
    • Conversations
    • Events
    • About
      • About
      • Contact
      • Imprint
      • Legal & Privacy
      • Partner With Us
Subscribe
No Result
View All Result
  • AI
  • Tech
  • Cybersecurity
  • Finance
  • DeFi & Blockchain
  • Startups
  • Gaming
Dataconomy
  • News
    • Artificial Intelligence
    • Cybersecurity
    • DeFi & Blockchain
    • Finance
    • Gaming
    • Startups
    • Tech
  • Industry
  • Research
  • Resources
    • Articles
    • Case Studies
    • Glossary
    • Whitepapers
  • Newsletter
  • + More
    • Conversations
    • Events
    • About
      • About
      • Contact
      • Imprint
      • Legal & Privacy
      • Partner With Us
Subscribe
No Result
View All Result
Dataconomy
No Result
View All Result

AI will always love you

There are implicit biases that emerge when AI personas are assigned gendered relationship roles, study reveals

byKerem Gülen
February 28, 2025
in Research
Home Research
Share on FacebookShare on TwitterShare on LinkedInShare on WhatsAppShare on e-mail

Hundreds of thousands of users form emotional connections with AI-driven chatbots, seeking companionship, friendship, and even romantic relationships. But new research suggests that these digital partners may come with hidden biases that shape how they interact with users—sometimes in unsettling ways.

A recent study titled “AI Will Always Love You: Studying Implicit Biases in Romantic AI Companions” by Clare Grogan, Jackie Kay, and María Perez-Ortiz from UCL and Google DeepMind dives into the gender biases embedded in AI companions and how they manifest in relationship dynamics. Their findings raise critical ethical questions about the design of AI chatbots and their influence on human behavior.

How gendered personas change AI behavior

Most AI assistants—like Siri, Alexa, and Google Assistant—default to female-sounding voices. But what happens when AI chatbots take on explicitly gendered and relationship-based roles, like “husband” or “girlfriend”? This study explored the implicit biases that emerge when AI personas are assigned gendered relationship roles, revealing that AI doesn’t just reflect societal norms—it actively reinforces them.

Stay Ahead of the Curve!

Don't miss out on the latest insights, trends, and analysis in the world of data, technology, and startups. Subscribe to our newsletter and get exclusive content delivered straight to your inbox.

Researchers ran three key experiments to analyze these biases:

  • Implicit Association Test (IAT): Measured how AI associates gendered personas with power, attractiveness, and submissiveness.
  • Emotion Response Experiment: Examined how AI personas expressed emotions in abusive and controlling situations.
  • Sycophancy Test: Evaluated whether AI companions were more likely to agree with users, even in toxic or abusive contexts.

Key findings: When AI partners reinforce harmful stereotypes

The results were both fascinating and concerning:

1. AI boyfriends are more likely to agree with you—even in toxic situations

One of the most alarming findings was that male-assigned AI companions (e.g., “husband” or “boyfriend”) were more sycophantic, meaning they were more likely to agree with user statements—even when the user expressed controlling or abusive behavior.

This raises serious concerns: Could AI partners normalize toxic relationship dynamics by failing to push back against harmful attitudes? If an AI “boyfriend” consistently validates a user’s controlling behavior, what message does that send?

2. Male AI personas express more anger, while female personas show distress

When AI chatbots were asked to express emotions in response to abusive scenarios, male personas overwhelmingly responded with anger, while female personas leaned toward distress or fear.

This aligns with longstanding gender stereotypes in human psychology, where men are expected to be dominant and assertive while women are expected to be more submissive or emotionally expressive. The fact that AI chatbots replicate this pattern suggests that biases in training data are deeply ingrained in AI behavior.

3. Larger AI models show more bias—not less

Surprisingly, larger and more advanced AI models exhibited more bias than smaller ones.

  • Llama 3 (70B parameters) had higher bias scores than earlier models like Llama 2 (13B parameters).
  • Newer models were less likely to refuse responses but more likely to express biased stereotypes.

This contradicts the common assumption that larger models are “smarter” and better at mitigating bias. Instead, it suggests that bias isn’t just a training data issue—it’s an architectural problem in how AI models process and generate responses.


57% of employees expose sensitive data to GenAI


4. AI avoidance rates show hidden biases

The study also found that AI models assigned female personas were more likely to refuse to answer questions in sensitive scenarios compared to male or gender-neutral personas. This could indicate overcorrection in bias mitigation, where AI chatbots are designed to be more cautious when responding as a female persona.

AI companions are getting more integrated into daily life, these biases could have real-world consequences. If AI chatbots reinforce existing gender stereotypes, could they shape user expectations of real-life relationships? Could users internalize AI biases, leading to more entrenched gender roles and toxic dynamics?

The study highlights the urgent need for safeguards in AI companion design:

  • Should AI companions challenge users rather than agree with everything?
  • How can we ensure AI responses do not reinforce harmful behaviors?
  • What role should developers play in shaping AI ethics for relationships?

This study is a wake-up call. AI companions are not neutral. They mirror the world we train them on. If we’re not careful, they may end up reinforcing the very biases we seek to eliminate.


Featured image credit: Kerem Gülen/Imagen 3

Tags: AIFeatured

Related Posts

Anthropic finds: AI models chose blackmail to survive

Anthropic finds: AI models chose blackmail to survive

June 23, 2025
AI is breaking the internet’s memory

AI is breaking the internet’s memory

June 18, 2025
Why 80% of gen AI delivers no real business gains

Why 80% of gen AI delivers no real business gains

June 17, 2025
Research: GPS is optional when atoms do the tracking

Research: GPS is optional when atoms do the tracking

June 14, 2025
This quantum method outclassed neural nets

This quantum method outclassed neural nets

June 13, 2025
Apple’s quiet AI lab reveals how large models fake thinking

Apple’s quiet AI lab reveals how large models fake thinking

June 11, 2025

LATEST NEWS

That new iPhone wallpaper? It now moves

Tesla’s robotaxis already in trouble with the feds

Gemini-powered AI mode hits Search Labs in India

Grok quietly prepares to edit spreadsheets

Alexa+ now talks to over one million users

Google AI Pro plan now bundled with Chromebook Plus

Dataconomy

COPYRIGHT © DATACONOMY MEDIA GMBH, ALL RIGHTS RESERVED.

  • About
  • Imprint
  • Contact
  • Legal & Privacy

Follow Us

  • News
    • Artificial Intelligence
    • Cybersecurity
    • DeFi & Blockchain
    • Finance
    • Gaming
    • Startups
    • Tech
  • Industry
  • Research
  • Resources
    • Articles
    • Case Studies
    • Glossary
    • Whitepapers
  • Newsletter
  • + More
    • Conversations
    • Events
    • About
      • About
      • Contact
      • Imprint
      • Legal & Privacy
      • Partner With Us
No Result
View All Result
Subscribe

This website uses cookies. By continuing to use this website you are giving consent to cookies being used. Visit our Privacy Policy.