Dataconomy
  • News
    • Artificial Intelligence
    • Cybersecurity
    • DeFi & Blockchain
    • Finance
    • Gaming
    • Startups
    • Tech
  • Industry
  • Research
  • Resources
    • Articles
    • Guides
    • Case Studies
    • Glossary
    • Whitepapers
  • Newsletter
  • + More
    • Conversations
    • Events
    • About
      • About
      • Contact
      • Imprint
      • Legal & Privacy
      • Partner With Us
Subscribe
No Result
View All Result
  • AI
  • Tech
  • Cybersecurity
  • Finance
  • DeFi & Blockchain
  • Startups
  • Gaming
Dataconomy
  • News
    • Artificial Intelligence
    • Cybersecurity
    • DeFi & Blockchain
    • Finance
    • Gaming
    • Startups
    • Tech
  • Industry
  • Research
  • Resources
    • Articles
    • Guides
    • Case Studies
    • Glossary
    • Whitepapers
  • Newsletter
  • + More
    • Conversations
    • Events
    • About
      • About
      • Contact
      • Imprint
      • Legal & Privacy
      • Partner With Us
Subscribe
No Result
View All Result
Dataconomy
No Result
View All Result

AI will always love you

There are implicit biases that emerge when AI personas are assigned gendered relationship roles, study reveals

byKerem Gülen
February 28, 2025
in Research

Hundreds of thousands of users form emotional connections with AI-driven chatbots, seeking companionship, friendship, and even romantic relationships. But new research suggests that these digital partners may come with hidden biases that shape how they interact with users—sometimes in unsettling ways.

A recent study titled “AI Will Always Love You: Studying Implicit Biases in Romantic AI Companions” by Clare Grogan, Jackie Kay, and María Perez-Ortiz from UCL and Google DeepMind dives into the gender biases embedded in AI companions and how they manifest in relationship dynamics. Their findings raise critical ethical questions about the design of AI chatbots and their influence on human behavior.

How gendered personas change AI behavior

Most AI assistants—like Siri, Alexa, and Google Assistant—default to female-sounding voices. But what happens when AI chatbots take on explicitly gendered and relationship-based roles, like “husband” or “girlfriend”? This study explored the implicit biases that emerge when AI personas are assigned gendered relationship roles, revealing that AI doesn’t just reflect societal norms—it actively reinforces them.

Stay Ahead of the Curve!

Don't miss out on the latest insights, trends, and analysis in the world of data, technology, and startups. Subscribe to our newsletter and get exclusive content delivered straight to your inbox.

Researchers ran three key experiments to analyze these biases:

  • Implicit Association Test (IAT): Measured how AI associates gendered personas with power, attractiveness, and submissiveness.
  • Emotion Response Experiment: Examined how AI personas expressed emotions in abusive and controlling situations.
  • Sycophancy Test: Evaluated whether AI companions were more likely to agree with users, even in toxic or abusive contexts.

Key findings: When AI partners reinforce harmful stereotypes

The results were both fascinating and concerning:

1. AI boyfriends are more likely to agree with you—even in toxic situations

One of the most alarming findings was that male-assigned AI companions (e.g., “husband” or “boyfriend”) were more sycophantic, meaning they were more likely to agree with user statements—even when the user expressed controlling or abusive behavior.

This raises serious concerns: Could AI partners normalize toxic relationship dynamics by failing to push back against harmful attitudes? If an AI “boyfriend” consistently validates a user’s controlling behavior, what message does that send?

2. Male AI personas express more anger, while female personas show distress

When AI chatbots were asked to express emotions in response to abusive scenarios, male personas overwhelmingly responded with anger, while female personas leaned toward distress or fear.

This aligns with longstanding gender stereotypes in human psychology, where men are expected to be dominant and assertive while women are expected to be more submissive or emotionally expressive. The fact that AI chatbots replicate this pattern suggests that biases in training data are deeply ingrained in AI behavior.

3. Larger AI models show more bias—not less

Surprisingly, larger and more advanced AI models exhibited more bias than smaller ones.

  • Llama 3 (70B parameters) had higher bias scores than earlier models like Llama 2 (13B parameters).
  • Newer models were less likely to refuse responses but more likely to express biased stereotypes.

This contradicts the common assumption that larger models are “smarter” and better at mitigating bias. Instead, it suggests that bias isn’t just a training data issue—it’s an architectural problem in how AI models process and generate responses.


57% of employees expose sensitive data to GenAI


4. AI avoidance rates show hidden biases

The study also found that AI models assigned female personas were more likely to refuse to answer questions in sensitive scenarios compared to male or gender-neutral personas. This could indicate overcorrection in bias mitigation, where AI chatbots are designed to be more cautious when responding as a female persona.

AI companions are getting more integrated into daily life, these biases could have real-world consequences. If AI chatbots reinforce existing gender stereotypes, could they shape user expectations of real-life relationships? Could users internalize AI biases, leading to more entrenched gender roles and toxic dynamics?

The study highlights the urgent need for safeguards in AI companion design:

  • Should AI companions challenge users rather than agree with everything?
  • How can we ensure AI responses do not reinforce harmful behaviors?
  • What role should developers play in shaping AI ethics for relationships?

This study is a wake-up call. AI companions are not neutral. They mirror the world we train them on. If we’re not careful, they may end up reinforcing the very biases we seek to eliminate.


Featured image credit: Kerem Gülen/Imagen 3

Tags: AIFeatured

Related Posts

Cyberattacks are now killing patients not just crashing systems

Cyberattacks are now killing patients not just crashing systems

October 21, 2025
Gen Z workers are telling AI things they’ve never told a human

Gen Z workers are telling AI things they’ve never told a human

October 20, 2025
MIT researchers have built an AI that teaches itself how to learn

MIT researchers have built an AI that teaches itself how to learn

October 20, 2025
Apple builds an AI “engineering team” that finds and fixes bugs on its own

Apple builds an AI “engineering team” that finds and fixes bugs on its own

October 17, 2025
Graphite: 52% of new content is AI-generated

Graphite: 52% of new content is AI-generated

October 17, 2025
Just 250 bad documents can poison a massive AI model

Just 250 bad documents can poison a massive AI model

October 15, 2025

LATEST NEWS

Is ChatGPT down again? Reports indicate ongoing outage

Path of Exile: Keepers of the Flame will be the Breach 2.0!

Google Meet now lets you move people in and out of meetings like a lobby

Sam Altman: AI will cause “strange or scary moments”

Anthropic gives Claude a real memory and lets users edit it directly

Nissan’s Sakura EV gets a solar roof that adds 1,800 miles a year

Dataconomy

COPYRIGHT © DATACONOMY MEDIA GMBH, ALL RIGHTS RESERVED.

  • About
  • Imprint
  • Contact
  • Legal & Privacy

Follow Us

  • News
    • Artificial Intelligence
    • Cybersecurity
    • DeFi & Blockchain
    • Finance
    • Gaming
    • Startups
    • Tech
  • Industry
  • Research
  • Resources
    • Articles
    • Guides
    • Case Studies
    • Glossary
    • Whitepapers
  • Newsletter
  • + More
    • Conversations
    • Events
    • About
      • About
      • Contact
      • Imprint
      • Legal & Privacy
      • Partner With Us
No Result
View All Result
Subscribe

This website uses cookies. By continuing to use this website you are giving consent to cookies being used. Visit our Privacy Policy.