Dataconomy

Fear of judgment deters women from AI tools

By Emre Çıtak
August 11, 2025
in Research

Researchers conducted a three-part study within a leading global technology company to investigate why adoption of a proprietary AI coding assistant remained low and unequal despite universal access, integration into workflows, and minimal training friction (Competence Penalty and Technology Adoption, 2025).

Unequal adoption despite equal access

The study analyzed digital trace data from 28,698 full-time software engineers between January and December 2024. The proprietary AI assistant, functionally similar to GitHub Copilot, was pre-installed on all company-issued devices, integrated into standard coding workflows, and promoted company-wide. It offered auto-generation and auto-completion of code with one-click activation and had been shown internally to boost productivity by up to 30% (Study 1).

After one year, only 41% of engineers had used the tool at least once. In the first month, adoption was 9% among male engineers and 5% among female engineers, a four-percentage-point gap (χ²(1)=11.34, p<.001, OR=1.07). After 12 months, adoption had risen to 43% for men and 31% for women, widening the gap to 12 percentage points (χ²(1)=172.50, p<.001, OR=1.29). Age differences were smaller, with a gap of three to five percentage points between mature-age (≥32) and younger engineers (month 12: χ²(1)=111.17, p<.001, OR=1.23).
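The group comparisons above rely on Pearson's chi-square test of independence on a 2×2 adoption table. The following stdlib-only sketch shows the mechanics with hypothetical cell counts (the article reports percentages, not the raw counts), so the numbers here are illustrative, not the study's:

```python
# Illustrative 2x2 chi-square test of independence (1 df), the test
# family reported in the study. Counts below are HYPOTHETICAL: 9% of
# 1,000 men vs. 5% of 1,000 women adopting in the first month.

def chi_square_2x2(a, b, c, d):
    """a, b = adopters/non-adopters in group 1; c, d = group 2."""
    n = a + b + c + d
    # Expected counts under independence: row total * column total / n
    expected = [
        (a + b) * (a + c) / n, (a + b) * (b + d) / n,
        (c + d) * (a + c) / n, (c + d) * (b + d) / n,
    ]
    observed = [a, b, c, d]
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

def raw_odds_ratio(a, b, c, d):
    """Ratio of adoption odds between the two groups."""
    return (a / b) / (c / d)

chi2 = chi_square_2x2(90, 910, 50, 950)
or_raw = raw_odds_ratio(90, 910, 50, 950)
print(round(chi2, 2))    # 12.29 with these hypothetical counts
print(round(or_raw, 2))  # 1.88 with these hypothetical counts
```

Note that an odds ratio computed from raw counts, as here, will generally differ from a model-based OR like those the study reports, which may adjust for covariates.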

Among adopters (n=11,897), female engineers sent fewer prompts (median=222) than males (median=327, p<.001, r=0.03) and copied fewer lines of AI-generated code (median=15 vs. 34, p<.001, r=0.07). Mature-age adopters sent fewer prompts (median=240) than younger adopters (median=424, p<.001, r=0.09) and copied fewer lines (median=24 vs. 41, p<.001, r=0.08).

Experimental evidence of competence penalty

Study 2 tested whether using AI for coding incurs a perceived competence penalty, particularly for groups already subject to competence scrutiny. A pre-registered experiment used a 2×2 design: purported AI usage (yes/no) × competence scrutiny (female engineers as high-scrutiny, male engineers as low-scrutiny). Participants (n=1,026, 513 female, median age=31) reviewed identical Python code snippets attributed to either a male or female engineer, with or without AI assistance.

Purported AI usage reduced competence ratings from a mean of 6.80 (no AI) to 6.18 (AI) on an 11-point scale (F(1,1022)=25.51, p<.001, d=-0.31). Female engineers were rated less competent than males overall (M=6.21 vs. 6.77, F(1,1022)=20.40, p<.001, d=-0.28), with a larger penalty for females using AI (drop of 0.83 points, d=-0.42) compared to males (drop of 0.41 points, d=-0.21).
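The effect sizes above are Cohen's d, the mean difference divided by the pooled standard deviation. A minimal sketch, assuming a pooled SD of 2.0 rating points (the article reports means and d but not SDs; 2.0 is an assumption consistent with the reported figures):

```python
# Cohen's d: standardized mean difference between two conditions.
# The pooled SD of 2.0 is an ASSUMPTION for illustration only;
# the article does not report standard deviations.

def cohens_d(mean_a, mean_b, pooled_sd):
    return (mean_a - mean_b) / pooled_sd

# Competence ratings on an 11-point scale: AI vs. no-AI condition
d = cohens_d(6.18, 6.80, pooled_sd=2.0)
print(round(d, 2))  # -0.31, matching the reported effect size
```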

AI usage did not significantly affect perceived work quality (M=7.02 vs. 6.90, p=.265). However, participants attributed less contribution to female engineers in AI-assisted conditions (mean=35%) compared to males (40%, p=.010, d=-0.23), and controlling for contribution did not eliminate the gender difference in competence ratings.

Non-adopters penalized AI usage more than adopters (non-adopters: drop of 0.90 points, p<.001; adopters: no significant change, p=.826). Among male non-adopters, the penalty for female engineers using AI was especially severe (drop of 1.79 points, p<.001, d=-0.89).

Anticipated penalty deters adoption

Study 3 surveyed 919 engineers (439 female, median age=30) to measure anticipated competence penalty alongside other adoption factors such as perceived learning cost and intrinsic task motivation. Agreement with the statement “If my manager knows that I am using AI for coding, it will decrease their evaluation of my coding ability” was rated on a 7-point scale.

Higher anticipated competence penalty predicted lower AI adoption (B=-0.27, SE=0.06, p<.001, OR=0.76), including when controlling for other perceptions (B=-0.26, SE=0.08, p=.001, OR=0.77). Adoption was 61% among those anticipating minimal penalty versus 33% among those with high anticipated penalty (>4 on the scale).
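The reported odds ratios follow directly from the logistic-regression coefficients: OR = exp(B). A short sketch using the reported B and SE; the Wald-type 95% interval computed here is implied by those values but is not stated in the article itself:

```python
import math

# Converting a logistic-regression coefficient to an odds ratio.
# B and SE are the reported values; the confidence interval is a
# Wald-type interval implied by them, not reported in the article.

B, SE = -0.27, 0.06
or_penalty = math.exp(B)
ci_low = math.exp(B - 1.96 * SE)
ci_high = math.exp(B + 1.96 * SE)
print(round(or_penalty, 2))  # 0.76, matching the reported OR
print(round(ci_low, 2), round(ci_high, 2))
```

An OR of 0.76 means each one-point increase in anticipated penalty multiplies the odds of adoption by about 0.76, i.e., cuts them by roughly a quarter.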

Female engineers anticipated more penalty than males (M=1.96 vs. 1.75, p=.009, d=0.17), and mature-age engineers anticipated more than younger ones (M=1.96 vs. 1.77, p=.021, d=0.15). Mediation analyses indicated anticipated penalty partially explained both gender and age adoption gaps (95% CI: gender [-0.11, -0.01], age [-0.12, -0.01]).

The findings identify a paradox: technologies designed to enhance performance can become liabilities for users if they trigger competence penalties. Because these penalties are greater for groups under existing competence scrutiny, they can reinforce or widen workplace inequalities. Even with equal access, adoption rates remain unequal, and identical work output may still receive lower competence evaluations when associated with AI usage by scrutinized groups.

Mandatory disclosure of AI usage, while promoting transparency, may discourage adoption or impose unequal penalties across demographic groups. Addressing the competence penalty requires shifting evaluation criteria toward actual work quality and framing AI adoption as enhancing rather than replacing human capability.

COPYRIGHT © DATACONOMY MEDIA GMBH, ALL RIGHTS RESERVED.
