AI made executives worse at stock picking

A new HBR experiment shows ChatGPT nudged executives into more optimistic and less accurate stock predictions than human peer groups.

by Aytun Çelebi
July 2, 2025
in Research

More than 300 managers and executives participated in an HBR study, which found that those who consulted ChatGPT made less accurate stock predictions and showed greater optimism and overconfidence than those who engaged in peer discussions.

The experiment design

The experiment began by showing participants a recent stock price chart for Nvidia (NVDA), chosen because its share price had risen sharply on the back of its integral role in powering AI technologies. Each participant was first asked to make an individual, private forecast of Nvidia’s stock price one month into the future.

Following their initial forecasts, participants were randomly divided into two distinct groups:

  • The control group: These executives conversed in small groups, relying solely on human interaction to share thoughts and information, emulating traditional decision-making processes.
  • The treatment group: Executives in this group consulted ChatGPT about Nvidia’s stock but were explicitly instructed not to communicate with any peers, representing an AI-assisted decision process.

After their respective consultation periods, all participants submitted a revised forecast for Nvidia’s stock price one month ahead.
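
To make the design concrete, here is a minimal sketch in Python of the core comparison this setup enables: the average forecast revision per group, and whether the revision helped or hurt accuracy once the actual price was known. The record layout and all numbers are hypothetical illustrations, not data from the HBR study.

```python
import statistics

# Hypothetical records: (group, initial forecast, revised forecast, actual price one month later).
# All values are illustrative placeholders, not the study's data.
records = [
    ("chatgpt", 118.0, 124.5, 112.0),
    ("chatgpt", 120.0, 125.0, 112.0),
    ("peers",   119.0, 116.5, 112.0),
    ("peers",   121.0, 118.0, 112.0),
]

def mean_revision(group: str) -> float:
    """Average change from initial to revised forecast (positive = more optimistic)."""
    return statistics.mean(post - pre for g, pre, post, _ in records if g == group)

def mean_error_change(group: str) -> float:
    """Average change in absolute forecast error (positive = the revision hurt accuracy)."""
    return statistics.mean(
        abs(post - actual) - abs(pre - actual)
        for g, pre, post, actual in records
        if g == group
    )

for group in ("chatgpt", "peers"):
    print(group, mean_revision(group), mean_error_change(group))
```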

Key findings: Optimism, inaccuracy, and overconfidence

The study’s findings indicated that AI consultation led to more optimistic forecasts. While both groups had similar baseline expectations, the ChatGPT group elevated their one-month price estimates by approximately $5.11 on average after consultation.

In contrast, peer discussions resulted in more conservative forecasts, with the group lowering their price estimates by approximately $2.20 on average.

A critical finding was that AI consultation negatively impacted prediction accuracy. After a one-month waiting period, the analysis revealed:

  • Those who had consulted ChatGPT made predictions that were less accurate after their consultation than before.
  • Executives who engaged in peer discussions made significantly more accurate predictions following their consultation.

AI consultation also contributed to increased overconfidence. Consulting ChatGPT significantly heightened participants’ propensity to offer pinpoint predictions (forecasts with decimals), an indicator of overconfidence. Conversely, the peer discussion group became less likely to use pinpoint predictions, indicating a decrease in overconfidence.
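
One plausible way to operationalize the pinpoint-prediction measure, assuming "pinpoint" simply means a forecast with a non-zero decimal part, is a check like the sketch below; the rule and the sample forecasts are assumptions for illustration, not the authors' exact coding.

```python
def is_pinpoint(forecast: float) -> bool:
    """Flag forecasts with cents-level precision (e.g. 127.43) as 'pinpoint';
    round-number forecasts (e.g. 130.0) are not."""
    return forecast != int(forecast)

# Share of pinpoint forecasts in a hypothetical set of revised predictions.
forecasts = [130.0, 127.43, 125.0, 118.75]
share = sum(is_pinpoint(f) for f in forecasts) / len(forecasts)
print(f"pinpoint share: {share:.0%}")  # -> pinpoint share: 50%
```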

Why the disparity? Five key factors

The disparity in outcomes can be attributed to five key factors:

  • Extrapolation bias and “trend riding”: Trained on historical data that captured Nvidia’s rising stock, ChatGPT likely encouraged extrapolation bias, the assumption that past upward trends would continue, with no real-time context to temper it.
  • Authority bias and detail overload: Many executives were impressed by the AI’s detailed and confident responses, leading to an “AI authority bias” where they gave more weight to the AI’s suggestions than their own judgment.
  • Absence of emotion in AI: The AI lacks human emotions like wariness or a “fear of heights” when observing a soaring price chart. This lack of a cautious “gut-check” allowed bolder, unchallenged forecasts.
  • Peer calibration and social dynamics: Peer discussions introduced diverse viewpoints and a “don’t be the sucker” mentality, where individuals moderated extreme views to avoid appearing naive, leading to a more conservative consensus.
  • The illusion of knowledge: Access to a sophisticated tool like ChatGPT gave participants an “illusion of knowing everything,” making them more susceptible to overconfidence.

Guidance for leaders and organizations

The study’s findings offer important guidance for integrating AI tools into decision-making:

  • Retain and leverage human discussion: An optimal approach may involve combining AI input with human dialogue. Use an AI for a preliminary assessment, then convene a team to debate its findings.
  • Critical thinking is fundamental: Treat AI as a starting point for inquiry, not the definitive answer. Question its data sources and potential blind spots.
  • Train teams and establish guidelines: Communicate that AI might foster overconfidence. Institutionalize a blend of AI and human input to protect against unbalanced influences.

The study acknowledges its limitations, including its controlled setting, focus on a single stock (Nvidia), and the use of a ChatGPT model without real-time market data. These factors suggest that results might vary in different contexts or with different AI tools.

