AI made executives worse at stock picking

A new HBR experiment shows ChatGPT nudged executives into more optimistic and less accurate stock predictions than human peer groups.

by Aytun Çelebi
July 2, 2025
in Research

More than 300 managers and executives took part in an HBR study; those who consulted ChatGPT made less accurate stock predictions and showed greater optimism and overconfidence than peers who relied on group discussion.

The experiment design

The experiment began by showing participants a recent stock price chart for Nvidia (NVDA). Nvidia was chosen for its sharp share-price rise, driven by its central role in powering AI technologies. Each participant first made an individual, private forecast of Nvidia’s stock price one month ahead.

Following their initial forecasts, participants were randomly divided into two distinct groups:

  • The control group: These executives conversed in small groups, relying solely on human interaction to share thoughts and information, emulating traditional decision-making processes.
  • The treatment group: Executives in this group consulted ChatGPT about Nvidia’s stock but were explicitly instructed not to communicate with any peers, representing an AI-assisted decision process.

After their respective consultation periods, all participants submitted a revised forecast for Nvidia’s stock price one month ahead.
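
The underlying design is a standard randomized experiment: a private baseline forecast, random assignment to one of the two conditions, then a revised forecast. Below is a minimal Python sketch of that structure; the class and function names are illustrative assumptions, not the study’s actual instruments.

```python
import random
from dataclasses import dataclass

@dataclass
class Participant:
    initial_forecast: float                # private pre-consultation estimate
    group: str | None = None               # "peer" (control) or "chatgpt" (treatment)
    revised_forecast: float | None = None  # estimate after the consultation period

def assign_groups(participants: list[Participant], seed: int = 0) -> None:
    """Randomly split participants into the peer-discussion control group
    and the ChatGPT-consultation treatment group."""
    rng = random.Random(seed)
    shuffled = participants[:]
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    for p in shuffled[:half]:
        p.group = "peer"
    for p in shuffled[half:]:
        p.group = "chatgpt"
```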

Key findings: Optimism, inaccuracy, and overconfidence

The findings showed that AI consultation led to more optimistic forecasts. While both groups started with similar baseline expectations, the ChatGPT group raised their one-month price estimates by approximately $5.11 on average after consultation.

In contrast, peer discussions resulted in more conservative forecasts, with the group lowering their price estimates by approximately $2.20 on average.

A critical finding was that AI consultation negatively impacted prediction accuracy. After a one-month waiting period, the analysis revealed:

  • Those who had consulted ChatGPT made predictions that were less accurate after their consultation than before.
  • Executives who engaged in peer discussions made significantly more accurate predictions following their consultation.

AI consultation also increased overconfidence. Consulting ChatGPT significantly raised participants’ propensity to offer pinpoint predictions (forecasts stated to the decimal), an indicator of overconfidence. The peer discussion group, by contrast, became less likely to make pinpoint predictions, indicating reduced overconfidence.
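
The three outcome measures described here (forecast revision, post-consultation accuracy, and pinpoint precision as an overconfidence proxy) reduce to simple arithmetic. The sketch below uses hypothetical numbers purely for illustration; they are not the study’s data.

```python
def revision(pre: float, post: float) -> float:
    """Dollar change from initial to revised forecast; positive = more optimistic."""
    return post - pre

def absolute_error(forecast: float, realized: float) -> float:
    """Accuracy: distance from the price actually realized one month later."""
    return abs(forecast - realized)

def is_pinpoint(forecast: float) -> bool:
    """Forecasts stated with decimals (e.g. 137.42 rather than a round 140)
    serve as the overconfidence proxy described in the article."""
    return forecast != round(forecast)

# Hypothetical numbers for illustration only -- not the study's data.
pre, post, realized = 120.0, 125.11, 118.0
print(revision(pre, post))        # ~5.11: became more optimistic
print(absolute_error(post, realized) > absolute_error(pre, realized))
# True: the revised forecast ended up farther from the realized price
print(is_pinpoint(post))          # True: a pinpoint (decimal) prediction
```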

Why the disparity? Five key factors

The disparity in outcomes can be attributed to five key factors:

  • Extrapolation bias and “trend riding”: Trained on historical data showing Nvidia’s rise and lacking real-time context, ChatGPT likely encouraged extrapolation bias, the assumption that past upward trends would simply continue.
  • Authority bias and detail overload: Many executives were impressed by the AI’s detailed, confident responses, producing an “AI authority bias” in which they gave more weight to the AI’s suggestions than to their own judgment.
  • Absence of emotion in AI: The AI lacks human emotions like wariness or a “fear of heights” when observing a soaring price chart. This lack of a cautious “gut-check” allowed bolder, unchallenged forecasts.
  • Peer calibration and social dynamics: Peer discussions introduced diverse viewpoints and a “don’t be the sucker” mentality, where individuals moderated extreme views to avoid appearing naive, leading to a more conservative consensus.
  • The illusion of knowledge: Access to a sophisticated tool like ChatGPT gave participants an “illusion of knowing everything,” making them more susceptible to overconfidence.

Guidance for leaders and organizations

The study’s findings offer important guidance for integrating AI tools into decision-making:

  • Retain and leverage human discussion: An optimal approach may involve combining AI input with human dialogue. Use an AI for a preliminary assessment, then convene a team to debate its findings.
  • Critical thinking is fundamental: Treat AI as a starting point for inquiry, not the definitive answer. Question its data sources and potential blind spots.
  • Train teams and establish guidelines: Communicate that AI might foster overconfidence. Institutionalize a blend of AI and human input to protect against unbalanced influences.

The study acknowledges its limitations, including its controlled setting, focus on a single stock (Nvidia), and the use of a ChatGPT model without real-time market data. These factors suggest that results might vary in different contexts or with different AI tools.

