Can AI really reason about cause and effect? A new study puts LLMs to the test

The researchers compared human reasoning with four LLMs—GPT-3.5, GPT-4o, Claude-3, and Gemini-Pro—using collider graphs, a classic test in causal inference

By Kerem Gülen
February 17, 2025
in Research

A new study from New York University and the University of Tübingen, led by Hanna M. Dettki, Brenden M. Lake, Charley M. Wu, and Bob Rehder, asks whether AI reasons about causes as humans do or merely relies on statistical patterns. Their paper, “Do Large Language Models Reason Causally Like Us? Even Better?”, probes four popular models—GPT-3.5, GPT-4o, Claude-3, and Gemini-Pro—to see whether they grasp genuine causal structure or simply mimic human language.

How the study tested causal reasoning in AI

The researchers compared human reasoning with four LLMs—GPT-3.5, GPT-4o, Claude-3, and Gemini-Pro—using collider graphs, a classic test in causal inference. Participants (both human and AI) were asked to evaluate the likelihood of an event given certain causal relationships. The core question: do LLMs reason causally in the same way humans do, or do they follow a different logic?
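
To make the setup concrete, here is a minimal sketch of a collider graph of the kind the study describes. Everything in it is an illustrative assumption rather than the paper's actual stimuli: the variable names (C1, C2, E), the noisy-OR parameterization, and all numeric priors and strengths are invented, and `query` is a hypothetical helper that returns the normative answer probability theory prescribes.

```python
from itertools import product

# Collider graph C1 -> E <- C2: two independent causes with a common effect.
# All numbers below are illustrative assumptions, not the study's stimuli.
P_C1 = 0.3          # prior probability that cause 1 is present
P_C2 = 0.3          # prior probability that cause 2 is present
S1, S2 = 0.8, 0.8   # causal strengths under a noisy-OR parameterization
LEAK = 0.05         # probability of the effect when neither cause is present

def p_effect(c1: int, c2: int) -> float:
    """Noisy-OR: the effect is absent only if the leak and every active cause fail."""
    p_no_e = 1 - LEAK
    if c1:
        p_no_e *= 1 - S1
    if c2:
        p_no_e *= 1 - S2
    return 1 - p_no_e

def joint(c1: int, c2: int, e: int) -> float:
    """Joint probability of one full assignment to the three variables."""
    p = (P_C1 if c1 else 1 - P_C1) * (P_C2 if c2 else 1 - P_C2)
    pe = p_effect(c1, c2)
    return p * (pe if e else 1 - pe)

def query(target: str, evidence: dict) -> float:
    """P(target = 1 | evidence), computed by enumerating the eight possible worlds."""
    num = den = 0.0
    for c1, c2, e in product((0, 1), repeat=3):
        world = {"C1": c1, "C2": c2, "E": e}
        if any(world[k] != v for k, v in evidence.items()):
            continue
        p = joint(c1, c2, e)
        den += p
        if world[target]:
            num += p
    return num / den
```

Judgments like the ones participants gave ("how likely is cause 1, given that the effect occurred?") correspond to calls such as `query("C1", {"E": 1})`, which the study can compare against both human answers and model answers.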


Key findings: AI can reason but not like humans

The results revealed a spectrum of causal reasoning among AI models.

  • GPT-4o and Claude-3 showed the most normative reasoning, meaning they followed probability theory more closely than human participants.
  • Gemini-Pro and GPT-3.5, on the other hand, displayed more associative reasoning, relying on statistical patterns rather than strict causal logic.
  • All models exhibited biases, deviating from the expected independence of causes. Claude-3 was the least biased, however, adhering most closely to mathematical causal norms.

Interestingly, humans often apply heuristics that deviate from strict probability theory, such as the “explaining away” effect: once an effect is observed, learning that one cause is present makes the other cause seem less likely. While the AI models recognized this effect, their responses varied significantly with training data and context.
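
In the illustrative collider model sketched earlier, explaining away falls directly out of the math: conditioning on the effect raises the probability of a cause, and additionally observing the rival cause lowers it again. The numbers below follow from the assumed parameters, not from the paper.

```python
# Explaining away in the illustrative collider above:
print(query("C1", {}))                  # 0.30  prior on cause 1
print(query("C1", {"E": 1}))            # ~0.57 effect observed: cause 1 becomes more likely
print(query("C1", {"E": 1, "C2": 1}))   # ~0.34 rival cause also present: cause 1 "explained away"
```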

AI vs. human reasoning: A fundamental difference

One of the most intriguing insights from the study is that LLMs don’t just mimic human reasoning—they approach causality differently. Unlike humans, whose judgments remained relatively stable across different contexts, AI models adjusted their reasoning depending on domain knowledge (e.g., economics vs. sociology).

  • GPT-4o, in particular, treated causal links as deterministic, assuming that certain causes always produce specific effects.
  • Humans, by contrast, factor in uncertainty, acknowledging that causal relationships are not always absolute.

This suggests that while AI can be more precise in certain structured tasks, it lacks the flexibility of human thought when dealing with ambiguous or multi-causal situations.
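
The deterministic-versus-graded contrast can also be stated in the same toy model. A reasoner who treats "C1 causes E" as deterministic answers 1.0 to the predictive query, while the normative noisy-OR answer stays below it; again, the specific number is an artifact of the assumed parameters.

```python
# Predictive query P(E | C1 present) in the illustrative model above:
print(query("E", {"C1": 1}))   # ~0.86: a cause raises, but does not guarantee, its effect
# A deterministic reading of the causal link would answer 1.0 here,
# the kind of pattern the study attributes to GPT-4o.
```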

Why this matters for AI in decision-making

The study reveals an important limitation: LLMs may not generalize causal knowledge beyond their training data without strong guidance. This has critical implications for deploying AI in real-world decision-making, from medical diagnoses to economic forecasting.

LLMs might outperform humans in probability-based inference, but their reasoning remains fundamentally different, often lacking the intuitive, adaptive logic humans use in everyday problem-solving.

In other words, AI can reason about causality—but not quite like us.


Featured image credit: Kerem Gülen/Ideogram

Tags: AI, Featured, LLM
