Can AI really reason about cause and effect? A new study puts LLMs to the test

The researchers compared human reasoning with four LLMs—GPT-3.5, GPT-4o, Claude-3, and Gemini-Pro—using collider graphs, a classic test in causal inference

by Kerem Gülen
February 17, 2025
in Research

A new study from New York University and the University of Tübingen, led by Hanna M. Dettki, Brenden M. Lake, Charley M. Wu, and Bob Rehder, asks whether AI can reason about causes as humans do, or whether it merely relies on patterns. Their paper, “Do Large Language Models Reason Causally Like Us? Even Better?”, probes four popular models—GPT-3.5, GPT-4o, Claude-3, and Gemini-Pro—to see whether they grasp complex causal structures or simply mimic human language.

How the study tested causal reasoning in AI

The researchers compared human reasoning with four LLMs—GPT-3.5, GPT-4o, Claude-3, and Gemini-Pro—using collider graphs, a classic test in causal inference. Participants (both human and AI) were asked to evaluate the likelihood of an event given certain causal relationships. The core question: do LLMs reason causally in the same way humans do, or do they follow a different logic?
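
The collider setup is easy to make concrete. Below is a minimal Python sketch of such a graph, C1 → E ← C2, using a noisy-OR parameterization, a common choice in the causal-inference literature; the base rate, causal strength, and leak values are illustrative assumptions, not parameters taken from the paper.

```python
from itertools import product

# A minimal sketch of the collider structure C1 -> E <- C2 used in the study.
# All numeric parameters below are illustrative assumptions.
P_C = 0.3        # assumed prior P(Ci = 1) for each cause
STRENGTH = 0.8   # assumed probability that a present cause produces E
LEAK = 0.1       # assumed probability of E when neither cause is present

def p_effect(c1: int, c2: int) -> float:
    """Noisy-OR likelihood P(E=1 | C1=c1, C2=c2)."""
    return 1 - (1 - LEAK) * (1 - STRENGTH) ** (c1 + c2)

# Full joint distribution over (C1, C2, E); the causes are independent a priori.
joint = {
    (c1, c2, e): (P_C if c1 else 1 - P_C)
                 * (P_C if c2 else 1 - P_C)
                 * (p_effect(c1, c2) if e else 1 - p_effect(c1, c2))
    for c1, c2, e in product([0, 1], repeat=3)
}

assert abs(sum(joint.values()) - 1.0) < 1e-9  # sanity check: probabilities sum to 1
print(f"P(E=1 | C1=1, C2=0) = {p_effect(1, 0):.2f}")  # 0.82 under these assumptions
```

Judgments like the printed one are exactly what participants, human and machine, were asked to produce; the normative answers follow mechanically from the joint distribution.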


Key findings: AI can reason but not like humans

The results revealed a spectrum of causal reasoning among AI models.

  • GPT-4o and Claude-3 showed the most normative reasoning, meaning they followed probability theory more closely than human participants.
  • Gemini-Pro and GPT-3.5, on the other hand, displayed more associative reasoning, relying on statistical patterns rather than strict causal logic.
  • All models exhibited biases, deviating from the expected independence of the two causes (checked in the sketch below). However, Claude-3 was the least biased, adhering most closely to mathematical causal norms.
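
The independence property at stake in that last point can be verified directly: in a collider, the two causes should be statistically independent as long as the effect is unobserved. A short sketch, reusing the illustrative parameters from above (again assumptions, not the paper's stimuli):

```python
from itertools import product

# Checking the normative independence of causes in a collider C1 -> E <- C2.
P_C, STRENGTH, LEAK = 0.3, 0.8, 0.1  # illustrative assumptions

def joint(c1: int, c2: int, e: int) -> float:
    p_e = 1 - (1 - LEAK) * (1 - STRENGTH) ** (c1 + c2)  # noisy-OR
    return ((P_C if c1 else 1 - P_C) * (P_C if c2 else 1 - P_C)
            * (p_e if e else 1 - p_e))

def prob(query, given=lambda c1, c2, e: True):
    """P(query | given) by brute-force enumeration of the 8 states."""
    states = list(product([0, 1], repeat=3))
    den = sum(joint(*s) for s in states if given(*s))
    return sum(joint(*s) for s in states if query(*s) and given(*s)) / den

# With E unobserved, learning C2 should not move beliefs about C1:
print(f"P(C1=1)        = {prob(lambda c1, c2, e: c1 == 1):.3f}")  # 0.300
print(f"P(C1=1 | C2=1) = {prob(lambda c1, c2, e: c1 == 1,
                                lambda c1, c2, e: c2 == 1):.3f}")  # 0.300
```

A model whose two answers differ is exhibiting exactly the independence violation the study measured.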

Interestingly, human judgments often lean on heuristics and deviate from strict probability theory. A key benchmark here is the “explaining away” effect: once an effect is observed, confirming one cause reduces the likelihood of the other. While the AI models recognized this effect, their responses varied significantly with training data and context.
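
The effect can be reproduced with the same illustrative collider: conditioned on the effect alone, a cause is fairly probable, and additionally conditioning on the other cause pulls that probability back down. (The numbers follow from the assumed parameters above, not from the paper's stimuli.)

```python
from itertools import product

# "Explaining away" in the illustrative collider C1 -> E <- C2.
P_C, STRENGTH, LEAK = 0.3, 0.8, 0.1  # illustrative assumptions

def joint(c1: int, c2: int, e: int) -> float:
    p_e = 1 - (1 - LEAK) * (1 - STRENGTH) ** (c1 + c2)  # noisy-OR
    return ((P_C if c1 else 1 - P_C) * (P_C if c2 else 1 - P_C)
            * (p_e if e else 1 - p_e))

def prob(query, given):
    states = list(product([0, 1], repeat=3))
    den = sum(joint(*s) for s in states if given(*s))
    return sum(joint(*s) for s in states if query(*s) and given(*s)) / den

# Seeing the effect makes C1 likely; also seeing C2 "explains it away".
print(f"P(C1=1 | E=1)       = {prob(lambda c1, c2, e: c1,
                                     lambda c1, c2, e: e):.3f}")          # ~0.539
print(f"P(C1=1 | E=1, C2=1) = {prob(lambda c1, c2, e: c1,
                                     lambda c1, c2, e: e and c2):.3f}")   # ~0.335
```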

AI vs. human reasoning: A fundamental difference

One of the most intriguing insights from the study is that LLMs don’t just mimic human reasoning—they approach causality differently. Unlike humans, whose judgments remained relatively stable across different contexts, AI models adjusted their reasoning depending on domain knowledge (e.g., economics vs. sociology).

  • GPT-4o, in particular, treated causal links as deterministic, assuming that certain causes always produce specific effects.
  • Humans, by contrast, factor in uncertainty, acknowledging that causal relationships are not always absolute.

This suggests that while AI can be more precise in certain structured tasks, it lacks the flexibility of human thought when dealing with ambiguous or multi-causal situations.
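
One way to picture the contrast: in a noisy-OR model, a deterministic reasoner in effect sets the causal strength to 1 with no background leak, while a probabilistic reasoner leaves room for the link to fail. A hedged sketch, where the specific numbers are assumptions rather than measurements from the study:

```python
def p_effect(c1: int, c2: int, strength: float, leak: float) -> float:
    """Noisy-OR P(E=1 | C1, C2) with equal-strength causes."""
    return 1 - (1 - leak) * (1 - strength) ** (c1 + c2)

# Deterministic reading of a causal link (GPT-4o-like, per the study):
# a present cause guarantees the effect.
print(p_effect(1, 0, strength=1.0, leak=0.0))  # 1.0 -- no residual uncertainty

# Probabilistic reading (human-like): the cause usually, but not always, works.
print(p_effect(1, 0, strength=0.8, leak=0.1))  # 0.82 -- uncertainty retained
```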

Why this matters for AI in decision-making

The study reveals an important limitation: LLMs may not generalize causal knowledge beyond their training data without strong guidance. This has critical implications for deploying AI in real-world decision-making, from medical diagnoses to economic forecasting.

LLMs might outperform humans in probability-based inference, but their reasoning remains fundamentally different, often lacking the intuitive, adaptive logic humans use in everyday problem-solving.

In other words, AI can reason about causality—but not quite like us.


Featured image credit: Kerem Gülen/Ideogram

Tags: AI, Featured, LLM
