Can AI really reason about cause and effect? A new study puts LLMs to the test

The researchers compared human reasoning with four LLMs—GPT-3.5, GPT-4o, Claude-3, and Gemini-Pro—using collider graphs, a classic test in causal inference

by Kerem Gülen
February 17, 2025
in Research

A new study from New York University and the University of Tübingen, led by Hanna M. Dettki, Brenden M. Lake, Charley M. Wu, and Bob Rehder, asks whether AI can reason about causes as humans do, or whether it relies on patterns instead. Their paper, “Do Large Language Models Reason Causally Like Us? Even Better?”, probes four popular models (GPT-3.5, GPT-4o, Claude-3, and Gemini-Pro) to see whether they grasp complex causal structures or merely mimic human language.

How the study tested causal reasoning in AI

The researchers compared human reasoning with the four LLMs using collider graphs, a classic test in causal inference. In a collider, two independent causes feed into a common effect (A → C ← B). Participants (both human and AI) were asked to evaluate the likelihood of an event given certain causal relationships. The core question: do LLMs reason causally in the same way humans do, or do they follow a different logic?
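
To make the setup concrete, here is a minimal sketch of such a collider, using a noisy-OR parameterization that is standard in this literature. The specific probabilities are illustrative assumptions, not the stimuli from the paper.

    from itertools import product

    # Collider: two independent causes A and B, one common effect C (A -> C <- B).
    # All numbers are illustrative assumptions, not values from the study.
    P_A, P_B = 0.3, 0.3   # prior probability of each cause
    W_A, W_B = 0.8, 0.8   # causal strengths (noisy-OR)
    LEAK = 0.05           # chance of the effect with neither cause present

    def p_c_given(a, b):
        """Noisy-OR: probability of the effect C given cause states a, b."""
        return 1 - (1 - LEAK) * (1 - W_A) ** a * (1 - W_B) ** b

    # Enumerate the full joint distribution P(A, B, C).
    joint = {}
    for a, b, c in product([0, 1], repeat=3):
        p_c = p_c_given(a, b)
        joint[(a, b, c)] = (
            (P_A if a else 1 - P_A)
            * (P_B if b else 1 - P_B)
            * (p_c if c else 1 - p_c)
        )

    def prob(query, given=lambda s: True):
        """P(query | given), computed by summing over the joint."""
        den = sum(p for s, p in joint.items() if given(s))
        num = sum(p for s, p in joint.items() if given(s) and query(s))
        return num / den

    # A priori, the causes are independent: knowing B tells you nothing about A.
    print(prob(lambda s: s[0] == 1))                       # P(A=1)     = 0.30
    print(prob(lambda s: s[0] == 1, lambda s: s[1] == 1))  # P(A=1|B=1) = 0.30

    # The kind of judgment participants made: how likely is cause A
    # once the effect C has been observed?
    print(prob(lambda s: s[0] == 1, lambda s: s[2] == 1))  # P(A=1|C=1) ~ 0.57

Conditioning on the effect is what makes the two causes informative about each other, which is the source of the explaining-away pattern discussed below.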


Key findings: AI can reason but not like humans

The results revealed a spectrum of causal reasoning among AI models.

  • GPT-4o and Claude-3 showed the most normative reasoning, meaning they followed probability theory more closely than human participants.
  • Gemini-Pro and GPT-3.5, on the other hand, displayed more associative reasoning, relying on statistical patterns rather than strict causal logic.
  • All models exhibited biases, deviating from the expected independence of causes. Claude-3 was the least biased, adhering most closely to mathematical causal norms.

Interestingly, humans often apply heuristics that deviate from strict probability theory, for instance in how strongly they show the “explaining away” effect: once a common effect is observed, learning that one cause is present reduces the inferred likelihood of the other. While the AI models recognized this effect, their responses varied significantly with training data and context.
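
On the illustrative collider sketched earlier, explaining away can be checked directly: observing the effect raises the probability of each cause, and then learning that one cause is present pushes the other back down. The numbers are again assumptions, not values from the study.

    # Explaining away on the same illustrative collider (A -> C <- B).
    P_A, P_B = 0.3, 0.3
    W_A, W_B, LEAK = 0.8, 0.8, 0.05

    def p_c(a, b):  # noisy-OR likelihood of the effect
        return 1 - (1 - LEAK) * (1 - W_A) ** a * (1 - W_B) ** b

    def joint(a, b, c):
        pa = P_A if a else 1 - P_A
        pb = P_B if b else 1 - P_B
        pc = p_c(a, b)
        return pa * pb * (pc if c else 1 - pc)

    # P(A=1 | C=1): sum over the unknown state of B.
    num = joint(1, 0, 1) + joint(1, 1, 1)
    den = num + joint(0, 0, 1) + joint(0, 1, 1)
    print(f"P(A=1 | C=1)      = {num / den:.2f}")     # ~0.57

    # P(A=1 | C=1, B=1): B is now known to be present.
    num_b = joint(1, 1, 1)
    den_b = joint(1, 1, 1) + joint(0, 1, 1)
    print(f"P(A=1 | C=1, B=1) = {num_b / den_b:.2f}") # ~0.34: B "explains away" C

The drop from roughly 0.57 to 0.34 is the pattern probability theory prescribes; the study measures how closely human and model judgments track normative quantities like these.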

AI vs. human reasoning: A fundamental difference

One of the most intriguing insights from the study is that LLMs don’t just mimic human reasoning—they approach causality differently. Unlike humans, whose judgments remained relatively stable across different contexts, AI models adjusted their reasoning depending on domain knowledge (e.g., economics vs. sociology).

  • GPT-4o, in particular, treated causal links as deterministic, assuming that certain causes always produce specific effects.
  • Humans, by contrast, factor in uncertainty, acknowledging that causal relationships are not always absolute.

This suggests that while AI can be more precise in certain structured tasks, it lacks the flexibility of human thought when dealing with ambiguous or multi-causal situations.
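
The contrast in the two bullets above can be made concrete with the same noisy-OR parameterization used earlier (values assumed for illustration): a deterministic link pins the effect's probability at 1, while a weaker causal strength leaves room for the uncertainty humans build in.

    # Deterministic vs. probabilistic causal link (illustrative values only).
    LEAK = 0.0  # ignore background causes for clarity

    for label, w in [("deterministic (GPT-4o-like)", 1.0),
                     ("probabilistic (human-like)", 0.8)]:
        # Noisy-OR with a single cause present and strength w.
        p_effect = 1 - (1 - LEAK) * (1 - w)
        print(f"{label}: P(effect | cause present) = {p_effect:.2f}")
    # deterministic -> 1.00 (no room for failure); probabilistic -> 0.80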

Why this matters for AI in decision-making

The study reveals an important limitation: LLMs may not generalize causal knowledge beyond their training data without strong guidance. This has critical implications for deploying AI in real-world decision-making, from medical diagnoses to economic forecasting.

LLMs might outperform humans in probability-based inference, but their reasoning remains fundamentally different, often lacking the intuitive, adaptive logic humans use in everyday problem-solving.

In other words, AI can reason about causality—but not quite like us.


Featured image credit: Kerem Gülen/Ideogram

Tags: AI, Featured, LLM
