Dataconomy

Study shows AI summaries kill motivation to check sources

Participants using ChatGPT perceived they learned less than those using Google Search.

by Aytun Çelebi
December 2, 2025
in Research

Relying on large language models (LLMs) to summarize information may diminish knowledge acquisition, according to a recent study involving over 10,000 participants.

Marketing professors Jin Ho Yun and Shiri Melumad co-authored a paper detailing this finding across seven studies. Participants were tasked with learning about a topic, such as vegetable gardening, using either an LLM like ChatGPT or a standard Google search. The researchers placed no restrictions on how long or how extensively participants could use their assigned tool.

Participants subsequently wrote advice for a friend based on their learned information. Data consistently showed those who used LLMs for learning perceived they learned less and invested less effort in advice creation. Their advice was shorter, less factual, and more generic.


An independent sample of readers rated the LLM-derived advice as less informative and less helpful, and they were less likely to adopt it. These differences persisted across various contexts.

One experiment controlled for differences in the breadth of information encountered by exposing participants to identical facts from both Google and ChatGPT searches. Another held the search platform constant—Google—while varying whether participants learned from standard Google results or Google’s AI Overviews feature. Even with facts and platform standardized, learning from synthesized LLM responses produced shallower knowledge than gathering, interpreting, and synthesizing information via standard web links.

The study attributes this diminished learning to reduced active engagement. Google searches involve more “friction,” requiring navigation, reading, interpretation, and synthesis of various web links, which fosters deeper mental representation. LLMs perform this process for the user, shifting learning from active to passive.

Researchers do not advocate for avoiding LLMs given their benefits in other contexts. Instead, they suggest users become more strategic by understanding where LLMs are beneficial or harmful to their goals. For quick, factual answers, LLMs are suitable. However, for developing deep, generalizable knowledge, relying solely on LLM syntheses is less effective.

A further experiment used a specialized GPT model that provided real-time web links alongside its synthesized responses. Participants who received an LLM summary were not motivated to explore the original sources, and again developed shallower knowledge than those using standard Google. Future research will explore generative AI tools that introduce “healthy frictions” to encourage active learning beyond easily synthesized answers, particularly in secondary education.


Tags: AI, Research

