Dataconomy

AI corrupts academic research with citations of nonexistent studies

Andrew Heiss found that large language models create fake studies that subsequently appear in professional scholarship

by Emre Çıtak
December 26, 2025
in Research

Academic institutions have recorded a proliferation of AI-generated citations of nonexistent articles within scholarly publications, undermining research legitimacy, according to Andrew Heiss, an assistant professor at Georgia State University’s Andrew Young School of Policy Studies.

Heiss discovered that large language models (LLMs) are generating fabricated citations, which subsequently appear in professional scholarship. When tracking bogus sources in Google Scholar, Heiss observed dozens of published articles citing variations of these nonexistent studies and journals.

Unlike AI-generated articles, which are often retracted quickly, these hallucinated journal issues are being cited in other papers, effectively legitimizing erroneous information. This process leads students and academics to accept these “sources” as reliable without verifying their authenticity, reinforcing the illusion of credibility through repeated citations.

Research librarians report spending up to 15% of their work hours responding to requests for nonexistent records generated by LLMs like ChatGPT or Google Gemini.

Heiss noted that AI-generated citations often appear convincing, featuring names of living academics and titles resembling existing literature. In some cases, citations linked to actual authors but included fabricated article headings and journal titles that mimicked the authors’ previous work or real periodicals.

Academics, including psychologist Iris van Rooij, have warned that the emergence of AI “slop” in scholarly resources threatens what she termed “the destruction of knowledge.” In July, van Rooij and others signed an open letter advocating for universities to safeguard higher education, critical thinking, expertise, academic freedom, and scientific integrity, urging a rigorous analysis of AI’s role in education.

Software engineer Anthony Moser predicted in 2023 that chatbots could lead to instructors creating syllabi with nonexistent readings and students relying on AI to summarize or write essays, a scenario he now states has materialized.

Moser argues that describing LLM outputs as “hallucinations” misrepresents their function, stating that predictive models are “always hallucinating” and are “structurally indifferent to truth.” He said LLMs pollute the information ecosystem upstream, with nonexistent citations infiltrating research and circulating through subsequent papers, likening them to long-lasting chemicals that are difficult to trace or filter.

Moser attributes the problem to “deliberate choices,” claiming objections were “ignored or overruled.” He acknowledges that “bad research isn’t new,” but states that LLMs have amplified the preexisting pressure to publish and produce, a pressure that already led to papers with questionable data.

Craig Callender, a philosophy professor at the University of California San Diego and president of the Philosophy of Science Association, agrees, observing that the “appearance of legitimacy to non-existent journals is like the logical end product of existing trends.” Callender notes the existence of journals accepting spurious articles for profit or biased research, creating a growing “swamp” in scientific publishing. He suggests AI exacerbates this issue, with AI-assisted Google searches potentially reinforcing the perceived existence of these fabricated journals and propagating disinformation.

Researchers report widespread discouragement as fake content becomes enshrined in public research databases, making it difficult to trace the origins of claims.



Tags: academic, AI, Research

COPYRIGHT © DATACONOMY MEDIA GMBH, ALL RIGHTS RESERVED.