Academic institutions have recorded a proliferation of AI-generated citations of nonexistent articles within scholarly publications, undermining research legitimacy, according to Andrew Heiss, an assistant professor at Georgia State University’s Andrew Young School of Policy Studies.
Heiss discovered that large language models (LLMs) are generating fabricated citations that subsequently appear in professional scholarship. While tracking these bogus sources in Google Scholar, he observed dozens of published articles citing variations of the nonexistent studies and journals.
Unlike AI-generated articles, which are often retracted quickly, these hallucinated journal issues are being cited in other papers, effectively legitimizing erroneous information. This process leads students and academics to accept these “sources” as reliable without verifying their authenticity, reinforcing the illusion of credibility through repeated citations.
Research librarians report spending up to 15% of their work hours responding to requests for nonexistent records generated by LLMs like ChatGPT or Google Gemini.
Heiss noted that AI-generated citations often appear convincing, featuring the names of living academics and titles resembling existing literature. In some cases, citations named real authors but paired them with fabricated article and journal titles that mimicked the authors’ previous work or real periodicals.
Academics, including psychologist Iris van Rooij, have warned that the emergence of AI “slop” in scholarly resources threatens what she termed “the destruction of knowledge.” In July, van Rooij and others signed an open letter advocating for universities to safeguard higher education, critical thinking, expertise, academic freedom, and scientific integrity, urging a rigorous analysis of AI’s role in education.
Software engineer Anthony Moser predicted in 2023 that chatbots could lead to instructors creating syllabi with nonexistent readings and students relying on AI to summarize or write essays, a scenario he now states has materialized.
Moser argues that describing LLM outputs as “hallucinations” misrepresents their function, stating that predictive models are “always hallucinating” and are “structurally indifferent to truth.” He says LLMs pollute the information ecosystem upstream: nonexistent citations infiltrate research and circulate through subsequent papers, which he likens to long-lasting chemicals that are difficult to trace or filter.
Moser attributes the problem to “deliberate choices,” claiming objections were “ignored or overruled.” He acknowledges that “bad research isn’t new,” but argues that LLMs have amplified the preexisting pressure to publish and produce, a pressure that was already yielding papers with questionable data.
Craig Callender, a philosophy professor at the University of California San Diego and president of the Philosophy of Science Association, agrees, observing that the “appearance of legitimacy to non-existent journals is like the logical end product of existing trends.” Callender notes that some journals already accept spurious articles for profit or publish biased research, creating a growing “swamp” in scientific publishing. He suggests AI exacerbates the problem, with AI-assisted Google searches potentially reinforcing the perceived existence of these fabricated journals and propagating disinformation.
Researchers report widespread discouragement as fake content becomes enshrined in public research databases, making it difficult to trace the origins of claims.