Introduction

Librarians across America are facing an unprecedented challenge that would have seemed like science fiction just a few years ago: patrons are increasingly asking for books that simply don’t exist. These phantom titles aren’t the result of faulty memory or miscommunication, but rather the byproduct of AI systems confidently recommending books they’ve hallucinated into existence.

Reference librarian Eddie Kristan has been fielding these requests since late 2022, but the problem escalated dramatically over the summer when multiple patrons began asking for identical fake book titles. The culprit turned out to be an AI-generated summer reading list that had been published in special editions of major newspapers, including the Chicago Sun-Times and The Philadelphia Inquirer, without any fact-checking of the AI’s recommendations.

This growing phenomenon represents more than just an inconvenience for library staff. It reveals a fundamental problem with how people are beginning to trust AI systems over human expertise, potentially undermining decades of progress in information literacy education.

The Scale of the AI Hallucination Problem

The issue extends far beyond a few confused library patrons. According to Alison Macrina, executive director of Library Freedom Project, early results from a recent survey of librarians indicate that AI tools are fundamentally changing how patrons interact with library services. The survey reveals that patrons are becoming increasingly trusting of their preferred AI tools while simultaneously growing more skeptical of human librarians.

“Librarians are reporting this overall atmosphere of confusion and lack of trust they’re experiencing from their patrons,” Macrina explains. “They’re seeing patrons having seemingly diminished critical thinking and curiosity. They’re definitely running into some of these psychosis and other mental health issues, and certainly seeing the people who are more widely adopting AI also being those who have less digital literacy about it.”

The problem manifests in multiple ways. Librarians report being treated more like robots in online reference chats, with patrons expecting instant, definitive answers rather than engaging in the traditional collaborative research process. When librarians explain that an AI-recommended book doesn’t exist, some patrons become defensive, trusting their AI source over professional librarians with years of training in information science.

Kristan has developed a systematic approach to handle these requests. First, he searches the library’s catalog. If the item isn’t there, he checks WorldCat, the global library catalog. If a book isn’t in WorldCat but claims to be a traditional publication, it raises immediate red flags. “Not being in WorldCat might mean it’s something that isn’t catalogued like a zine, a broadcast, or something ephemeral, but if it’s parading as a traditional book and doesn’t have an entry in the collective library catalog, it might be AI,” he explains.
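As a rough sketch of that triage logic in code (the lookup helpers below are hypothetical stubs, not real catalog or WorldCat API calls):

```python
# A minimal sketch of the triage workflow described above. The two
# lookup helpers are hypothetical stand-ins: in practice they would
# query the library's own catalog and a union catalog like WorldCat.

def found_in_local_catalog(title: str, author: str) -> bool:
    return False  # stub: wire up to the library's catalog search

def found_in_worldcat(title: str, author: str) -> bool:
    return False  # stub: wire up to a WorldCat lookup

def triage_request(title: str, author: str) -> str:
    if found_in_local_catalog(title, author):
        return "in our collection"
    if found_in_worldcat(title, author):
        return "real book; try interlibrary loan"
    # Absence from WorldCat can mean uncatalogued ephemera (zines,
    # broadcasts), but a supposed traditional publication with no
    # entry anywhere is a red flag for an AI hallucination.
    return "suspect: verify with the publisher or the author's own site"

print(triage_request("Tidewater Dreams", "Isabel Allende"))
```

The ordering matters: cheap checks come first, and only a title that fails both lookups triggers the slower manual detective work.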

The Broader Impact on Information Ecosystems

The infiltration of AI-generated content into libraries represents a microcosm of a larger problem plaguing information systems worldwide. Companies rushing AI products to market have created tools that confidently present false information, and these tools are being used by content creators who often don’t fact-check the output before publication.

The summer reading list incident perfectly illustrates this chain of misinformation. A freelancer used AI to generate book recommendations and published them without verification; major newspapers then distributed the list to thousands of readers. The result was multiple library patrons requesting the same non-existent titles attributed to real authors, creating confusion and extra work for library staff across multiple locations.

This isn’t limited to book recommendations. AI systems are hallucinating everything from academic papers to news articles to historical events. The problem is compounded by the fact that these hallucinations often seem plausible, mixing real elements with fictional ones in convincing ways.

Academic librarian Jaime Taylor from the University of Massachusetts has observed how vendors are rushing to implement AI features in library systems, often with poor results. She describes two main types of problematic AI integration: natural language search systems that claim to understand user intent but don’t actually work better than traditional Boolean searches, and AI-generated summaries that often provide inaccurate information by incorrectly parsing source materials.

The Technology Behind the Problem

Understanding why AI systems hallucinate helps explain why the problem is so persistent. Large language models, like those underlying ChatGPT, are trained to predict the most likely next word in a sequence based on patterns in their training data. They don’t actually “know” whether information is true or false; they simply generate text that is statistically plausible given those patterns.
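To see what “statistically plausible” means in practice, here is a toy illustration (the candidate words and their scores are invented, not taken from any real model): scores become probabilities, a word is sampled, and no step consults a source of truth.

```python
import math
import random

# Toy next-word prediction: invented scores ("logits") for candidate
# words are converted to probabilities and sampled. Nothing here checks
# whether the resulting text is true, only how plausible it sounds.
candidate_scores = {"novel": 2.1, "memoir": 1.4, "cookbook": 0.3}

def softmax(scores: dict) -> dict:
    exps = {word: math.exp(s) for word, s in scores.items()}
    total = sum(exps.values())
    return {word: e / total for word, e in exps.items()}

probs = softmax(candidate_scores)
next_word = random.choices(list(probs), weights=list(probs.values()))[0]
print(probs, "->", next_word)  # plausibility, not veracity
```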

When an AI system is asked for book recommendations, it might combine elements it has seen before: real author names, common book themes, and typical title structures. The result can be a perfectly plausible-sounding book that never actually existed. The AI presents this information with the same confidence it would use for factual information, making it difficult for users to distinguish between real and hallucinated content.
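A toy version of that recombination, with real author names but invented title fragments, shows how easily a plausible fake emerges:

```python
import random

# Real authors, invented title parts: fluent recombination produces a
# convincing citation for a book that was never written.
real_authors = ["Isabel Allende", "Min Jin Lee", "Andy Weir"]
patterns = ["The {noun} of {place}", "{noun} Season", "The Last {noun}"]
nouns = ["Tide", "Orchard", "Cartographer"]
places = ["Valparaíso", "the Salt Flats"]

title = random.choice(patterns).format(
    noun=random.choice(nouns), place=random.choice(places)
)
print(f'"{title}" by {random.choice(real_authors)}')
```

Every component is familiar; only the combination is fictional, which is exactly why such titles sail past casual scrutiny.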

This fundamental limitation becomes particularly problematic when AI tools are used for information discovery and research. Unlike search engines that point to existing web content, AI systems generate new text that may or may not correspond to reality. Users often don’t understand this distinction, treating AI outputs as authoritative information rather than as starting points that require verification.

The problem is exacerbated by competitive pressure on AI companies to make their systems seem as helpful and comprehensive as possible. A model that admits uncertainty or frequently says “I don’t know” can feel less useful, creating an incentive for developers to train systems that answer confidently even when they shouldn’t.

Fighting Back Against Digital Misinformation

Libraries and librarians are adapting to this new reality in various ways. Collection development librarians are working with digital book vendors like OverDrive, Hoopla, and CloudLibrary to identify and remove AI-generated content that has made its way into their catalogs. This is a largely manual process in which subject specialists must vet titles without being able to read every single book.

Some librarians are turning off AI features in their systems when possible. Taylor notes that while she can currently disable some AI tools, she expects vendors will make this increasingly difficult as they seek to demonstrate usage statistics that justify their AI investments. “We are trying to teach how to construct useful, exact searching,” she explains. “But really, these products’ intent is to make that not happen.”

The situation presents a particular challenge for academic libraries trying to teach research skills. Students are simultaneously being exposed to AI tools that promise easy answers and library systems that require more sophisticated search strategies. When the AI tools fall short and students haven’t learned traditional search techniques, they end up with “crap results” and no path to improvement.

Libraries are also implementing new educational programs focused on AI literacy. These programs teach patrons how to evaluate AI-generated content, understand the limitations of AI systems, and verify information through multiple sources. However, this education competes with the marketing messages from tech companies that position AI as infallible and comprehensive.

The Human Cost of Automated Misinformation

Beyond the technical and procedural challenges, there’s a human element to this crisis that’s often overlooked. Librarians entered their profession to help people find accurate information and develop critical thinking skills. The proliferation of AI-generated misinformation undermines these core professional values while creating additional work and stress for library staff.

Kristan describes the frustration of spending time helping patrons understand that their AI source was wrong, time that could have been spent on more productive research assistance. “It’s really, really frustrating, and it’s really setting us back as far as the community’s info literacy,” he notes.

The problem also reflects broader societal trends around the devaluation of expertise. When people trust AI systems over trained professionals, it represents more than just a technology adoption issue. It suggests a fundamental shift in how society values human knowledge and professional judgment.

For library patrons, the confusion created by AI hallucinations can be genuinely distressing. Someone looking for a book that was recommended to them may feel frustrated or embarrassed when told it doesn’t exist. They may question their own memory or feel misled by technology they trusted.

Looking Toward Solutions

Addressing the AI hallucination problem in libraries requires action at multiple levels. AI companies need to be more transparent about the limitations of their systems and implement better safeguards against confident-sounding misinformation. Content creators and publishers need to fact-check AI-generated content before publication. And users need better education about how AI systems work and when to be skeptical of their outputs.

Some technical solutions are being developed. AI systems could be designed to express uncertainty more clearly, flagging when they’re generating speculative content rather than recalling factual information. Better integration with verified databases could help AI systems distinguish between real and imagined references. Machine learning techniques are also being explored to help AI systems recognize when they’re likely to be hallucinating.
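As a sketch of what such grounding could look like (the tiny in-memory set below stands in for a verified bibliographic database, and the flagged title is one of the fabricated recommendations reportedly on the summer list):

```python
# Grounding sketch: only surface AI book recommendations that match a
# record in a trusted catalog. The set below is a stand-in for a real
# bibliographic database such as a library ILS or union catalog.
TRUSTED_CATALOG = {
    ("The Overstory", "Richard Powers"),  # real, Pulitzer-winning novel
}

def verify(recommendations):
    for title, author in recommendations:
        status = ("verified" if (title, author) in TRUSTED_CATALOG
                  else "unverified: do not present as fact")
        yield title, author, status

recs = [
    ("The Overstory", "Richard Powers"),
    ("Tidewater Dreams", "Isabel Allende"),  # reportedly fabricated
]
for row in verify(recs):
    print(row)
```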

However, technical fixes alone won’t solve the problem. The rise of AI-generated phantom books in libraries reflects deeper issues about information literacy, critical thinking, and the relationship between technology and expertise. Addressing these challenges requires coordinated efforts from educators, technologists, librarians, and policymakers.

Libraries are uniquely positioned to lead this effort. As institutions dedicated to information access and literacy, they have both the expertise and the public trust necessary to help communities navigate the age of AI-generated content. Many libraries are already expanding their digital literacy programs to include AI literacy, teaching patrons not just how to use these tools, but how to evaluate their outputs critically.

Conclusion

The phantom book phenomenon represents a canary in the coal mine for the broader challenges society faces as AI systems become more prevalent and sophisticated. While the immediate problem of non-existent book requests may seem minor, it reveals fundamental tensions between human expertise and artificial intelligence, between efficiency and accuracy, and between technological capability and user understanding.

The solution isn’t to reject AI technology entirely, but rather to develop more thoughtful approaches to its integration into information systems. This means designing AI tools that are honest about their limitations, training users to be more critical consumers of AI-generated content, and preserving space for human expertise and judgment in our information ecosystems.

Libraries have always served as guardians of accurate information and champions of critical thinking. As they adapt to the challenges posed by AI hallucinations, they’re not just solving an immediate operational problem—they’re helping to define what responsible AI integration looks like in public institutions. Their success or failure in this effort may well determine whether we can maintain trust in our information systems as artificial intelligence becomes increasingly ubiquitous.

The phantom books flooding library requests today may be just the beginning. But if librarians, technologists, and communities work together to address these challenges thoughtfully, we can ensure that the age of AI enhances rather than undermines our collective ability to find, evaluate, and share reliable information. The future of information literacy—and perhaps democracy itself—may depend on getting this balance right.