
The Prior Knowledge Problem

AI doesn’t create the knowledge gap, but it has made it harder to ignore. The people most likely to turn to AI for information are often the same people least equipped to evaluate what it returns. This isn’t a literacy problem that better prompting skills can solve. It’s an epistemic inequality problem, and AI has made it newly urgent. As a librarian, I keep coming back to the fact that we’ve known how to address this kind of problem before. In fact, we built a whole profession around it.

My own reflective AI use looks something like this: before opening a platform, I understand the project well enough to know what I actually need. I stay current enough to know the landscape of available tools, their relative strengths, and the privacy implications of each. That groundwork lets me match tool to task, because I already understand the task deeply.

During the interaction I bring sources, theorists, and a developing argument to the conversation. I recognize when the AI is being sycophantic, I push back on what I am given, and I know what a reasonable critic of my idea would say, so I ask for that too. I’m not asking for new information; I’m testing the borders of my own ideas and thinking with a tool.

After an exchange, I read the output against what I already know. I stay alert for confidently wrong answers. I know the norms and conventions of wherever the thing I’m working on will land, and I adapt accordingly.

Everything in that description (the project clarity, the prior sources, the ability to recognize flattery, the evaluative judgment) is something I brought to the task before I opened the platform. That’s not AI literacy; it’s the kind of knowledge a person spends a career building.

Now ask who else brings that type of approach to an AI interaction. Probably not the first-generation college student writing a paper on an unfamiliar topic, or the person navigating a health diagnosis without medical training, or the employee asked to use AI tools they’ve had no preparation to use. These are precisely the people most likely to turn to AI for help, and the least equipped to evaluate what comes back.

This isn’t about intelligence or effort. It’s about prior knowledge, and prior knowledge is not evenly distributed. It accumulates through education, professional experience, and access to institutions that build it deliberately over time. The people who have the most of it are also the people who need AI the least.

But what is most challenging at this moment is that AI doesn’t present itself as uncertain. It doesn’t say “I’m not sure about this” or “you might want to check with a specialist.” It answers confidently, with authority. A reader without the prior knowledge to push back has virtually no indication that anything could be wrong. Prompting skills can’t close this gap. That person needs access to knowledge, to tools, and to an understanding of how their information landscape is changing.

This is not a new problem. Unequal access to information, unequal ability to evaluate it, unequal understanding of how knowledge is produced and organized: these are problems we have understood for a long time. They’re why we have public libraries, school librarians, academic research support, and the entire infrastructure of information literacy instruction.

We built institutions and a profession around the idea that people shouldn’t have to navigate complex information landscapes alone. That expertise (knowing the tools, knowing the landscape, knowing how to match need to resource, and knowing how to evaluate what comes back) is what librarians do and have always done.

I know this is not simple. Most librarians I talk with are already at or past capacity, and “just do more” isn’t an answer. But I don’t think this is a moment of something new being added on top. Since I came into this field I’ve watched our users move increasingly online, and that shift has always shaped what services we develop, what approaches we prioritize, and what we let go. It has never meant everyone wholesale adopting an entirely new domain overnight. It has meant paying attention to where people are, picking up adjacent skills incrementally, and slowly integrating them until they become just one more tool in our toolkit for working with our community.

And here we are again. The landscape is shifting in ways that directly implicate what we already know how to do. And as William Gibson observed, “the future is already here — it’s just not evenly distributed.” There are librarians already deep in this work, developing new competencies and integrating AI into how they serve their communities. There are others paying cautious attention. Most of us are somewhere along that spectrum. And that is how a profession evolves. The question isn’t whether this is our job. The question is how we develop into it, the same way we always have.


This is a post in an ongoing project exploring libraries, knowledge, and the epistemic stakes of artificial intelligence. I’m drawing on social epistemology, feminist theory, and two decades of practice in academic libraries.