The Prior Knowledge Problem

AI doesn’t create the knowledge gap, but it has made it harder to ignore. The people most likely to turn to AI for information are often the same people least equipped to evaluate what it returns. This isn’t a literacy problem that better prompting skills can solve. It’s an epistemic inequality problem, and AI has made it newly urgent. As a librarian, I keep coming back to the fact that we’ve addressed this kind of problem before. In fact, we built a whole profession around it.

My own reflective AI use looks something like this: before opening a platform, I understand the project well enough to know what I actually need. I do my best to stay current enough to know the landscape of available tools, their relative strengths, and the privacy implications of each. That means I can match tool to task, because I already understand the task.

During the interaction I bring sources, theorists, and a developing argument to the conversation. I recognize when the AI is being sycophantic, I push back on what I’m given, and I know what a reasonable critic of my idea would say, so I ask for that too. I’m not asking for new information; I’m testing the borders of my own ideas, thinking with a tool.

After an exchange, I read the output against what I already know. I stay alert for confident wrong answers. I know the norms and conventions of wherever the thing I’m working on is going to land, and I adapt accordingly.

Everything in that description (the project clarity, the prior sources, the ability to recognize flattery, the evaluative judgment) is something I brought to the task before I opened the platform. That’s not AI literacy; it’s the kind of knowledge a person spends a career building.

Now ask who else brings that kind of approach to an AI interaction. Probably not the first-generation college student writing a paper on an unfamiliar topic, or the person navigating a health diagnosis without medical training, or the employee handed AI tools with no preparation for using them. These are precisely the people most likely to turn to AI for help, and the least equipped to evaluate what comes back.

This isn’t about intelligence or effort. It’s about prior knowledge, and prior knowledge is not evenly distributed. It accumulates through education, professional experience, and access to institutions that build it deliberately over time. The people who have the most of it are also the people who need AI the least.

But what is most challenging at this moment is that AI doesn’t present itself as uncertain. It doesn’t say “I’m not sure about this” or “you might want to check with a specialist.” It answers confidently, with authority. A reader without the prior knowledge to push back has virtually no indication that anything could be wrong. Prompting skills can’t close this gap. That person needs better access to knowledge, better tools, and an understanding of how their information landscape is changing.

This is not a new problem. Unequal access to information, unequal ability to evaluate it, unequal understanding of how knowledge is produced and organized are all problems that we have understood for a long time. They’re why we have public libraries, school librarians, academic research support, and the entire infrastructure of information literacy instruction.

We built institutions and a profession around the idea that people shouldn’t have to navigate complex information landscapes alone. That expertise (knowing the tools, knowing the landscape, knowing how to match need to resource, and knowing how to evaluate what comes back) is what librarians do and have always done.

I know this is not simple. Most librarians I talk with are already at or past capacity, and “just do more” isn’t an answer. But I don’t think this moment is about something new being piled on top. Since I came into this field I’ve watched our users move increasingly online, and that shift has always shaped what services we develop, what approaches we prioritize, and what we let go. It has never meant everyone adopting an entirely new domain overnight. It has meant paying attention to where people are, picking up adjacent skills incrementally, and slowly integrating them until they become one more tool in our toolkit for working with our community.

And here we are again. The landscape is shifting in ways that directly implicate what we already know how to do. And as William Gibson observed, “the future is already here — it’s just not evenly distributed.” There are librarians already deep in this work, developing new competencies and integrating AI into how they serve their communities. There are others paying cautious attention. Most of us are somewhere along that spectrum. And that is how a profession evolves. The question isn’t whether this is our job. The question is how we develop into it, the same way we always have.


This is a post in an ongoing project exploring libraries, knowledge, and the epistemic stakes of artificial intelligence. I’m drawing on social epistemology, feminist theory, and two decades of practice in academic libraries.

The Obsolescence Argument Has It Backwards

Everyone seems to agree that artificial intelligence is going to change education, research, and libraries. The disagreement is about direction. The dominant narrative, at least in some technology circles, goes like this: AI can find information, synthesize sources, and answer questions. It’s no surprise that people hearing that argument from the media and tech commentators point out that libraries and librarians do those things too, and conclude that libraries are in trouble.

But to anyone who sits at the intersection of technology and libraries, it’s abundantly clear that AI doesn’t make libraries obsolete; it makes them more essential.


I’ve been thinking about knowledge systems for a long time. My undergraduate degrees were in philosophy and communication, with a minor in Women’s and Gender Studies, and the questions that animated these fields were the same: Who knows? Under what conditions? With what authority, and on whose behalf? Those questions led me to library science, and they’ve shaped how I’ve understood this work ever since.

Two frameworks have always been particularly generative for me. The first is social epistemology, a term Jesse Shera and Margaret Egan developed in the mid-twentieth century to describe libraries not as warehouses of information but as infrastructure for how communities produce and share knowledge. Libraries, in this view, are epistemic institutions. They don’t just store what we know; they shape the conditions under which knowing is possible. (Incidentally, social epistemology also developed within philosophy, with a somewhat different emphasis, a few decades later.)

The second is feminist epistemology, particularly Donna Haraway’s concept of situated knowledges. Haraway’s argument, made in a landmark 1988 essay, is that all knowledge is produced from somewhere: from a particular body, a particular history, a particular set of social relations. Claims to view-from-nowhere objectivity, what she calls the “god trick,” are not neutral. They are themselves a kind of power move, one that erases the conditions of knowledge production and forecloses accountability. Sandra Harding’s standpoint theory extends this: knowledge produced from the margins, from positions of accountability rather than dominance, is often more comprehensive, not less, because it cannot afford to ignore what the center takes for granted.

These frameworks were developed to critique science. But you can see why I keep coming back to them today.


Large language models perform exactly the god trick Haraway identified. They synthesize at scale without provenance. They produce authoritative-sounding outputs whose origins are opaque, whose training data encodes historical power imbalances, and whose confident tone actively discourages the epistemic humility that good inquiry requires. They are, in Harding’s terms, knowledge produced from nowhere. And this means they are making claims from a position that cannot be held accountable.

This is not primarily a technical problem. It is an epistemic one. And it is precisely the problem that libraries, at their best, are structured to address.

Libraries curate situated knowledge. They preserve provenance. They maintain the bibliographic infrastructure that allows a reader to ask: who said this, when, from what position, in conversation with whom? They select, describe, and organize materials in ways that make the conditions of knowledge production visible rather than erasing them. They employ people (librarians!) whose professional expertise is not only finding information but teaching the critical practices that allow communities to evaluate it.

None of that is replicable by a system that has been specifically designed to flatten those distinctions into fluent prose.


I’m not arguing that AI is useless, or that libraries should resist it, or that the landscape isn’t changing. It is changing, and libraries need to engage with that change thoughtfully and without too much nostalgia. What I am arguing against is the idea that AI supersedes libraries. When someone asks whether AI makes libraries obsolete, the question implicitly accepts a definition of libraries as information retrieval systems, a definition that was always reductive and is now actively misleading. Libraries are epistemic infrastructure. They are, in Shera and Egan’s terms, the social mechanisms through which communities organize their relationship to knowledge.

AI doesn’t replace that. It creates new urgency for it.

The more our information environment is shaped by systems that perform objectivity while encoding power, the more we need institutions committed to making those dynamics visible. As synthetic text becomes more fluent and authoritative, it becomes more important for people to maintain the skills of citation, provenance, critical evaluation, and the slow work of understanding where knowledge comes from. These are the skills that libraries cultivate.

The obsolescence argument has it exactly backwards. This is the moment libraries were built for.


This is the first post in an ongoing project exploring libraries, knowledge, and the epistemic stakes of artificial intelligence. I’m drawing on social epistemology, feminist theory, and two decades of practice in academic libraries.

Before we begin

Years ago I kept a blog (at this URL, even!) where I thought out loud about libraries, knowledge, and the profession I’d built my career around. I was good at it for a while, and then I wasn’t, and then I stopped for all the usual reasons: a changing life phase, less personal time to spend on it, an increasingly demanding institutional role, the way the platforms evolved away from places of earnest and open discussion… I drifted so far from blogging and this website that when a backup didn’t capture all the files, I wasn’t even all that disappointed.

But lately I’ve really missed thinking in public with other colleagues interested in exploring the same ideas. And lately I’ve been thinking a lot about academic libraries, our information environment, and the ways we talk about and use artificial intelligence.

AI is reshaping how people find, evaluate, and trust information. Within libraries we have people all across the spectrum, from those who fully embrace it to those who believe it has no place near our work. One of the dominant narratives outside the profession treats libraries as information retrieval systems and concludes that AI makes them redundant. This framing mistakes one function for the whole institution. Libraries are epistemic infrastructure. They are the mechanisms through which communities organize their relationship to knowledge. AI doesn’t replace that, but it does make that role all the more urgent.

This lens keeps coming up for me in conversations across very different spheres. Jesse Shera and Margaret Egan’s social epistemology, which understands libraries not as warehouses but as institutions that shape the conditions under which knowing is possible, is foundational to how I think about this work. So is feminist epistemology, particularly Donna Haraway’s concept of situated knowledges and Sandra Harding’s standpoint theory. These frameworks were built to interrogate science, but they turn out to be extremely useful for interrogating AI as well.

I’m writing as a person who has spent two decades in academic libraries and who has been thinking about knowledge, power, and institutions since an undergraduate philosophy degree made those questions unavoidable. At this URL, I am not writing as an institutional voice. This is a thinking space. I’m hoping that arguments will develop, get complicated, and occasionally get revised. I expect to adapt to new information.

What follows this post is the first real argument: why the obsolescence narrative has it backwards, and what a clearer account of libraries and knowledge reveals about the epistemic stakes of this moment.

I’m still trying to understand where people talk about these things today. In some ways everything was a lot cleaner when the answer was a blog with open comments, an RSS reader, and Twitter. The messiness of our knowledge environment today (LinkedIn? Bluesky? Mastodon? Substack? Chat threads? Everywhere?) resonates with the messiness of the information ecosystem I’m trying to write about.