What Libraries Actually Do

One of the biggest challenges for libraries telling their story is that the library means something different depending on who’s in the room. For a student, it’s a place to study, staffed by people who want them to succeed. For a faculty member, it might be the invisible infrastructure that delivers electronic articles or the subject liaison who visits their class each year. For a graduate student, it’s often something more: a research partner, a data collaborator, a guide through the methodological expectations of a discipline. And that’s before accounting for the larger information environment those users are already swimming in, where disciplines publish at high rates; trade publications, newsletters, and Substacks produce sense-making content daily; and social media, messaging apps, and community channels add still more. The information environment is a flood around us, and the library is one stream of that flow.

Because of this, for my entire career I’ve started from the question of what the typical user actually does when looking for information. That question should shape how we structure the materials we license and steward, how we design services that meet patron behavior rather than assume it, and how we think about instruction. It was the right question when search engines were starting to work well and Wikipedia launched, and it remains the right question now.

Which is why I keep returning to Margaret Egan and Jesse Shera. They weren’t describing what libraries do. They were describing what knowledge requires, and they built that argument from inside library science. That distinction matters enormously right now, because it means the epistemic infrastructure libraries provide isn’t only a feature of libraries as institutions. It’s a feature of how communities actually come to know things.

When Shera and Egan introduced the term “social epistemology” in 1952, they were writing at a particular moment in information science history, when the field was working to establish its intellectual legitimacy. They were pushing against a narrower conception of the field as technical retrieval, arguing that the social dimension wasn’t supplementary to the epistemic function but constitutive of it. “Social” in this sense doesn’t mean communal or community-oriented in a casual way. What they meant was structural: knowledge is not produced by individual minds in isolation and then deposited into libraries for safekeeping. It is produced through systems of validation, circulation, critique, and preservation, and libraries are part of the infrastructure that makes those systems work. Shera would return to and refine this framework across the following decades, and the thread runs forward through Wilson’s work on cognitive authority, through Chatman’s research on information poverty, and eventually into the ACRL Information Literacy Framework, even when the explicit vocabulary of social epistemology wasn’t used. The social is epistemic.

I’ve been thinking about this framework for over twenty years, and I keep returning to the same question: why didn’t it take over the profession? The framework was there in 1952. Information literacy has been moving toward it for decades, most recently with the ACRL Information Literacy Framework. And yet the dominant self-description of libraries remained focused on access and delivery for most of that period. Part of this, I suspect, is a cultural problem. Whenever I talk with someone about libraries, they want to reminisce about the last one they used in any significant way, which means the conversation often starts with card catalogs, or surprise that students can eat in the library now, or wondering where all the books went. When people carry such varied, yet largely book-based, memories, it’s hard to talk about where libraries are going without first establishing where they actually are. And when the people you’re trying to reach are administrators facing their own funding pressures and a desire for metrics, the epistemic argument can feel harder to make than one based in easy-to-report circulation counts.

What does it mean to be epistemic infrastructure rather than an information warehouse? The warehouse metaphor, which was easy to count in gate entries and items circulated, treats knowledge as something that exists prior to the library and gets stored there. The infrastructure metaphor treats the library as part of what makes certain kinds of knowing possible at all, not a convenience for accessing knowledge that would exist regardless, but a condition for the scholarly practices through which knowledge gets produced, validated, and preserved. The metaphor here is road versus car. The road doesn’t move the car. But without it, the car doesn’t go anywhere useful. The library is the road. But it might be even more accurate to say it’s the whole Department of Transportation, responsible not just for the surface you drive on but for whether the road reaches your neighborhood at all.

This is the frame through which I think the current AI moment becomes legible. The proliferation of AI tools in research and information seeking isn’t asking libraries to become something new. It’s asking libraries to be more fully what Shera and Egan described seven decades ago. As AI systems become embedded in how people search, synthesize, and evaluate information, the question of what epistemic infrastructure exists to support genuine knowing becomes more urgent, not less. The road matters more when the vehicles are faster and harder to steer. Libraries that understand themselves as epistemic infrastructure, as systems that make certain kinds of knowing possible for their communities, are positioned to do that work. Libraries that understand themselves primarily as access points to content are in a harder position to articulate why they matter when access has become frictionless and ubiquitous.

For the librarians reading this: this is why the work you’re already doing is philosophically serious, not just practically useful. For the administrators and institutional leaders in the room: understanding libraries as epistemic infrastructure changes what decisions about them actually mean.


This is a post in an ongoing project exploring libraries, knowledge, and the epistemic stakes of artificial intelligence. I’m drawing on social epistemology, feminist theory, and two decades of practice in academic libraries.

Thinking in Public, Again

When I started blogging, I was in library school. I continued to blog into my first librarian position and into my first managerial role. I shifted away from rough-draft public thinking and into more formal presentations and service work as I moved further into administration. For one thing, I was learning a new organizational culture and didn’t want to represent my team poorly. But I was also aware that the hierarchical asymmetry meant people within my organization might not feel they could easily push back, which meant I might not know if something landed incorrectly. Given my interest in social epistemology, and my understanding of how positionality shapes the feedback you receive, it felt like thinking in public was in tension with my institutional role. The cost of that shift wasn’t just reputational caution: I lost touch with the writing practice that kept me engaged in the intellectual work.

And now that I’m back at Wake Forest, with my feet under me, it felt like the right context to try again. This is a smaller organization, one where I can know everyone who works in the library, some of whom remember my earlier blogging practice. I know the University very well, across decades. I also think there is enough trust that people would tell me (or would tell someone who could tell me) if something I wrote made their lives harder.

I started this blog because I had accumulated a lot of thoughts about AI and librarianship: threads of ideas that I hadn’t yet woven together. I thought I could write about them publicly, hear responses, evolve my thinking, and find people interested in the same slice of the issue I can’t stop thinking about. If it went well, maybe the content would find its way to an article or a presentation.

I’ve been at it long enough now that I’m beginning to remember what blogging meant to me professionally. Ideas lead to more ideas. Each time I post something I find three more tangents that I want to pin down. I’m reading less out of obligation and more because something might connect to what I’m already thinking about. Giving myself permission to write is also giving myself permission to think seriously about things that don’t have an obvious deliverable, and it’s a way to give the people I work with a window into that thinking if they want one. I’ve worked in enough organizations where I wished I knew more about how leadership was thinking through emerging issues to know that access to some of that interior process can be helpful.

So the practice is good for me whether or not it finds a reader, though I do have a reader in mind. This blog is my attempt to translate the critical AI literature for people with institutional power to act on it: whether that’s a librarian working directly with a patron, or a dean, or a provost making decisions about how their institution engages with these tools. There are thoughtful people focused on the practical dimensions of AI use, and equally thoughtful people working on the theoretical and critical dimensions. I find myself most interested in bridging those two areas. The critical literature exists, the practitioner literature exists, and there’s a space between them, and that’s where I’m trying to work. And that’s shown up across posts, whether it’s exploring how AI’s authoritative tone obscures what it returns, or how a broad view of information literacy points the way towards what AI literacy may be.

Most people doing this work have never needed the word “epistemic” to do librarianship well, and that’s exactly the point. The framework isn’t new vocabulary for new work. It’s a way of seeing what was already there. That’s what’s happening here: naming what librarians have always known how to do, so that knowledge doesn’t get lost in the moment that most needs it.

The Prior Knowledge Problem

AI doesn’t create the knowledge gap, but it has made it harder to ignore. The people most likely to turn to AI for information are often the same people least equipped to evaluate what it returns. This isn’t a literacy problem that better prompting skills can solve. It’s an epistemic inequality problem, and AI has made it newly urgent. As a librarian, I keep coming back to the fact that we’ve known how to address this kind of problem before. In fact, we built a whole profession around it.

My own reflective AI use actually looks something like this: before opening a platform, I understand the project well enough to know what I actually need. I try my best to stay current enough to know the landscape of available tools, their relative strengths, and the privacy implications of each. Because of this, I can match tool to task because I already understand the task deeply.

During the interaction I bring sources, theorists, and a developing argument to the conversation. I recognize when the AI is being sycophantic, I push back on what I am given, and I know what a reasonable critic of my own idea would say, so I ask for that too. I’m not asking for new information, but rather I’m testing the borders of my own ideas and thinking with a tool.

After an exchange, I read the output against what I already know. I remind myself to spot confident wrong answers. I know the norms and conventions of wherever the thing I’m working on is going to land, and I adapt accordingly.

Everything in that description (the project clarity, the prior sources, the ability to recognize flattery, the evaluative judgment) is something I brought to the task before I opened the platform. That’s not AI literacy; it’s the type of knowledge a person spends a career building.

Now ask who else brings that type of approach to an AI interaction. Probably not the first-generation college student writing a paper on an unfamiliar topic, or the person navigating a health diagnosis without medical training, or the employee asked to use AI tools they’ve had no preparation to use. These are precisely the people most likely to turn to AI for help, and the least equipped to evaluate what comes back.

This isn’t about intelligence or effort. It’s about prior knowledge, and prior knowledge is not evenly distributed. It accumulates through education, professional experience, and access to institutions that build it deliberately over time. The people who have the most of it are also the people who need AI the least.

But what is most challenging at this moment is that AI doesn’t present itself as uncertain. It doesn’t say “I’m not sure about this” or “you might want to check with a specialist.” It answers confidently, with authority. A reader without the prior knowledge to push back has virtually no indication that anything could be wrong. Prompting skills can’t close this gap. That person needs more access to knowledge, tools, and an understanding of how their information landscape is changing.

This is not a new problem. Unequal access to information, unequal ability to evaluate it, unequal understanding of how knowledge is produced and organized are all problems that we have understood for a long time. They’re why we have public libraries, school librarians, academic research support, and the entire infrastructure of information literacy instruction.

We built institutions and a profession around the idea that people shouldn’t have to navigate complex information landscapes alone. That expertise (knowing the tools, knowing the landscape, knowing how to match need to resource, and knowing how to evaluate what comes back) is what librarians do and have always done.

I know this is not simple. Most librarians I talk with are already at or past capacity, and “just do more” isn’t an answer. But I don’t think this is a moment of something new being added on top. Since I came into this field I’ve watched our users move increasingly online, and that shift has always shaped what services we develop, what approaches we prioritize, and what we let go. It has never meant everyone wholesale adopting an entirely new domain overnight. It has meant paying attention to where people are, picking up adjacent skills incrementally, and slowly integrating them until they become just one more tool in our toolkit for working with our community.

And here we are again. The landscape is shifting in ways that directly implicate what we already know how to do. And as William Gibson observed, “the future is already here — it’s just not evenly distributed.” There are librarians already deep in this work, developing new competencies and integrating AI into how they serve their communities. There are others paying cautious attention. Most of us are somewhere along that spectrum. And that is how a profession evolves. The question isn’t whether this is our job. The question is how we develop into it, the same way we always have.



The Knowledge We’ve Always Built

There’s something genuinely strange about this moment for libraries. The tools that seem most likely to make us obsolete are also the ones that most clearly reveal what we were doing all along. More information is available than ever before, synthesized, immediate, apparently authoritative. And yet the questions that matter most are only getting harder to answer. What’s trustworthy? How do you know? Who decides? Libraries have been working on those questions for a long time. We just didn’t always have to say so out loud.

Information vs. Knowledge

When I hear excitement about AI, it’s almost always about access to information. And access to information is genuinely useful. But having spent a lot of time with the data-information-knowledge-wisdom framework¹, I’m aware that information and knowledge aren’t the same thing. Moving from one to the other requires context: understanding the nuance of what you’re seeing, where it came from, and how it fits into what you already know.

Libraries are centered around exactly that work. We pay attention to publishers, to trends in a literature, to publication types. We help students understand why a publication date matters, whether a study is quantitative or qualitative, how to evaluate whether a source actually supports the argument they’re building.

Outcomes, not metrics

Libraries have a long history of understanding the need to demonstrate their value. One place we turn is the metrics and statistics we can share with stakeholders to prove the community benefits from their library. We count the number of items in our collection, the number of people through the door, the number of reference consultations we provide, and the number of classes we teach. Those numbers were the right answer to the questions we were being asked, but AI is changing the questions.

AI clarifies this for us. When information is available from anywhere, talking about access becomes less useful. We have to say something truer about what we actually do, and that means recovering language for work we’ve been doing all along.

Libraries were never only about access. They were about the social infrastructure of knowledge: the systems through which communities come to know things together, evaluate what’s trustworthy, and preserve the conditions for doing that work well. Margaret Egan and Jesse Shera understood this in the 1950s². The ACRL Framework for Information Literacy, with its insistence that “authority is constructed and contextual,” is evidence that the profession has been moving toward that understanding for years. AI didn’t create this argument. It just made it impossible to avoid.

This is where the library’s expertise becomes irreplaceable, and exactly the area that is at risk when people accept AI outputs without understanding what’s missing. Librarians learn context not as background information but as the substance of the work. Understanding how knowledge is produced (the research process, peer review, publication venues, the difference between a preprint and a published study) is what makes it possible to build collections worth preserving, and to help students and faculty find not just information but validated knowledge they can actually build on.

Load-bearing shoulders

In trying to find a theory or framework that describes the importance of scaffolding knowledge, and the expertise librarians bring to that work, I keep being drawn to constructivism. These days it strikes me as a useful framing for how scholarship itself works.

Constructivism holds that knowledge builds on existing knowledge, just as research articles are grounded in literature reviews, citations, and peer review. It describes knowledge as socially constructed through dialogue, which happens in research through what the ACRL Framework calls “scholarship as conversation.” It requires authentic context, which is exactly what AI strips out. And it expects active engagement with ideas, not passive receipt of synthesized outputs.

You can only stand on the shoulders of giants if someone has been paying attention to which shoulders are load-bearing. And there is an entire profession doing exactly that work.

In all the AI discourse I continue to think about what it means for librarianship. I know that we will always be in the business of access to information. But I can’t help believing we’ll shift towards centering knowledge in the future, and I am thinking about what that might mean for the work. I’m curious what you see from your position in the field.

  1. Ackoff, Russell (1989). “From Data to Wisdom.” Journal of Applied Systems Analysis 16: 3–9.
  2. Egan, Margaret E. and Shera, Jesse H. (1952). “Foundations of a Theory of Bibliography.” The Library Quarterly 22(2): 125–137.


The Middle of the AI Conversation Is Where the Work Is

There are two camps in most AI conversations, and by now you probably know which one is yours. In one camp: the enthusiasts, the builders, the people who are busy using the technology so regularly that they barely have time to read and reflect on the critiques. In the other: the scholars and critics who have built real intellectual credibility on careful distance from hype, for whom deep hands-on engagement with the tools has become something like an ethical compromise, a sign of capture, naivety, or insufficient rigor.

This divide isn’t only a values disagreement: it’s also, and maybe more importantly, a structural one. Both camps are operating under real social and professional constraints that make crossing over genuinely costly. The enthusiasts aren’t ignoring the critique literature because they’re incurious. They’re busy building with and testing the tools. The critics aren’t avoiding the technology because they’re afraid of what they’d find. Spending time experimenting with the tools risks their credibility with the people whose respect they’ve earned. Neither camp is wrong about their own situation. The incentive structures just happen to produce a conversation where the people with the most exposure to the technology’s actual affordances aren’t positioned to think carefully about what’s at stake, and the people who are positioned to do that thinking aren’t getting sustained hands-on experience. Both kinds of knowledge exist. They’re just not in the same room or social media threads.

The discourse looks different face to face. Online, the two camps dominate. In actual conversation I find far more people who are genuinely uncertain, watching, and waiting. They’re quieter, because holding the question open doesn’t perform as well as certainty in most public discourse. But those waiting and watching are having experiences that shape their perception of these tools. What this middle group needs isn’t more evidence: they’ve seen the studies and the horror stories. What they need is language for the tension they’re already living. They are looking for a framework that doesn’t ask them to resolve the complexity before they’re ready, one that lets them continue to learn from the middle.

Libraries can’t afford to pick a side, though it seems that many have. Some rushed toward enthusiasm, repositioning themselves as AI integration hubs before anyone had language for what was actually changing. Others have planted flags in critique, which is intellectually serious but leaves them unable to advise the faculty member who just needs to know what to do with their upcoming research assignment.

I have conversations regularly with people who ask me, without malice, whether libraries will exist in a world with AI. It’s a real question and it deserves a considered answer. But it’s only possible to answer it from the middle because the answer requires understanding what the technology actually does and doesn’t do, and being able to name what’s genuinely at stake when information ecosystems shift. Neither camp alone gets you there.

When you’re paying attention from the right position, the gaps become visible: the things AI doesn’t do, or can’t do, or shouldn’t do, or that a person still needs to do. Much of my career has been based on finding gaps and doing the work inside them. That instinct isn’t a survival strategy. It’s just what happens when you’re paying close enough attention. And we’ll all need to take that approach with AI.

