The Knowledge We’ve Always Built

There’s something genuinely strange about this moment for libraries. The tools that seem most likely to make us obsolete are also the ones that most clearly reveal what we were doing all along. More information is available than ever before: synthesized, immediate, apparently authoritative. And yet the questions that matter most are only getting harder to answer. What’s trustworthy? How do you know? Who decides? Libraries have been working on those questions for a long time. We just didn’t always have to say so out loud.

Information vs. Knowledge

When I hear excitement about AI, it’s almost always about access to information. And access to information is genuinely useful. But having spent a lot of time with the data-information-knowledge-wisdom framework¹, I’m aware that information and knowledge aren’t the same thing. Moving from one to the other requires context: understanding the nuance of what you’re seeing, where it came from, and how it fits into what you already know.

Libraries are centered on exactly that work. We pay attention to publishers, to trends in a literature, to publication types. We help students understand why a publication date matters, whether a study is quantitative or qualitative, and how to evaluate whether a source actually supports the argument they’re building.

Outcomes, not metrics

Libraries have a long history of understanding the need to demonstrate their value. One place we turn is the metrics and statistics we can share with stakeholders to prove that the community benefits from its library. We count the number of items in our collection, the number of people through the door, the number of reference consultations we provide, and the number of classes we teach. Those numbers were the right answer to the questions we were being asked, but AI is changing the questions.

AI clarifies this for us. When information is available from anywhere, talking about access becomes less useful. We have to say something truer about what we actually do, and that means recovering language for work we’ve been doing all along.

Libraries were never only about access. They were about the social infrastructure of knowledge: the systems through which communities come to know things together, evaluate what’s trustworthy, and preserve the conditions for doing that work well. Margaret Egan and Jesse Shera understood this in the 1950s². The ACRL Framework for Information Literacy, with its insistence that “authority is constructed and contextual,” is evidence that the profession has been moving toward that understanding for years. AI didn’t create this argument. It just made it impossible to avoid.

This is where the library’s expertise becomes irreplaceable, and exactly the area at risk when people accept AI outputs without understanding what’s missing. Librarians learn context not as background information but as the substance of the work. Understanding how knowledge is produced (the research process, peer review, publication venues, the difference between a preprint and a published study) is what makes it possible to build collections worth preserving, and to help students and faculty find not just information but validated knowledge they can actually build on.

Load-bearing shoulders

In trying to find a theory or framework to describe the importance of scaffolding knowledge and the expertise librarians bring to this work, I keep being drawn to constructivism. These days it strikes me as a useful framing for how scholarship itself works.

Constructivism holds that knowledge builds on existing knowledge; research articles enact this through literature reviews, citations, and peer review. It describes knowledge as socially constructed through dialogue, the dynamic the ACRL Framework names “scholarship as conversation.” It requires authentic context, which is exactly what AI strips out. And it expects active engagement with ideas, not passive receipt of synthesized outputs.

You can only stand on the shoulders of giants if someone has been paying attention to which shoulders are load-bearing. And there is an entire profession doing exactly that work.

Amid all the AI discourse, I keep thinking about what it means for librarianship. I know we will always be in the business of access to information. But I can’t help believing we’ll shift toward centering knowledge, and I’m thinking about what that might mean for the work. I’m curious what you see from your position in the field.

  1. Ackoff, Russell (1989). “From Data to Wisdom.” Journal of Applied Systems Analysis 16: 3–9.
  2. Egan, Margaret E. and Shera, Jesse H. (1952). “Foundations of a Theory of Bibliography.” The Library Quarterly 22(2): 125–137.

The Middle of the AI Conversation Is Where the Work Is

There are two camps in most AI conversations, and by now you probably know which one is yours. In one camp: the enthusiasts, the builders, the people using the technology so regularly that they barely have time to read and reflect on the critiques. In the other: the scholars and critics who have built real intellectual credibility on careful distance from hype, for whom deep hands-on engagement with the tools has come to look like an ethical compromise, a sign of capture, naivety, or insufficient rigor.

This divide isn’t only a values disagreement: it’s also, and maybe more importantly, a structural one. Both camps are operating under real social and professional constraints that make crossing over genuinely costly. The enthusiasts aren’t ignoring the critique literature because they’re incurious; they’re busy building with and testing the tools. The critics aren’t avoiding the technology because they’re afraid of what they’d find; spending time experimenting with the tools risks their credibility with the people whose respect they’ve earned. Neither camp is wrong about its own situation. The incentive structures just happen to produce a conversation in which the people with the most exposure to the technology’s actual affordances aren’t positioned to think carefully about what’s at stake, and the people who are positioned to do that thinking aren’t getting sustained hands-on experience. Both kinds of knowledge exist. They’re just not in the same room, or the same social media threads.

The discourse looks different face to face. Online, the two camps dominate. In actual conversation I find far more people who are genuinely uncertain, watching, and waiting. They’re quieter, because holding the question open doesn’t perform as well as certainty in most public discourse. But those watching and waiting are having experiences that shape their perception of these tools. What this middle group needs isn’t more evidence: they’ve seen the studies and the horror stories. What they need is language for the tension they’re already living, a framework that doesn’t ask them to resolve the complexity before they’re ready and that lets them keep learning from the middle.

Libraries can’t afford to pick a side, though it seems that many have. Some rushed toward enthusiasm, repositioning themselves as AI integration hubs before anyone had language for what was actually changing. Others have planted flags in critique, which is intellectually serious but leaves them unable to advise the faculty member who just needs to know what to do with their upcoming research assignment.

I regularly have conversations with people who ask me, without malice, whether libraries will exist in a world with AI. It’s a real question, and it deserves a considered answer. But it can only be answered from the middle, because the answer requires understanding what the technology actually does and doesn’t do, and being able to name what’s genuinely at stake when information ecosystems shift. Neither camp alone gets you there.

When you’re paying attention from the right position, the gaps become visible: the things AI doesn’t do, or can’t do, or shouldn’t do, or that a person still needs to do. Much of my career has been based on finding gaps and doing the work inside them. That instinct isn’t a survival strategy. It’s just what happens when you’re paying close enough attention. And we’ll all need to take that approach with AI.


This is a post in an ongoing project exploring libraries, knowledge, and the epistemic stakes of artificial intelligence. I’m drawing on social epistemology, feminist theory, and two decades of practice in academic libraries.