
The Middle of the AI Conversation Is Where the Work Is

There are two camps in most AI conversations, and by now you probably know which one is yours. In one camp: the enthusiasts, the builders, the people using the technology so regularly that they barely have time to read and reflect on the critiques. In the other: the scholars and critics who have built real intellectual credibility on careful distance from hype, for whom deep hands-on engagement with the tools has become something like an ethical compromise: a sign of capture, naivety, or insufficient rigor.

This divide isn’t only a values disagreement: it’s also, and maybe more importantly, a structural one. Both camps are operating under real social and professional constraints that make crossing over genuinely costly. The enthusiasts aren’t ignoring the critique literature because they’re incurious. They’re busy building with and testing the tools. The critics aren’t avoiding the technology because they’re afraid of what they’d find. Spending time experimenting with the tools risks their credibility with the people whose respect they’ve earned. Neither camp is wrong about its own situation. The incentive structures just happen to produce a conversation where the people with the most exposure to the technology’s actual affordances aren’t positioned to think carefully about what’s at stake, and the people who are positioned to do that thinking aren’t getting sustained hands-on experience. Both kinds of knowledge exist. They’re just not in the same room, or the same social media threads.

The discourse looks different face to face. Online, the two camps dominate. In actual conversation I find far more people who are genuinely uncertain, watching, and waiting. They’re quieter, because holding the question open doesn’t perform as well as certainty in most public discourse. But those watching and waiting are having experiences that shape their perception of these tools. What this middle group needs isn’t more evidence: they’ve seen the studies and the horror stories. What they need is language for the tension they’re already living. They’re looking for a framework that doesn’t ask them to resolve the complexity before they’re ready, and one that lets them continue to learn from the middle.

Libraries can’t afford to pick a side, though it seems that many have. Some rushed toward enthusiasm, repositioning themselves as AI integration hubs before anyone had language for what was actually changing. Others have planted flags in critique, which is intellectually serious but leaves them unable to advise the faculty member who just needs to know what to do with their upcoming research assignment.

I regularly have conversations with people who ask me, without malice, whether libraries will exist in a world with AI. It’s a real question and it deserves a considered answer. But it can only be answered from the middle, because answering it requires understanding what the technology actually does and doesn’t do, and being able to name what’s genuinely at stake when information ecosystems shift. Neither camp alone gets you there.

When you’re paying attention from the right position, the gaps become visible: the things AI doesn’t do, can’t do, or shouldn’t do, and the things a person still needs to do. Much of my career has been built on finding gaps and doing the work inside them. That instinct isn’t a survival strategy. It’s just what happens when you’re paying close enough attention. And we’ll all need to take that approach with AI.


This is a post in an ongoing project exploring libraries, knowledge, and the epistemic stakes of artificial intelligence. I’m drawing on social epistemology, feminist theory, and two decades of practice in academic libraries.