Tag: Large Language Models

  • The Obsolescence Argument Has It Backwards

    Everyone seems to agree that artificial intelligence is going to change education, research, and libraries. The disagreement is about direction. The dominant narrative, at least in some technology circles, goes like this: AI can find information, synthesize sources, and answer questions. Unsurprisingly, people who hear that argument from media and tech commentators point out that libraries and librarians do those things too, and conclude that libraries are in trouble.

    But to anyone who sits at the intersection of technology and libraries, it’s abundantly clear that AI doesn’t make libraries obsolete; it makes them more essential.


    I’ve been thinking about knowledge systems for a long time. My undergraduate degrees were in philosophy and in communication, with a minor in Women’s and Gender Studies, and the questions that animated these fields were the same ones: Who knows? Under what conditions? With what authority, and on whose behalf? Those questions led me to library science, and they’ve shaped how I’ve understood this work ever since.

    Two frameworks have always been particularly generative for me. The first is social epistemology, a term developed by Jesse Shera and Margaret Egan in the mid-twentieth century. It understands libraries not as warehouses of information but as infrastructure for how communities produce and share knowledge. Libraries, in this view, are epistemic institutions. They don’t just store what we know; they shape the conditions under which knowing is possible. (Incidentally, social epistemology also developed within philosophy, with a slightly different inflection, a few decades later.)

    The second is feminist epistemology, particularly Donna Haraway’s concept of situated knowledges. Haraway’s argument, made in a landmark 1988 essay, is that all knowledge is produced from somewhere: from a particular body, a particular history, a particular set of social relations. Claims to view-from-nowhere objectivity, what she calls the “god trick,” are not neutral. They are themselves a kind of power move, one that erases the conditions of knowledge production and forecloses accountability. Sandra Harding’s standpoint theory extends this: knowledge produced from the margins, from positions of accountability rather than dominance, is often more comprehensive, not less, because it cannot afford to ignore what the center takes for granted.

    These frameworks were developed to critique science. But you can see why I keep coming back to them today.


    Large language models perform exactly the god trick Haraway identified. They synthesize at scale without provenance. They produce authoritative-sounding outputs whose origins are opaque, whose training data encodes historical power imbalances, and whose confident tone actively discourages the epistemic humility that good inquiry requires. They are, in Harding’s terms, knowledge produced from nowhere. And this means they are making claims from a position that cannot be held accountable.

    This is not primarily a technical problem. It is an epistemic one. And it is precisely the problem that libraries, at their best, are structured to address.

    Libraries curate situated knowledge. They preserve provenance. They maintain the bibliographic infrastructure that allows a reader to ask: who said this, when, from what position, in conversation with whom? They select, describe, and organize materials in ways that make the conditions of knowledge production visible rather than erasing them. They employ people (librarians!) whose professional expertise is not only finding information but teaching the critical practices that allow communities to evaluate it.

    None of that is replicable by a system that has been specifically designed to flatten those distinctions into fluent prose.


    I’m not arguing that AI is useless, or that libraries should resist it, or that the landscape isn’t changing. It is changing, and libraries need to engage with that change thoughtfully and without too much nostalgia. What I am arguing against is the idea that AI supersedes libraries. When someone asks whether AI makes libraries obsolete, the questioner implicitly accepts a definition of libraries as information retrieval systems. That is a definition that was always reductive and is now actively misleading. Libraries are epistemic infrastructure. They are, in Shera and Egan’s terms, the social mechanisms through which communities organize their relationship to knowledge.

    AI doesn’t replace that. It creates new urgency for it.

    The more our information environment is shaped by systems that perform objectivity while encoding power, the more we need institutions committed to making those dynamics visible. As synthetic text becomes more fluent and authoritative, it will become more important for people to maintain skills in citation, provenance, and critical evaluation, and to keep doing the slow work of understanding where knowledge comes from. These are the practices that libraries cultivate.

    The obsolescence argument has it exactly backwards. This is the moment libraries were built for.


    This is the first post in an ongoing project exploring libraries, knowledge, and the epistemic stakes of artificial intelligence. I’m drawing on social epistemology, feminist theory, and two decades of practice in academic libraries.