When someone says they’ve been “reading,” you don’t actually know what they’ve been doing. They might have spent a week with a dense classic novel. They might have scrolled through their phone for twenty minutes. Both are reading in a technical sense: their eyes move across text and they process the words they see. But the cognitive activities involved are so different that calling them by the same name obscures more than it reveals. One develops the capacity for sustained attention, enables the reader to enter a fictional world, and requires tracking complex characters across hundreds of pages. The other is closer to foraging. It may surface interesting and relevant information, but the cognitive work is different. Walter Ong would say these aren’t even the same species of activity. He argued that different communication technologies don’t just change how we do something but produce fundamentally different kinds of cognitive events.
We have this problem with AI, and it’s worse. “Using AI” currently describes a number of different activities. You may use AI to ask a chatbot what to make for dinner, to draft a briefing document, to generate data for a research study, to let a recommendation algorithm pick a movie, to vibe code, or to build a tutoring system that adapts to individual learners. These are not variations on a single activity. They involve different tools with genuinely different capabilities, different cognitive demands, different stakes, and different relationships to truth and accountability. And they don’t collapse neatly into a skill hierarchy. (We’re also all aware that there are some things AI does badly regardless of how well you’ve learned to work with it.)
And yet our discourse treats them as one thing. Raymond Williams, writing about what he called “keywords,” observed that certain words carry unresolved tensions precisely because different groups use them to mean fundamentally different things without realizing it. “AI” is a keyword in exactly this sense. Which means that when someone says AI is transforming education, and someone else says AI is producing misinformation at scale, and a third person says AI is going to replace libraries, they are often not talking about the same phenomenon at all. The conversation generates heat without light because we’re using a single word to point at a dozen different things.
The reading analogy is useful here because we actually worked this out with literacy. We distinguish between reading and reading critically, between reading for pleasure and reading for research, between being able to decode text and being able to evaluate an argument. A first-year writing course and a doctoral seminar both involve reading, but nobody confuses them. We built vocabulary and practices for the distinctions because we needed to teach the skills, and we needed to evaluate whether people had them.
We don’t have that vocabulary for AI yet. And that absence can do real damage. Without precise vocabulary, it’s hard to talk about AI literacy in any meaningful way, because we haven’t agreed on what the relevant skills are. It means we can’t evaluate institutional AI practices, because we’re not being precise about which practices we’re examining. It means we can’t have a useful policy conversation, because the thing being regulated keeps shifting shape. Bowker and Star, in their work on classification, argued that collapsing categories doesn’t just muddy language. It does real epistemic and political work, obscuring accountability and making certain questions harder to ask. Classifying all AI use as equivalent, for example, makes it harder to hold vendors or institutions accountable for what a particular system actually does. That’s what’s happening here.
This isn’t to say the work isn’t happening. Librarians and educators at many institutions are actively developing thoughtful AI literacy frameworks. But the frameworks vary considerably in scope, in assumptions, and in what skills they prioritize. This is, itself, evidence of the problem. We haven’t yet agreed on what we’re teaching because we haven’t yet agreed on what we’re talking about.
Libraries have always been in the business of literacy in the expansive sense: not just decoding text, but developing the critical practices that allow communities to engage meaningfully with information. That work is urgently needed here. Not “AI literacy as a single thing to be achieved,” but AI literacies as a differentiated set of practices: knowing which tool does what, understanding what accountability looks like in different contexts, recognizing when fluency is masking the absence of provenance.
Before we can teach any of that, we need to stop talking about AI as though it’s one thing, and be clearer about what we’re describing.
This is a post in an ongoing project exploring libraries, knowledge, and the epistemic stakes of artificial intelligence. I’m drawing on social epistemology, feminist theory, and two decades of practice in academic libraries.