Thinking in Public, Again

When I started blogging, I was in library school. I continued to blog into my first librarian position and into my first managerial role. As I moved further into administration, I shifted away from rough-draft public thinking and toward more formal presentations and service work. For one thing, I was learning a new organizational culture and didn’t want to represent my team poorly. But I was also aware that the hierarchical asymmetry meant people within my organization might not feel they could easily push back, which meant I might not know if something landed badly. Given my interest in social epistemology and my understanding of how positionality shapes the feedback you receive, thinking in public felt in tension with my institutional role. The cost of that shift wasn’t just reputational caution: I lost touch with the writing practice that kept me engaged in the intellectual work.

And now that I’m back at Wake Forest, with my feet under me, this felt like the right context to try again. This is a smaller organization, one in which I can know everyone who works in the library, some of whom remember my earlier blogging practice. I know the University very well, across decades. I also think there is enough trust that people would tell me (or would tell someone who could tell me) if something I wrote made their lives harder.

I started this blog because I had accumulated a lot of thoughts about AI and librarianship: threads of ideas that I hadn’t yet woven together. I thought I could write about them publicly, hear responses, evolve my thinking, and find people interested in the same slice of the issue I can’t stop thinking about. If it went well, maybe the content would find its way to an article or a presentation.

I’ve been at it long enough now that I’m beginning to remember what blogging meant to me professionally. Ideas lead to more ideas. Each time I post something I find three more tangents I want to pin down. I’m reading less out of obligation and more because something might connect to what I’m already thinking about. Giving myself permission to write is also giving myself permission to think seriously about things that don’t have an obvious deliverable, and it’s a way to give the people I work with a window into that thinking if they want one. I’ve worked in enough organizations where I wanted to know more about how people in leadership were thinking through emerging issues to know that access to some of that interior process can be helpful.

So the practice is good for me whether or not it finds a reader, though I do have a reader in mind. This blog is my attempt to translate the critical AI literature for people with institutional power to act on it: whether that’s a librarian working directly with a patron, or a dean, or a provost making decisions about how their institution engages with these tools. There are thoughtful people focused on the practical dimensions of AI use, and equally thoughtful people working on the theoretical and critical dimensions. I find myself most interested in bridging those two areas. The critical literature exists, the practitioner literature exists, and there’s a space between them, and that’s where I’m trying to work. And that’s shown up across posts, whether it’s exploring how AI’s authoritative tone obscures what it returns, or how a broad view of information literacy points the way towards what AI literacy may be.

Most people doing this work have never needed the word “epistemic” to do librarianship well, and that’s exactly the point. The framework isn’t new vocabulary for new work. It’s a way of seeing what was already there. That’s what’s happening here: naming what librarians have always known how to do, so that knowledge doesn’t get lost in the moment that most needs it.

The Prior Knowledge Problem

AI doesn’t create the knowledge gap, but it has made it harder to ignore. The people most likely to turn to AI for information are often the same people least equipped to evaluate what it returns. This isn’t a literacy problem that better prompting skills can solve. It’s an epistemic inequality problem, and AI has made it newly urgent. As a librarian, I keep coming back to the fact that we’ve known how to address this kind of problem before. In fact, we built a whole profession around it.

My own reflective AI use looks something like this: before opening a platform, I understand the project well enough to know what I actually need. I try my best to stay current enough to know the landscape of available tools, their relative strengths, and the privacy implications of each. That means I can match tool to task, because I already understand the task deeply.

During the interaction I bring sources, theorists, and a developing argument to the conversation. I recognize when the AI is being sycophantic, I push back on what I am given, and I know what a reasonable critic of my own idea would say, so I ask for that too. I’m not asking for new information, but rather I’m testing the borders of my own ideas and thinking with a tool.

After an exchange, I read the output against what I already know. I remind myself to spot confident wrong answers. I know the norms and conventions of wherever the thing I’m working on is going to land, and I adapt accordingly.

Everything in that description (the project clarity, the prior sources, the ability to recognize flattery, the evaluative judgment) is something I brought to the task before I opened the platform. That’s not AI literacy; it’s the type of knowledge a person spends a career building.

Now ask who else brings that type of approach to an AI interaction. Probably not the first-generation college student writing a paper on an unfamiliar topic, or the person navigating a health diagnosis without medical training, or the employee asked to use AI tools they’ve had no preparation to use. These are precisely the people most likely to turn to AI for help, and the least equipped to evaluate what comes back.

This isn’t about intelligence or effort. It’s about prior knowledge, and prior knowledge is not evenly distributed. It accumulates through education, professional experience, and access to institutions that build it deliberately over time. The people who have the most of it are also the people who need AI the least.

But what is most challenging at this moment is that AI doesn’t present itself as uncertain. It doesn’t say “I’m not sure about this” or “you might want to check with a specialist.” It answers confidently, with authority. A reader without the prior knowledge to push back has virtually no indication that anything could be wrong. Prompting skills can’t close this gap. That person needs more access to knowledge, to tools, and to an understanding of how their information landscape is changing.

This is not a new problem. Unequal access to information, unequal ability to evaluate it, unequal understanding of how knowledge is produced and organized are all problems that we have understood for a long time. They’re why we have public libraries, school librarians, academic research support, and the entire infrastructure of information literacy instruction.

We built institutions and a profession around the idea that people shouldn’t have to navigate complex information landscapes alone. That expertise (knowing the tools, knowing the landscape, knowing how to match need to resource, and knowing how to evaluate what comes back) is what librarians do and have always done.

I know this is not simple. Most librarians I talk with are already at or past capacity, and “just do more” isn’t an answer. But I don’t think this is a moment of something new being added on top. Since I came into this field I’ve watched our users move increasingly online, and that shift has always shaped what services we develop, what approaches we prioritize, and what we let go. It has never meant everyone wholesale adopting an entirely new domain overnight. It has meant paying attention to where people are, picking up adjacent skills incrementally, and slowly integrating them until they become just one more tool in our toolkit for working with our community.

And here we are again. The landscape is shifting in ways that directly implicate what we already know how to do. And as William Gibson observed, “the future is already here — it’s just not evenly distributed.” There are librarians already deep in this work, developing new competencies and integrating AI into how they serve their communities. There are others paying cautious attention. Most of us are somewhere along that spectrum. And that is how a profession evolves. The question isn’t whether this is our job. The question is how we develop into it, the same way we always have.


This is a post in an ongoing project exploring libraries, knowledge, and the epistemic stakes of artificial intelligence. I’m drawing on social epistemology, feminist theory, and two decades of practice in academic libraries.

The Knowledge We’ve Always Built

There’s something genuinely strange about this moment for libraries. The tools that seem most likely to make us obsolete are also the ones that most clearly reveal what we were doing all along. More information is available than ever before, synthesized, immediate, apparently authoritative. And yet the questions that matter most are only getting harder to answer. What’s trustworthy? How do you know? Who decides? Libraries have been working on those questions for a long time. We just didn’t always have to say so out loud.

Information vs. Knowledge

When I hear excitement about AI, it’s almost always about access to information. And access to information is genuinely useful. But having spent a lot of time with the data-information-knowledge-wisdom framework[1], I’m aware that information and knowledge aren’t the same thing. Moving from one to the other requires context: understanding the nuance of what you’re seeing, where it came from, and how it fits into what you already know.

Libraries are centered around exactly that work. We pay attention to publishers, to trends in a literature, to publication types. We help students understand why a publication date matters, whether a study is quantitative or qualitative, how to evaluate whether a source actually supports the argument they’re building.

Outcomes, not metrics

Libraries have a long history of understanding the need to demonstrate their value. One place we turn is the metrics and statistics we can share with stakeholders to show that the community benefits from its library. We count the number of items in our collection, the number of people through the door, the number of reference consultations we provide, and the number of classes we teach. Those numbers were the right answer to the questions we were being asked, but AI is changing the questions.

AI clarifies this for us. When information is available from anywhere, talking about access becomes less useful. We have to say something truer about what we actually do, and that means recovering language for work we’ve been doing all along.

Libraries were never only about access. They were about the social infrastructure of knowledge: the systems through which communities come to know things together, evaluate what’s trustworthy, and preserve the conditions for doing that work well. Margaret Egan and Jesse Shera understood this in the 1950s[2]. The ACRL Framework for Information Literacy, with its insistence that “authority is constructed and contextual,” is evidence that the profession has been moving toward that understanding for years. AI didn’t create this argument. It just made it impossible to avoid.

This is where the library’s expertise becomes irreplaceable, and exactly the area that is at risk when people accept AI outputs without understanding what’s missing. Librarians learn context not as background information but as the substance of the work. Understanding how knowledge is produced (the research process, peer review, publication venues, the difference between a preprint and a published study) is what makes it possible to build collections worth preserving, and to help students and faculty find not just information but validated knowledge they can actually build on.

Load-bearing shoulders

In trying to find a theory or framework to describe the importance of scaffolding knowledge and the expertise librarians bring to this work, I keep being drawn to constructivism. These days it strikes me as a useful framing for how scholarship itself works.

Constructivism holds that knowledge builds on existing knowledge, and research articles are grounded in literature reviews, citations, and peer review. It describes knowledge as socially constructed through dialogue, which happens in research through what the ACRL Framework calls “scholarship as conversation.” It requires authentic context, which is exactly what AI strips out. And it expects active engagement with ideas, not passive receipt of synthesized outputs.

You can only stand on the shoulders of giants if someone has been paying attention to which shoulders are load-bearing. And there is an entire profession doing exactly that work.

In all the AI discourse I continue to think about what it means for librarianship. I know that we will always be in the business of access to information. But I can’t help believing we’ll shift towards centering knowledge in the future, and I am thinking about what that might mean for the work. I’m curious what you see from your position in the field.

  1. Ackoff, Russell (1989). “From Data to Wisdom.” Journal of Applied Systems Analysis. 16: 3–9. ↩︎
  2. Egan, Margaret E. and Shera, Jesse H. (1952). “Foundations of a Theory of Bibliography.” The Library Quarterly. 22.2: 125–137. ↩︎

This is a post in an ongoing project exploring libraries, knowledge, and the epistemic stakes of artificial intelligence. I’m drawing on social epistemology, feminist theory, and two decades of practice in academic libraries.

The Middle of the AI Conversation Is Where the Work Is

There are two camps in most AI conversations, and by now you probably know which one is yours. In one camp: the enthusiasts, the builders, the people who are busy using the technology so regularly that they barely have time to read and reflect on the critiques. In the other: the scholars and critics who have built real intellectual credibility on careful distance from hype, for whom deep hands-on engagement with the tools has become something like an ethical compromise, a sign of capture, naivety, or insufficient rigor.

This divide isn’t only a values disagreement: it’s also, and maybe more importantly, a structural one. Both camps are operating under real social and professional constraints that make crossing over genuinely costly. The enthusiasts aren’t ignoring the critique literature because they’re incurious. They’re busy building with and testing the tools. The critics aren’t avoiding the technology because they’re afraid of what they’d find. Spending time experimenting with the tools risks their credibility with the people whose respect they’ve earned. Neither camp is wrong about their own situation. The incentive structures just happen to produce a conversation where the people with the most exposure to the technology’s actual affordances aren’t positioned to think carefully about what’s at stake, and the people who are positioned to do that thinking aren’t getting sustained hands-on experience. Both kinds of knowledge exist. They’re just not in the same room or social media threads.

The discourse looks different face to face. Online, the two camps dominate. In actual conversation I find far more people who are genuinely uncertain, watching, and waiting. They’re quieter, because holding the question open doesn’t perform as well as certainty in most public discourse. But those waiting and watching are having experiences that shape their perception of these tools. What this middle group needs isn’t more evidence: they’ve seen the studies and the horror stories. What they need is language for the tension they’re already living. They are looking for a framework that doesn’t ask them to resolve the complexity before they’re ready, and one that lets them continue to learn from the middle.

Libraries can’t afford to pick a side, though it seems that many have. Some rushed toward enthusiasm, repositioning themselves as AI integration hubs before anyone had language for what was actually changing. Others have planted flags in critique, which is intellectually serious but leaves them unable to advise the faculty member who just needs to know what to do with their upcoming research assignment.

I have conversations regularly with people who ask me, without malice, whether libraries will exist in a world with AI. It’s a real question and it deserves a considered answer. But it’s only possible to answer it from the middle because the answer requires understanding what the technology actually does and doesn’t do, and being able to name what’s genuinely at stake when information ecosystems shift. Neither camp alone gets you there.

When you’re paying attention from the right position, the gaps become visible: the things AI doesn’t do, or can’t do, or shouldn’t do, or that a person still needs to do. Much of my career has been based on finding gaps and doing the work inside them. That instinct isn’t a survival strategy. It’s just what happens when you’re paying close enough attention. And we’ll all need to take that approach with AI.


This is a post in an ongoing project exploring libraries, knowledge, and the epistemic stakes of artificial intelligence. I’m drawing on social epistemology, feminist theory, and two decades of practice in academic libraries.

When ‘Probably’ Means Nothing

When I moved to the Pacific Northwest I was surprised by how often people volunteered that they loved the Southern word “y’all.” It’s a great inclusive way to call a group together or refer to a team. I love it, too. But my favorite Southern phrase is “might could.” It’s double-hedged, which may appear redundant or imprecise, but it’s actually the opposite. It’s a finely calibrated expression of qualified possibility that a single modal can’t quite capture. “Could” alone is too open. “Might” alone is too tentative. “Might could” lands somewhere specific that neither word reaches on its own. It’s also situated. You know something about the speaker when they say it. It carries place, community, a whole set of social relations. Which is exactly what Haraway is talking about in situated knowledges.

Hedging language can be perceived as negative, or as an indication that the speaker isn’t confident. But in academic circles it is often interpreted as a signal of epistemic humility, a recognition that the subject is complex enough that a bit of hedging is needed to remain accurate. When a scientist says “probably,” a doctor says “likely,” or a colleague says “I’m fairly certain,” those words are doing real epistemic work: communicating a speaker’s actual relationship to uncertainty, calibrated by experience, context, and stakes. It’s worth reflecting on what is lost if these turns of phrase are stripped of their nuance.

When I read “‘Probably’ Doesn’t Mean the Same Thing to Your AI as it Does to You,” I was struck by the evidence that our LLMs may not use hedging language the way we do. LLMs use words like “probably,” “likely,” and “almost certain” inconsistently, averaging over conflicting usages in training data rather than assessing actual odds. The article also points to an interesting intersection with gender studies, showing that the same hedge word was mapped to different probabilities depending on whether the prompt said “he” or “she.”
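To make that kind of finding concrete, here is a minimal sketch of how one might probe a model’s numeric reading of hedge words. It is an illustration under stated assumptions, not the article’s actual protocol: ask_model is a hypothetical stand-in for whatever chat API you have access to, and the hedge list and prompt template are my own.

```python
import random

# Hedge words to probe, and the pronouns to swap per the gendered-usage finding.
HEDGES = ["probably", "likely", "almost certainly", "very possibly"]
PRONOUNS = ["he", "she"]

def ask_model(prompt: str) -> str:
    # Hypothetical placeholder: swap in a real chat-API call here.
    # The random reply just lets the harness run end to end.
    return str(random.randint(0, 100))

def probe(hedge: str, pronoun: str, trials: int = 20) -> list[float]:
    """Repeatedly ask for a numeric reading of one hedge word."""
    prompt = (
        f"A doctor says that {pronoun} will {hedge} recover. "
        "As a percentage from 0 to 100, what probability of recovery "
        "does that statement express? Reply with only the number."
    )
    estimates = []
    for _ in range(trials):
        reply = ask_model(prompt).strip().rstrip("%")
        try:
            estimates.append(float(reply))
        except ValueError:
            pass  # skip replies that aren't a bare number
    return estimates

# Inconsistency shows up as spread within one cell; a gendered effect
# shows up as a gap between the "he" and "she" means for the same hedge.
for hedge in HEDGES:
    for pronoun in PRONOUNS:
        xs = probe(hedge, pronoun)
        if xs:
            mean = sum(xs) / len(xs)
            spread = max(xs) - min(xs)
            print(f"{hedge!r} / {pronoun}: mean={mean:.0f}%, spread={spread:.0f}")
```

A calibrated speaker would map each hedge to a narrow, stable range regardless of pronoun; the article’s claim is that models do neither.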

This is a really specific kind of epistemic failure, and an interesting one! Hedging language is how knowledge communities signal the limits of what they know. Strip that calibration out and you get fluency that performs humility while enacting the view from nowhere. This is Haraway’s god trick at the lexical level. We’re moving beyond the synthesis of sources and into individual word choices.

We’ve all seen use cases in which AI is increasingly being used to summarize research, brief decision-makers, and mediate information. We’re also all aware of the conflicting views about how good that information actually is. For now, at least, it seems we may also have to consider the word choices themselves. When the methods we have to convey certainty lose their clarity, we may find ourselves overconfident in our interpretation of words, only to find we’ve made decisions without the information we assumed was supporting our path. Things appear as they always have, but the world has shifted around us. We read “probably” and think we know how confident to be, but the word has already lost its weight.


This is a post in an ongoing project exploring libraries, knowledge, and the epistemic stakes of artificial intelligence. I’m drawing on social epistemology, feminist theory, and two decades of practice in academic libraries.

The Categorical Collapse of AI

When someone says they’ve been “reading,” you don’t actually know what they’ve been doing. They might have spent a week with a dense classic novel. They might have scrolled through their phone for twenty minutes. Both are reading in a technical sense: their eyes move across text and they process the words they see. However, the cognitive activities involved are so different that calling them by the same name obscures more than it reveals. One develops the capacity for sustained attention, enables the reader to enter a fictional world, and requires tracking complex characters across hundreds of pages. The other is closer to foraging. It may surface interesting and relevant information, but the cognitive work is different. Walter Ong would say these aren’t even the same species of activity. His writing argued that different communication technologies don’t just change how we do something but produce fundamentally different kinds of cognitive events.

We have this problem with AI, and it’s worse. “Using AI” currently describes a number of different activities. You may ask a chatbot what to make for dinner, draft a briefing document, generate data for a research study, let a recommendation algorithm find you a movie, vibe code, or build a tutoring system that adapts to individual learners. These are not variations on a single activity. They involve different tools with genuinely different capabilities, different cognitive demands, different stakes, and different relationships to truth and accountability. And they don’t collapse neatly into a skill hierarchy. (We’re also all aware that some things AI does badly regardless of how well you’ve learned to work with it.)

And yet our discourse treats them as one thing. Raymond Williams, writing about what he called “keywords,” observed that certain words carry unresolved tensions precisely because different groups use them to mean fundamentally different things without realizing it. “AI” is a keyword in exactly this sense. Which means that when someone says AI is transforming education, and someone else says AI is producing misinformation at scale, and a third person says AI is going to replace libraries, they are often not talking about the same phenomenon at all. The conversation generates heat without light because we’re using a single word to point at a dozen different things.

The reading analogy is useful here because we actually worked this out with literacy. We distinguish between reading and reading critically, between reading for pleasure and reading for research, between being able to decode text and being able to evaluate an argument. A first-year writing course and a doctoral seminar both involve reading, but nobody confuses them. We built vocabulary and practices for the distinctions because we needed to teach the skills, and we needed to evaluate whether people had them.

We don’t have that vocabulary for AI yet. And the absence does real damage. The lack of precise vocabulary makes it hard to even talk about AI literacy in any meaningful way, because we haven’t agreed on what the relevant skills are. It means we can’t evaluate institutional AI practices, because we’re not being precise about which practices we’re examining. It means we can’t have a useful policy conversation, because the thing being regulated keeps shifting shape. Bowker and Star, in their work on classification, argued that collapsing categories doesn’t just muddy language. It does real epistemic and political work, obscuring accountability and making certain questions harder to ask. For example, classifying all AI use as equivalent makes it harder to hold vendors or institutions accountable. That’s what’s happening here.

This isn’t to say the work isn’t happening. Librarians and educators at many institutions are actively developing thoughtful AI literacy frameworks. But the frameworks vary considerably in scope, in assumptions, and in what skills they prioritize. That variation is, itself, evidence of the problem. We haven’t yet agreed on what we’re teaching because we haven’t yet agreed on what we’re talking about.

Libraries have always been in the business of literacy in the expansive sense: not just decoding text, but developing the critical practices that allow communities to engage meaningfully with information. That work is urgently needed here. Not “AI literacy as a single thing to be achieved,” but AI literacies as a differentiated set of practices: knowing which tool does what, understanding what accountability looks like in different contexts, recognizing when fluency is masking the absence of provenance.

Before we can teach any of that, we need to stop talking about AI as though it’s one thing, and be clearer about what we’re describing.


This is a post in an ongoing project exploring libraries, knowledge, and the epistemic stakes of artificial intelligence. I’m drawing on social epistemology, feminist theory, and two decades of practice in academic libraries.

The Obsolescence Argument Has It Backwards

Everyone seems to agree that artificial intelligence is going to change education, research, and libraries. The disagreement is about direction. The dominant narrative, at least in some technology circles, is that AI can find information, synthesize sources, and answer questions. It’s no surprise that people hearing that argument in the media and from tech commentators point out that libraries and librarians do those things, and then assume that libraries are in trouble.

But to anyone who sits at the intersection of technology and libraries, it’s abundantly clear that AI doesn’t make libraries obsolete; it makes them more essential.


I’ve been thinking about knowledge systems for a long time. My undergraduate degrees were in philosophy and in communication, with a minor in Women’s and Gender Studies, and the questions that animated these fields were the same ones: Who knows? Under what conditions? With what authority, and on whose behalf? Those questions led me to library science, and they’ve shaped how I’ve understood this work ever since.

Two frameworks have always been particularly generative for me. The first is social epistemology, a term developed by Jesse Shera and Margaret Egan in the mid-twentieth century, which understands libraries not as warehouses of information but as infrastructure for how communities produce and share knowledge. Libraries, in this view, are epistemic institutions. They don’t just store what we know; they shape the conditions under which knowing is possible. (Incidentally, social epistemology also developed within philosophy, in a somewhat different form, a few decades later.)

The second is feminist epistemology, particularly Donna Haraway’s concept of situated knowledges. Haraway’s argument, made in a landmark 1988 essay, is that all knowledge is produced from somewhere: from a particular body, a particular history, a particular set of social relations. Claims to view-from-nowhere objectivity, what she calls the “god trick,” are not neutral. They are themselves a kind of power move, one that erases the conditions of knowledge production and forecloses accountability. Sandra Harding’s standpoint theory extends this: knowledge produced from the margins, from positions of accountability rather than dominance, is often more comprehensive, not less, because it cannot afford to ignore what the center takes for granted.

These frameworks were developed to critique science. But you can see why I keep coming back to them today.


Large language models perform exactly the god trick Haraway identified. They synthesize at scale without provenance. They produce authoritative-sounding outputs whose origins are opaque, whose training data encodes historical power imbalances, and whose confident tone actively discourages the epistemic humility that good inquiry requires. They are, in Harding’s terms, knowledge produced from nowhere. And this means they are making claims from a position that cannot be held accountable.

This is not primarily a technical problem. It is an epistemic one. And it is precisely the problem that libraries, at their best, are structured to address.

Libraries curate situated knowledge. They preserve provenance. They maintain the bibliographic infrastructure that allows a reader to ask: who said this, when, from what position, in conversation with whom? They select, describe, and organize materials in ways that make the conditions of knowledge production visible rather than erasing them. They employ people (librarians!) whose professional expertise is not only finding information but teaching the critical practices that allow communities to evaluate it.

None of that is replicable by a system that has been specifically designed to flatten those distinctions into fluent prose.


I’m not arguing that AI is useless, or that libraries should resist it, or that the landscape isn’t changing. It is changing, and libraries need to engage with that change thoughtfully and without too much nostalgia. What I am arguing against is the idea that AI supersedes libraries. When someone asks whether AI makes libraries obsolete, the questioner implicitly accepts a definition of libraries as information retrieval systems. That is a definition that was always reductive and is now actively misleading. Libraries are epistemic infrastructure. They are, in Shera and Egan’s terms, the social mechanisms through which communities organize their relationship to knowledge.

AI doesn’t replace that. It creates new urgency for it.

The more our information environment is shaped by systems that perform objectivity while encoding power, the more we need institutions committed to making those dynamics visible. As synthetic text becomes more fluent and authoritative, it becomes more important for people to maintain the skills of citation, provenance, critical evaluation, and the slow work of understanding where knowledge comes from. These are the skills that libraries cultivate.

The obsolescence argument has it exactly backwards. This is the moment libraries were built for.


This is the first post in an ongoing project exploring libraries, knowledge, and the epistemic stakes of artificial intelligence. I’m drawing on social epistemology, feminist theory, and two decades of practice in academic libraries.

Before we begin

Years ago I kept a blog (at this URL, even!) where I thought out loud about libraries, knowledge, and the profession I’d built my career around. I was good at it for a while, and then I wasn’t, and then I stopped for all the usual reasons: a changing life phase, less personal time to spend on it, an increasingly demanding institutional role, the way the platforms evolved away from places of earnest and open discussion… I drifted so far from blogging and this website that when a backup didn’t capture all the files, I wasn’t even all that disappointed.

But lately I’ve really missed thinking in public with other colleagues interested in exploring the same ideas. And lately I’ve been thinking a lot about academic libraries, our information environment, and the ways we talk about and use artificial intelligence.

AI is reshaping how people find, evaluate, and trust information. Within libraries we have people all across the spectrum: from those who fully embrace it to those who believe it has no place near our work. One of the dominant narratives outside of the profession treats libraries as information retrieval systems and concludes that AI makes them redundant. This framing mistakes the symptom for the disease. Libraries are epistemic infrastructure. They are the mechanisms through which communities organize their relationship to knowledge. AI doesn’t replace that, but it does make that role all the more urgent.

This lens keeps coming up for me in conversations in varied spheres. Jesse Shera and Margaret Egan’s social epistemology, which understands libraries not as warehouses but as institutions that shape the conditions under which knowing is possible, is foundational to how I think about this work. So is feminist epistemology, particularly Donna Haraway’s concept of situated knowledges and Sandra Harding’s standpoint theory. These frameworks were built to interrogate science. But it turns out that they are extremely useful when interrogating AI as well.

I’m writing as a person who has spent two decades in academic libraries and who has been thinking about knowledge, power, and institutions since an undergraduate philosophy degree made those questions unavoidable. At this URL, I am not writing as an institutional voice. This is a thinking space. I’m hoping that arguments will develop, get complicated, and occasionally get revised. I expect to adapt to new information.

What follows this post is the first real argument: why the obsolescence narrative has it backwards, and what a clearer account of libraries and knowledge reveals about the epistemic stakes of this moment.

I’m still trying to understand where people talk about these things today. In some ways everything was a lot cleaner when the answer was a blog with open comments, an RSS reader, and Twitter. The messiness of our knowledge environment today (LinkedIn? Bluesky? Mastodon? Substack? Chat threads? Everywhere?) resonates with the messiness of the information ecosystem I’m trying to write about.