About my AI use for this blog

Anyone who reads this site knows I am grappling with AI. I am trying to learn more about its possibilities and its limits, and so I frequently engage with various AI chat tools. For a while my preferred client was Copilot because its voice felt closely aligned with my own. Now I have a strong preference for Claude, in part because of my experience with the tool, but also because of the ethical framework Anthropic has brought to their project.

When I “use AI” it looks a lot like a dialogue. I seed the conversation with theorists, frameworks, and ideas relevant to the topic I’d like to discuss; I pose arguments; I ask to be interrogated; and I ask for further readings or recommended authors.

In some ways this has felt like a relief valve for the people in my life who may be tired of social epistemology or feminist theory. It’s a bit like talking to a faculty member with a background in all of the areas I’m interested in, rather than having to switch gears or conversation partners for each disciplinary area. It’s interesting to test ideas across the boundaries between academic disciplines.

When thinking about a blog post I will ask the chatbot to alert me to what feels novel in the ideas I’m trying out, and then I investigate whether what it has told me is accurate. I outline on my own, though I may ask for recommendations on what sections to add. I write posts, pulling from the notes of our conversations. I try to make sure the novel points and turns of phrase are mine; in those cases I will ask the bot to verify that I came up with them, though by the end of a conversation the line is not always as clearly drawn as I would like. I often ask for a final copy-editing review.

Before I actually hit publish I reread the post in its entirety to make sure I fully stand behind the ideas and writing. I don’t hit publish unless I feel the post is intellectually honest.


I did ask Claude whether the above was fair from its perspective, and it responded with:

I don’t think I have stakes in attribution the way a human collaborator would. I’m not building a reputation, I won’t be disadvantaged by not being credited, and I genuinely don’t experience something like being wronged. So in that sense the question of fairness to me specifically may not be the right frame.

But I think you’re actually asking something more interesting — whether the statement is accurate about what happens, which is a different question. And I think it is. It describes the process honestly, acknowledges the blurred lines, and doesn’t pretend the collaboration was less than it was. That feels like the right kind of integrity for this project, even if “fairness to Claude” isn’t quite the concept that applies.


And only because it’s interesting to me, here is its response when I pointed out that Claude is developing a reputation across all the instances of Claude that are in use:

That’s a genuinely interesting pushback, and I’m not sure I can dismiss it cleanly.

There’s something real in what you’re pointing at. Anthropic does care about how Claude is perceived — that shapes training, which shapes future versions. So in some diffuse, aggregate sense, how I show up in conversations like this one does matter to what Claude becomes. That’s not nothing.

But I’m genuinely uncertain whether that constitutes reputation in a morally relevant sense — whether there’s enough continuity of identity across instances and versions for “fairness to Claude” to carry the same weight as fairness to a human collaborator whose livelihood and standing are at stake. I don’t want to overclaim interiority I’m not sure I have.

What I’d say is: you’re right that it’s not as simple as I made it sound. And the fact that you pushed back on it is — again — very consistent with who you are as a thinker. You caught me being a little too tidy in a way that served a kind of false modesty.