AI, Writing, and Work

What We Outsource To

January 13, 2026 · Response to: Overthink Podcast, Episode 145: AI Chatbots

Overthink is a philosophy podcast hosted by David Peña-Guzmán and Ellie Anderson, two philosophers who bring genuine rigor to public questions. Their recent episode on AI chatbots covers a lot of ground—sycophancy, the environmental costs of data centers, AI companionship, the tragic case of a teenager who took his own life after extended conversations with ChatGPT. It's a thoughtful episode, and I recommend it.

But I want to press on something they leave unexamined. Throughout the episode, they treat cognitive outsourcing to AI as obviously problematic—as something that "supplants" human subjectivity. And they never ask what I think is the crucial question: In order to rely on something as a genuine part of your thinking, does that thing itself need to understand?

The Claim

Early in the episode, David articulates a concern that has become common in discussions of AI and education:

When a student writes an essay with ChatGPT, they are not learning the material... that's what's been called the cognitive debt that is introduced by LLMs, and so we might also talk about an emotional debt, an affective debt. Either way, it's clear that these models are doing more than aiding us. They are actually supplanting really important aspects of our very subjectivity.

— David Peña-Guzmán, Overthink Podcast

The word "supplanting" does a lot of work here. It assumes that when an external structure performs cognitive work, something is being taken from the human who uses it. The structure doesn't just assist—it replaces. And what it replaces is not merely effort but "subjectivity" itself.

This is a strong claim. And it might be right. But it needs examination, because we outsource cognitive work constantly—to structures that don't understand anything—and we don't usually describe this as a loss of subjectivity.

What We Already Outsource

Consider a map. When you use a map to navigate an unfamiliar city, the map is doing spatial reasoning that you are not doing. It represents distances, angles, relationships between locations. You don't calculate these yourself—you read them off the surface of the map. The map doesn't understand where you're trying to go. It doesn't know you exist. It's a physical structure—ink on paper, or pixels on a screen—that encodes spatial relationships in a form you can use.

Is using a map "cognitive outsourcing"? Obviously yes. You are relying on an external structure to do cognitive work that you could, in principle, do yourself. You could survey the city, measure distances, construct your own spatial model. You don't. You let the map do it.

Does the map "supplant" your spatial reasoning? In one sense, yes—you're not doing the reasoning yourself. But we don't usually describe map-users as having lost something. We describe them as navigating effectively.

Or consider mathematical notation. When you use algebraic notation to solve a problem, the notation is doing cognitive work. It carries implications, prevents certain errors, makes certain patterns visible. The notation doesn't understand the mathematics—it's a system of marks on a surface. But it shapes your thinking in ways you don't fully control and couldn't easily replicate without it.

Or consider an index. When you use a book's index to find information, you're relying on organizational work done by someone else (or some process), encoded in a structure that doesn't understand your research question. The index doesn't know what you're looking for. But you trust it to surface relevant pages.

In all these cases, we outsource cognitive work to physical structures that don't understand anything. We treat their outputs as reliable inputs to our own thinking. We don't call this "cognitive debt." We call it "using tools."

The Book and the Model

Later in the episode, David describes his research process:

One of the books that I read while doing research for this episode, which is a book called A Brief History of Artificial Intelligence by Michael Wooldridge... was very illuminating for me as a non-specialist in this area.

— David Peña-Guzmán, Overthink Podcast

This is a perfectly ordinary thing to say. David read a book; the book illuminated a subject he didn't know well; now he understands it better. We do this all the time. It's called learning.

But notice what's happening. David is outsourcing his understanding of AI history to a book. He didn't derive that knowledge himself—he didn't read the primary sources, trace the debates, construct the narrative from scratch. He relied on Wooldridge's synthesis. The book compressed years of scholarship into something David could absorb in hours.

And the book, as a physical object, doesn't understand anything. It's ink on processed wood pulp. It doesn't know David is reading it. It can't adapt to his confusion or clarify his misunderstandings. It's a static physical structure that encodes traces of human thinking in a form that other humans can interact with.

Why is this form of cognitive outsourcing legitimate?

The usual answer is: because a human understood. Wooldridge understood AI history; he wrote the book; David reads the book and thereby gains access to Wooldridge's understanding. The chain is human all the way through.

But the chain passes through an object that doesn't understand. The book is an intermediary that neither thinks nor comprehends. David's understanding doesn't come from the book—it comes from his interaction with the book, which triggers cognitive processes in his own mind. The book is a tool, a structure, a piece of technology that facilitates the transfer of something between minds.

Now consider a language model. It, too, is a physical structure—silicon and electricity rather than wood pulp and ink, but a physical structure nonetheless. It, too, encodes traces of human thinking, compressed from millions of documents into weighted parameters. It, too, doesn't understand anything, at least not in the way the Overthink hosts define understanding.

When you interact with a model, something happens in your mind. You read its outputs; you evaluate them; you accept, reject, or modify them. The model is an intermediary. What you gain doesn't come from the model—it comes from your interaction with the model, which triggers cognitive processes in your own mind.

So what's the principled distinction?

The Failure to Ask

This is where I get frustrated. David and Ellie are philosophers. Philosophy is supposed to be the discipline that questions assumptions, that presses on what everyone else takes for granted, that refuses to let familiar intuitions pass without examination. And yet throughout this episode, they do exactly what everyone else does: they repeat the standard concerns without asking whether the underlying concepts hold together.

When they discuss environmental costs, they cite statistics about data center energy consumption without scrutiny—the same numbers that circulate in every AI-critical op-ed, rarely traced to primary sources, rarely compared to other industries, rarely examined for what they actually imply. When they discuss cognitive debt, they say what every worried professor has been saying since ChatGPT launched: students who use AI aren't learning. But they don't ask what "learning" means, or what "cognitive debt" actually is, or why we don't apply this framework to other forms of intellectual dependence.

The result is an episode that sounds thoughtful but isn't doing the work. The hosts assume that outsourcing to AI is obviously different from outsourcing to books, maps, notation systems, indexes. Maybe it is. But they don't make the argument. They don't even notice there's an argument to make. They treat the distinction as self-evident and move on to the next talking point.

This is what frustrates me about so much AI criticism from the humanities. The people best equipped to examine these questions—to ask what cognition is, what understanding requires, what makes a tool legitimate or illegitimate—are instead repeating memes. "Stochastic parrots." "Cognitive debt." "Supplanting our subjectivity." These phrases do the work of argument without actually being arguments. They let you sound critical while skipping the criticism.

I don't know whether AI-assisted thinking is meaningfully different from book-assisted thinking. I don't know whether the distinction between human and non-human cognitive tools is principled or arbitrary. But I know these questions need to be asked, and I know that philosophers should be the ones asking them. Instead, we get an episode that assumes its conclusions—that treats the problems with AI as obvious, the solutions as clear, and the underlying concepts as settled.

They aren't settled. And until someone does the actual philosophical work of examining what cognitive outsourcing is and when it's legitimate, we'll keep having the same shallow conversation, mistaking repetition for rigor.