AI, Writing, and Work

The Fluency Illusion

January 14, 2026 · Part 2 of a series on cognitive outsourcing and AI

In my previous post, I argued that the Overthink hosts treat cognitive outsourcing to AI as obviously problematic without asking a crucial question: does a tool need to understand in order to be a legitimate part of our thinking? I pointed out that we already outsource cognitive work to maps, books, and notation systems—external structures that don't understand anything—and we don't call this "cognitive debt."

But I was being slightly unfair. There is something different about AI-generated text, and it's not what most people think it is. The difference isn't that AI doesn't understand. The difference is that AI text is designed to feel like it understands—and that feeling is not a reliable guide to whether we should trust it.

What Fluency Does to Us

Psychologists have a name for this: processing fluency. When we read something that parses easily—that flows, that doesn't require us to backtrack or puzzle over syntax—we experience a kind of cognitive comfort. And that comfort bleeds into our judgments about the content itself.

Rolf Reber, Norbert Schwarz, and Piotr Winkielman put it bluntly: "Processing fluency—the ease with which information can be processed—has been shown to positively influence a wide range of judgments, including truth, confidence, fame, frequency, and even liking."

Read that list again. Fluency increases perceived truth. Not because fluent statements are more likely to be true, but because our minds use ease-of-processing as a heuristic for reliability. When something feels easy to read, it feels right.

The Slide from Interpretability to Understanding

This is where AI-generated text becomes genuinely different from a map or a book. A map doesn't try to sound like it knows what it's talking about. It presents spatial relationships in a format optimized for quick reference, not for rhetorical persuasion. A book's authority depends on the reputation of its author and the quality of its arguments—you evaluate the claims, not just the feeling of reading them.

AI text, by contrast, is optimized for fluency. Language models learn to predict what word comes next, and the predictions that get rewarded are the ones that match the patterns of competent human writing. The result is text that sounds like it was written by someone who knows what they're talking about—confident, well-organized, terminologically precise—even when the underlying "reasoning" is pattern-matching over statistical regularities.
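
To make that concrete, here is a deliberately crude sketch: a bigram model that generates word sequences from nothing but co-occurrence statistics. This is an illustration, not a description of how any real system is built (actual language models are neural networks trained on vast corpora, and the toy corpus here is invented), but the core move is the same: predict the next token from what came before.

```python
# A toy next-word predictor: a bigram model built from raw counts.
# Illustrative only -- real language models are far more sophisticated,
# but the training objective is the same kind of thing: predict the
# next token given the preceding context.
import random
from collections import defaultdict

# An invented miniature corpus, just for demonstration.
corpus = (
    "processing fluency shapes judgment because fluent text feels "
    "true and fluent text feels clear and clear text feels true"
).split()

# Record which words follow which: the model's entire "knowledge".
following = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev].append(nxt)

def generate(start, length=12):
    """Emit a plausible-sounding sequence with no model of meaning."""
    word, out = start, [start]
    for _ in range(length):
        candidates = following.get(word)
        if not candidates:
            break
        # Sample the next word in proportion to how often it followed
        # the current one in the corpus -- pure statistical regularity.
        word = random.choice(candidates)
        out.append(word)
    return " ".join(out)

print(generate("fluent"))
# e.g. "fluent text feels clear and fluent text feels true ..."
```

Nothing in this program represents meaning, yet its output can scan as natural language. Scale that up by many orders of magnitude and the output doesn't just scan; it flows. The fluency is real. What the fluency signals is the open question.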

This creates a distinctive epistemic hazard. When we read AI-generated prose, we experience the fluency that normally signals expertise. We feel like we're being guided by a competent mind. But fluency is not truth-tracking. The smooth surface can cover a void.

Why This Matters for Cognitive Outsourcing

The Overthink hosts worry about "cognitive debt"—the idea that when we let AI do our thinking, we lose something. They're not wrong to worry. But the worry is misplaced if it focuses on whether AI understands. The real problem is that AI text can make us feel like we understand—even when we don't.

Consider what happens when you ask ChatGPT to explain a complex topic. You get a response that flows well, uses appropriate terminology, and organizes information into digestible chunks. It feels clarifying. But do you actually understand the topic better? Or do you just feel like you understand it better?

Psychologists Leon Rozenblit and Frank Keil identified a phenomenon they call the "illusion of explanatory depth." People routinely overestimate how well they understand complex systems—bicycles, toilets, economic policy—until they're asked to explain the mechanisms in detail. Dense, fluent prose can amplify this illusion: it supplies specific-sounding detail and tidy organization, creating the feeling that a coherent mechanism has been grasped, even when the reader couldn't reconstruct the logic if pressed.

This is the cognitive debt we should actually worry about: not that AI takes our cognitive work away, but that AI makes us think we've done cognitive work when we haven't. The fluency illusion isn't about deception. It's about miscalibrated confidence.

The Real Question

So here's a better way to frame the problem. The question isn't whether AI understands. The question is: under what conditions does interacting with AI text support genuine understanding, and under what conditions does it merely simulate the feeling of understanding?

This is a question about symbolic structure—about how language is organized and what that organization does to readers. It's a question the humanities are uniquely equipped to answer, if they bother to ask it.

In the next post, I'll examine what David Peña-Guzmán calls "cognitive debt" more carefully, and argue that the phrase obscures a crucial distinction: the difference between cognitive coupling that supports understanding and cognitive coupling that substitutes for it. The same tool can do either, depending on how it's used. The question isn't whether to outsource cognitive work to AI. The question is how to outsource it responsibly.