AI, Writing, and Work

Thinking at a Higher Level

January 11, 2026 · Response to: Ted Chiang Q&A at Princeton CDH and Steve Yegge's Vibe Coding Manifesto

Ted Chiang has become one of the most respected voices warning against AI in creative and intellectual work. In a March 2025 Q&A at Princeton's Center for Digital Humanities, Chiang offered direct guidance to students: essay writing serves as cognitive training; using ChatGPT "undermines education's purpose by enabling students to avoid the mental exertion necessary for intellectual growth." The argument is elegant: writing is thinking, AI removes the writing, therefore AI removes the thinking. But what if that argument rests on a confusion about where thinking actually happens?

The Argument

Chiang's position represents a widely held view in literary and educational circles. Writing is not merely the transcription of thought—it is thought. The struggle to find the right word, to construct a sentence that captures what you mean, to revise and reconsider—this is where intellectual development occurs. When you use AI to generate text, you skip this struggle. You outsource the very process that was supposed to be training your mind.

This view treats the sentence as the fundamental unit of thought. Crafting sentences is cognitive work; delegating sentence-craft to machines means delegating cognition itself. It follows that students who use AI to write are not learning to think—they're learning to avoid thinking.

The Programmer's Parallel

In late December 2025, legendary programmer Steve Yegge sat down for a wide-ranging conversation about what he calls "vibe coding"—using AI agents to write code rather than typing it yourself. The parallels to Chiang's concerns about writing are immediate and obvious. For decades, the same arguments made about writing have been made about programming: the line of code is where thinking happens. The struggle to get syntax right, to debug logic errors, to refine implementations—this is how programmers develop expertise. Outsource the code and you outsource the thought.

Yegge has 45 years of programming experience. He has written operating systems in assembly language, built platforms at Google and Amazon, shipped games and developer tools. If anyone should be defending the cognitive value of writing code at the line level, it's him. Instead, he describes a transformation:

One of the biggest surprises from the dinner yesterday was how many people all had the experience where they no longer write single lines of code... they're really just kind of prompting.

— Steve Yegge, Latent Space podcast

But here's what's crucial: Yegge isn't describing the end of thinking. He's describing thinking moved up a level.

Where Thinking Goes

Throughout the conversation, Yegge describes cognitive work that sounds nothing like passivity. He talks about watching "the shape of the diffs and the color of the diffs" to assess whether AI-generated code needs review. He describes learning to predict when agents will fail, recognizing patterns of suspicious behavior ("writing suspiciously too much code for this problem"), knowing when to intervene and when to trust. He talks about orchestrating multiple agents simultaneously, catching errors that crop up across systems, maintaining architectural coherence when each piece was written by a different process.

The thinking hasn't disappeared. It has transformed. Where a programmer once asked "how do I implement this function?", they now ask "is this the right approach? Does this solution fit our architecture? What are the security implications? Where might this fail at scale?" The junior engineer Yegge describes, after watching two PhD students work with AI, realizes that "engineer in a box is not too far off from knowing the right questions to ask."

Have you thought about scaling? Have you thought about security? How is your test coverage? I mean, engineers are going to all ask the same questions, right? And he realized that engineer in a box is not too far off from knowing the right questions to ask an LLM.

— Steve Yegge, Latent Space podcast

The questions are where the thinking lives. And asking good questions—the right questions, at the right time, about the right aspects of a problem—is not a lesser form of cognition. It may be a higher one.

The Sentence Fallacy

Chiang's argument assumes that thinking happens at the level of the sentence—that the cognitive work is in the word choice, the syntax, the local construction of meaning. But this is like arguing that a programmer's thinking happens at the level of the semicolon. Yes, there's thought in getting the syntax right. But the deeper thought—the thought that actually matters—happens at the level of architecture, design, system coherence.

Writers know this. The hardest part of writing isn't finding the right word; it's figuring out what you're actually trying to say. It's recognizing when your argument has a hole in it. It's noticing that your evidence doesn't support your claim. It's seeing that your third paragraph contradicts your seventh. These are structural problems, architectural problems—and they don't require you to personally type every sentence to engage with them.

When Yegge describes the skills required to work effectively with AI coding agents, he's describing genuine expertise:

You have to spend 2,000 hours with it. And that's not actually an exaggeration... Trust in this case specifically means before you as a user can predict what it's going to do.

— Steve Yegge, Latent Space podcast

Two thousand hours of learning to predict, evaluate, correct, redirect. Two thousand hours of building judgment. This is not the profile of someone avoiding mental exertion. This is the profile of someone developing a new kind of expertise—expertise that operates at a different level of abstraction than before, but expertise nonetheless.

Why This Matters

Chiang is worried about students who skip the struggle and thereby miss the learning. That's a legitimate concern. But the solution isn't to define thinking as sentence-level labor. The solution is to recognize that thinking can happen—must happen—at multiple levels, and that the higher levels matter more than the lower ones.

A student who uses AI to generate prose and then accepts it uncritically has learned nothing. But a student who uses AI to generate prose and then interrogates it—asks whether the argument holds, whether the evidence supports the claims, whether the structure serves the purpose—is doing the kind of thinking that actually matters. They're doing the work that Yegge describes: learning to predict, evaluate, redirect. Learning to ask the right questions.

Yegge's vibe coders aren't writing individual lines of code, but they understand code at a deeper level than many programmers who do. They have to—because they need to evaluate output, catch errors, maintain coherence across systems. The thinking moved up a level. It didn't disappear.

The same can be true for writing. The thinking doesn't have to live in the sentence. It can live in the argument, the structure, the evaluation. Students who learn to do that kind of thinking—to ask whether claims are supported, whether logic holds, whether structure serves purpose—are developing exactly the intellectual capabilities that Chiang says education should provide.

The danger isn't that AI removes thinking from writing. The danger is that we might define thinking so narrowly that we fail to recognize it when it moves.