AI, Writing, and Work

The Agency Paradox

January 8, 2026 · Response to: Resisting AI for Writing Assignments

Dr. Rob Lively wants to protect student agency. In a presentation at Truckee Meadows Community College's AI summit, Lively argues passionately that AI "takes away human agency from the students"—that when students use AI tools for writing, they lose the opportunity to make their own rhetorical choices, develop their own professional identities, and struggle productively through the writing process. His concern for student autonomy is genuine. His proposed solution undermines it entirely.

The Argument

Lively's presentation covers substantial ground—environmental costs, copyright issues, hallucinations in legal briefs—but his core argument about writing pedagogy centers on agency and choice. He draws on composition scholarship to argue that writing is how students learn to think: prewriting creates "an architecture in their mind," revision requires "metacognitive re-evaluation," and disciplinary writing professionalizes students into their fields. When AI performs these tasks, students "learn neither of these things." They don't engage with the material, and they don't develop the discourse that makes them anthropologists or biologists or lawyers.

Developing some thoughtful reflective writers who make choices is the purpose of creating writing choices in the first place, right? You know, you want people to make their own, you know, rhetorical choices to influence or to inform.

— Rob Lively, "Resisting AI for Writing Assignments"

Lively celebrates students who are "taking a stand" against AI themselves, writing articles in student papers and arguing that "they actually want to develop skills." He praises the growing "analog teaching" movement and presents student resistance to AI as evidence that learners themselves recognize the value of unassisted struggle.

His conclusion follows logically from these premises: "Consider not allowing AI for any writing task at your college or university."

The Problem

Here is the contradiction at the center of Lively's argument: he wants students to develop agency by removing their capacity to choose.

If the purpose of writing instruction is to develop "thoughtful reflective writers who make choices," then surely one of those choices is which tools to use. Lively argues that students should learn to make "informed choices in their writing"—about audience, organization, tone, evidence, structure. But his solution removes one category of choice entirely. Students cannot demonstrate thoughtful judgment about AI use if the decision has been made for them by institutional policy.

This is particularly striking because Lively celebrates student resistance to AI as meaningful. He notes that "students are also arguing against AI use on campus" and that they're "taking a stand that they actually don't want to use AI." But what makes that stand meaningful is precisely that it's a choice. A student who chooses not to use AI after weighing the tradeoffs has exercised genuine agency. A student who doesn't use AI because it's prohibited has simply complied with a rule.

Give them the agency, give them the ability to actually, you know, write and consider and think.

— Rob Lively, "Resisting AI for Writing Assignments"

The irony of this statement, coming immediately before a recommendation to ban AI across all writing tasks, seems to escape Lively entirely. You cannot "give" someone agency by removing their options. Agency requires the possibility of choosing otherwise.

A Deeper Issue

Lively's position reveals a tension in how we think about student development. He trusts students to make sophisticated rhetorical choices—how to address an audience, how to structure an argument, how to develop a professional voice—while simultaneously arguing they cannot be trusted to make choices about their tools. His framework grants adults the capacity for judgment in one domain while denying it to them in another.

This paternalism is not unique to Lively; it pervades much of the discourse around AI in education. The assumption is that students, if given the option, will simply outsource their thinking—that they lack the judgment to use tools appropriately and must be protected from themselves. But this assumption is itself a failure to treat students as the developing professionals Lively says we should be cultivating.

In The Case Against Disclosure, I argue for what I call "ethical opacity"—the principle that creators have a right to control their methods and be judged on their final products. This doesn't mean anything goes. Writers remain responsible for the accuracy of their claims, the quality of their arguments, the honesty of their conclusions. But how they arrived at those conclusions—what tools they used, what conversations they had, what processes they followed—is their business.

Lively's position goes further than even traditional plagiarism policies, which don't dictate how students must write, only that they not submit others' finished work as their own. A ban on AI use for "any writing task" regulates process itself, requiring not just an original product but a particular method of production. This is surveillance of the creative act, not evaluation of its results.

Why This Matters

The question of how to handle AI in writing instruction is genuinely difficult. Lively raises real concerns: students who never struggle with prewriting may not develop organizational thinking; students who never revise their own prose may not learn to hear their own voice; students who never grapple with disciplinary conventions may not internalize professional discourse. These are legitimate pedagogical worries.

But the solution to these concerns cannot be to eliminate choice. If we believe writing develops thinking—and Lively clearly does—then we should design assignments that require thinking, regardless of what tools students use. If we believe students need to struggle productively, we should create contexts where struggle is meaningful, not simply ban the tools that might reduce it. If we believe students should develop professional judgment, we should give them opportunities to exercise judgment, including judgment about when AI helps and when it hinders.

A student who learns to use AI as a thinking partner—to push back against its suggestions, to recognize when its outputs flatten their ideas, to choose deliberately when to accept its help and when to refuse it—has learned something more valuable than a student who simply never had the option. The first student has developed a skill that transfers to a world where AI exists. The second has learned only to comply.

Lively is right that we should "give students the agency to actually write and consider and think." The way to do that is to trust them with choices, not to make the choices for them.