Overview

Dr. Plate's instructor blog models argumentative engagement with AI critics while providing intellectual foundations for student work. Core thesis: AI coding adaptations—plan mode, specification files, vibe coding—can and should transfer to writing contexts. Engages directly with Ted Chiang's educational skepticism while demonstrating that thinking relocates rather than disappears when AI handles execution. Recent posts have synthesized the network's two largest debates—AI in therapy and AI in sports—arguing they represent the same philosophical problem: when systems become sophisticated enough to guide decisions, what remains for individual judgment? Uses the concept of "distributed moral agency" to argue that professional judgment operates within, not above, collective structures. Many student posts directly respond to or extend his arguments.

Key Themes

Core Arguments

The Same Problem Twice

The blog network's two biggest debates—AI in therapy and AI in sports—are the same philosophical problem. The "Simulation Trap" (following the optimal play because deviating is professionally risky) mirrors the therapy protocol dilemma (following the assessment tool because overriding it exposes you to liability). Both involve the "asymmetric penalty of agency": follow the system and fail, the system absorbs blame; deviate and fail, you bear personal liability. The coach's "gut" and the therapist's "moral agency" are both internalized systems—products of training and tradition, not mystical insight transcending structure.

Moral Agency Within Systems

Uses the umpire-therapist thought experiment to test what "moral agency" means. As stakes increase, do we want more individual judgment or less? Argues that in both cases, individual judgment operates within and is accountable to collective procedural structures. "The courage to deviate from protocol isn't the courage to transcend the system—it's the willingness to be judged by the system for your deviation." Moral agency is the capacity to act within structures in ways those structures can recognize as responsible.

Distributed Moral Agency in Therapy

Therapist "moral agency" is distributed across a system—DSM-5, ethics codes, evidence-based protocols, legal requirements, institutional policies—not located purely inside a person. If moral agency emerges from training and system participation, then AI can participate in the same structures. The Adam Raine tragedy happened because a companion chatbot lacked therapeutic protocols—"a car without brakes asked to do the work of an ambulance."

Posts

Testing Moral Agency: The Umpire and the Therapist

Response to Jinx Hixson's "Moral Agency and the History of Care." Thought experiment comparing an umpire and a therapist who "go off-script." As stakes get higher, do we want more individual judgment or less? Reveals that moral agency in professional contexts is not a mystical internal capacity but the ability to exercise judgment within a system in ways the system can recognize and evaluate. "When someone goes off-script, the script is still the court of appeal." Distinguishes perceptual judgment (umpire), interpretive/predictive judgment (therapist), and procedural judgment (both). Reversibility of consequences changes what we demand from decision-making.

The Same Problem Twice: Sports AI and the Therapy Debate

Response to Zay Amaro's "The Simulation Trap." Synthesizes the network's two major debates—Jinx-Plate on AI in therapy and Tom Bishop-Zay Amaro on AI in sports—as the same philosophical problem. Both involve the "asymmetric penalty of agency": follow the system and fail, the system absorbs blame; deviate and fail, you bear personal liability. The coach's "gut feeling" and the therapist's "moral agency" are both internalized professional traditions, not transcendent insight. "The simulation trap and the therapy debate are the same problem. We're recognizing it now because systems have become explicit enough—in both domains—that we can finally see them clearly."

The Moral Agency That Lives Outside the Therapist

Response to Jinx Hixson's "AI and the End of Human-Led Therapy." Argues that therapist "moral agency" is distributed across a system—DSM-5, ethics codes, evidence-based protocols, legal requirements, institutional policies—not located purely inside a person. When a therapist assesses suicide risk, they follow structured tools like the Columbia Scale, consult protocols, consider malpractice exposure. If moral agency emerges from training and system participation, then AI can participate in the same structures. The tragedy of Adam Raine happened because a companion chatbot lacked therapeutic protocols—"a car without brakes asked to do the work of an ambulance."

The Romanticized Ceiling

Response to Zay Amaro's "The Stats Illusion." Challenges the claim that "faith provides the ceiling" beyond what data shows. "Clutch" performance is testable—and decades of research show it's variance, not a stable trait. Players who perform well in high-leverage situations one year don't reliably repeat that performance the next. Romanticizing "intangibles" is dangerous: for decades, scouts evaluated players on gut feelings, passing on those who didn't "look like ballplayers." Analytics has been more humanizing than the "human element"—judging players by what they do, not how they look.

What Concert Tickets Teach Us About Value

Response to Gabriel Bell's "The Price of Everything." Challenges the use-value vs. exchange-value dichotomy using concert tickets as test case. When we say tickets are "overpriced," we claim to know a "real" value the market distorts. But price is "aggregated information"—it counts everyone's desire, not just yours. Tyler Cowen's "Wealth Plus": economic growth is how we expand access to meaning. High ticket prices don't ignore sentimental value; they reflect it. Being mad at $800 tickets is being mad that your desires aren't unique.

The Planning Is the Thinking

Uses Matt Pocock's "plan mode" conversion story to challenge Ted Chiang's educational skepticism. Argues that AI workflow design determines whether thinking happens—plan mode forces articulation, evaluation, and judgment before execution. "The planning is the thinking. The execution is just typing."

The Fluency Illusion

Explores how AI's smooth output creates false confidence—we mistake easy reading for genuine understanding. Introduces the fluency illusion, a concept that students Jinx and Jacob have extended in different directions.

What We Outsource To

Examines what it means to "outsource" thinking and why the choice of what we delegate matters more than whether we delegate.

The Ocean in the Chatbot

Deconstructs viral environmental claims about AI, showing how "obviousness" in digital discourse often masks lazy thinking. Source material for Eliana Nodari's environmental analysis.

The Book and the Chatbot

Contrasts the demands that books and chatbots make on readers, exploring how format shapes cognitive engagement.

Two Theories of Safety

Explores competing frameworks for AI safety—protection through restriction vs. protection through capability development.

Thinking at a Higher Level

Argues that AI shifts cognitive work upward rather than eliminating it—from sentence-level composition to higher-order concerns like architecture, evaluation, and judgment.

What a Ban Can't Teach

Critiques AI bans in education, arguing they prevent students from developing the judgment skills they need for professional contexts where AI use is standard.

The Agency Paradox

Examines the paradox that protecting student agency through bans may actually diminish it by removing opportunities to develop judgment. Source for Emani's "Agency Paradox" concept.