The Argument
This bibliography supports a sustained argument developing across the instructor blog: openness to risk is itself the safest approach to change. Vulnerability is not a sign that something went wrong. It is evidence that the situation has expanded — that there are more possibilities, more connections, more potential. The insecurity and the possibility are the same thing, experienced from different angles.
The argument emerged from a specific exchange. In “New Vulnerabilities” (February 9, 2026), Plate compared AI’s introduction into education to the beginning of a new relationship or the birth of a child. Neither event can be made safe in advance. The vulnerabilities they introduce — new anxieties, new failure modes, new forms of instability — are not problems to solve before proceeding. They are the proceeding. The person who refuses to start a relationship until they’re certain they won’t get hurt will never start a relationship. The institution that refuses to let students use AI until they’ve figured out how to prevent misuse will never figure out how to prevent misuse. The understanding they’re waiting for is produced by the engagement they’re forbidding.
Gabriel Bell responded with “New Dangers” (February 9, 2026), the strongest challenge the argument has received. Bell accepted the logic for personal relationships but argued it fails at scale: “A relationship affects a few people; a ubiquitous technology disrupts the status quo for millions.” He invoked the Precautionary Principle — when we introduce a new drug to the market, we don’t say “we’ll learn about the side effects by giving it to everyone and fixing what breaks.” Bell drew a sharp distinction between private risk (where spontaneity is forgivable) and systemic consequence (where thoroughness is mandatory). His concluding line — “We can afford to be impulsive with our hearts; we cannot afford to be impulsive with our future” — frames the counterargument precisely.
This exchange raises the central question the argument must answer: Does the logic of antifragility scale? Can a philosophy of personal bravery function as a philosophy of collective safety? The five sources below provide the theoretical architecture for answering yes — and for explaining why the instinct to answer no is itself part of the problem.
That instinct has a history. More than two centuries of dystopian fiction — from Mary Shelley through H.G. Wells, Aldous Huxley, George Orwell, and into the present — have given us a default narrative lens for technology that privileges fear, caution, and worst-case thinking. This genre tradition functions as a framing bias: it makes risks of action vivid while making risks of inaction invisible. The sources below address this bias directly, and Part 4 explores it at length.
Core Sources
Antifragile: Things That Gain from Disorder
Core Argument
Taleb distinguishes three categories: the fragile (things harmed by volatility), the robust (things unaffected by volatility), and the antifragile (things that gain from stressors, shocks, and disorder). The typical goal in institutional planning is robustness — building systems that can withstand disruption. Taleb argues this is the wrong target. The right target is antifragility: designing systems that improve when they are stressed.
The human immune system is antifragile — it requires exposure to pathogens to develop strength. Bones grow denser under load. Muscles build through micro-tears. In each case, the attempt to shield the system from stressors doesn’t protect it. It fragilizes it. Children raised in sterile environments develop more allergies. Economies shielded from small recessions produce catastrophic ones. Institutions that prevent all failure produce people incapable of recovering from any failure.
Relevance to the Argument
Taleb provides the theoretical spine for “New Vulnerabilities.” The post’s claim that “the understanding they’re waiting for is produced by the engagement they’re forbidding” is an antifragility argument: the knowledge institutions need can only be generated through the exposure they’re preventing. Bell’s counterargument — that AI risk is systemic, not personal — actually strengthens the Taleb connection rather than undermining it. Taleb’s strongest examples are systemic: financial markets, economies, public health policy. His argument is precisely that large-scale systems are more damaged by the suppression of volatility, not less.
Connection to the Dystopian Thread
Dystopian thinking is itself a form of fragilizing. When a culture trains itself to imagine every possible worst case before engaging with a new technology, it is performing anticipatory shielding — exactly the behavior Taleb identifies as producing fragility rather than preventing it. A culture saturated in dystopian narratives develops what Taleb calls “epistemic arrogance” about the future — the belief that we can predict what will go wrong and prevent it in advance — which is precisely the belief that antifragile systems punish.
Laws of Fear: Beyond the Precautionary Principle
Core Argument
Sunstein’s book is the most rigorous dismantling of the Precautionary Principle available. The Precautionary Principle holds that if an action raises threats of harm, precautionary measures should be taken even if some cause-and-effect relationships are not fully established scientifically. Sunstein demonstrates that this principle is literally incoherent: because all actions carry risks, including the action of not acting, the principle can always be invoked on both sides of any decision.
Sunstein identifies two cognitive mechanisms that explain why the Precautionary Principle feels coherent even though it isn’t. The first is probability neglect: when a potential outcome is sufficiently vivid or terrifying, people respond to the outcome itself rather than to its probability. The second is the availability cascade: when a risk becomes salient in public discourse — through media coverage, fictional narratives, or social amplification — it becomes cognitively “available,” causing people to systematically overestimate its likelihood.
Relevance to the Argument
Sunstein directly addresses Bell’s strongest claim. Bell argues that “the scale of potential harm outweighs the benefit of fast growth” and that AI should be treated like a new drug requiring rigorous pre-deployment testing. Sunstein would point out that Bell is engaging in probability neglect: the harms of engagement are vivid (security vulnerabilities, skill atrophy, job displacement), while the harms of non-engagement are invisible (falling behind, failing to develop new competencies, institutional rigidity). What is the “side effect” of not engaging with AI? What skills atrophy through avoidance? These are real harms, but they are invisible harms — and the Precautionary Principle systematically ignores invisible harms.
Connection to the Dystopian Thread
Dystopian fiction is the most powerful availability cascade in modern culture. Two centuries of novels and films depicting technology-as-catastrophe have made the risks of technological engagement extraordinarily vivid. Everyone can picture the robot uprising, the surveillance state, the loss of human agency. But no one has written the dystopia of non-engagement — the story of a society that refused to adapt, that shielded itself from change until it became brittle and irrelevant. This asymmetry is not accidental. Dystopia is inherently a narrative of action-gone-wrong, never a narrative of inaction-gone-wrong.
Searching for Safety
Core Argument
Wildavsky’s central distinction is between two strategies for achieving safety: anticipation and resilience. The strategy of anticipation attempts to predict dangers in advance and prevent them before they occur. The strategy of resilience accepts that dangers cannot be fully predicted and focuses instead on building the capacity to respond, adapt, and recover when things go wrong.
Wildavsky argues — with extensive historical and empirical evidence — that resilience consistently outperforms anticipation as a safety strategy. The reason is epistemological: complex systems generate novel failure modes that are, by definition, unforeseeable. A society that devotes all its energy to preventing every possible problem has no energy left for solving the actual problems that emerge. Trial and error is not a second-best strategy to be used when prediction fails. It is the primary strategy by which complex systems learn. The errors are not failures of the process; they are the process.
Relevance to the Argument
Wildavsky’s framework maps precisely onto the Plate-Bell exchange. Plate’s argument in “New Vulnerabilities” — “move forward, pay attention, fix what breaks” — is a resilience strategy. Bell’s counterargument — demanding “rigorous, thorough examination before ubiquity” — is an anticipation strategy. The students in the blog network are themselves evidence for Wildavsky. Jonas Rodrigues didn’t learn about the verification gap in AI-generated code by avoiding vibe coding; he learned it by engaging. Zay Amaro didn’t discover the limits of sports analytics by staying away from the data; he discovered them by immersing himself in it. The errors are the curriculum.
Connection to the Dystopian Thread
Dystopian fiction is anticipation-thinking turned into art. The genre’s fundamental move is to imagine a worst-case future in vivid detail. Wildavsky would argue that this narrative impulse, however compelling aesthetically, is epistemologically bankrupt. You cannot anticipate the actual dangers of a technology by imagining fictional ones. Orwell did not predict social media. Huxley did not predict AI. The dangers that actually materialized were the ones no one wrote novels about. Dystopian fiction gives us the feeling of foresight without its substance.
Silence: Lectures and Writings
Core Argument
Cage’s Silence is not a conventional argument but a collection of lectures, essays, and experimental texts that embody a single radical principle: the surrender of intentional control as a creative and philosophical method. Cage composed music using chance operations — rolling dice, consulting the I Ching, using star charts. His most famous work, 4′33″, consists of a performer sitting at a piano for four minutes and thirty-three seconds without playing a note. The “music” is whatever sounds occur in the room.
The philosophical claim beneath these experiments is that our need to control outcomes is not wisdom but anxiety. We plan, compose, and design because we are afraid of what will happen if we don’t. Cage argues that what happens without our intervention is not chaos — it is a different kind of order, one that is richer and more surprising than anything we could have designed. “I have nothing to say and I am saying it, and that is poetry as I need it.”
Relevance to the Argument
Cage provides the aesthetic and experiential dimension that the other sources lack. Taleb, Sunstein, and Wildavsky argue that openness to disorder is safer. Cage argues that openness to disorder is richer — that the uncontrolled encounter with the unexpected is where meaning lives. Gabriel Bell’s argument that error is “the soul of the game” is a Cagean claim. Olivia Andresen’s “Heroism of ‘Enough’” — her argument that surrender can be more heroic than persistence — echoes Cage’s insistence that letting go of control is not weakness but a different kind of strength.
Connection to the Dystopian Thread
Dystopian fiction is the opposite of Cage’s method. Where Cage surrenders control and trusts what emerges, dystopia is the genre of control anxiety projected forward. The person who cannot sit through 4′33″ without discomfort — who needs the composer to impose order on the silence — is the same person who cannot encounter a new technology without immediately constructing a worst-case narrative about it. Both are manifestations of the same refusal: the refusal to let the unknown remain unknown.
Against Interpretation and Other Essays
Core Argument
Sontag’s title essay argues that the dominant mode of engaging with art — interpretation, the search for hidden meaning beneath the surface — is a form of avoidance. We interpret because we are uncomfortable with direct encounter. Rather than allowing a work of art to affect us on its own terms, we translate it into something else: a “meaning,” a “message,” a conceptual abstraction that we can hold at arm’s length. “Interpretation,” Sontag writes, “is the revenge of the intellect upon art.”
She calls for “an erotics of art” rather than a “hermeneutics of art” — a mode of engagement that prioritizes sensation, presence, and direct contact over explanation and abstraction. The problem with interpretation is not that it is wrong but that it interposes a layer of conceptual apparatus between the viewer and the work, and that layer prevents genuine encounter.
Relevance to the Argument
Sontag provides the epistemological core of the dystopian framing critique. When people encounter AI through the lens of dystopian fiction, they are doing exactly what Sontag describes: interpreting rather than encountering. The dystopian framework is a pre-loaded hermeneutic — a conceptual layer that tells you what AI means before you’ve experienced what AI does.
This is visible in the blog network itself. Jacob Brunts’s “The Friction of Fluency” argues that most people who fear AI are imagining a novice workflow — “type prompt, get essay” — rather than the actual experience, which involves friction, auditing, and correction. Sontag would say: the interpretation is doing what interpretation always does — replacing the actual encounter with an abstraction, and then responding to the abstraction as if it were the thing.
Connection to the Dystopian Thread
Sontag’s essay, more than any of the other sources, explains why dystopian framing is so difficult to dislodge. Interpretation is self-reinforcing: once you have a framework for understanding something, every encounter with the thing confirms the framework. If you believe AI will cause skill atrophy, then every example of someone using AI poorly confirms your belief, while every example of someone using AI well is dismissed as an exception. The only escape is what Sontag prescribes: return to the surface. Drop the framework. Ask not “What does this technology mean?” but “What does this technology do when I use it?”
Student Dialogue
Bell argues that error is “the soul of the game” — that automating officiating mistakes out of sports doesn’t produce fairness but sterility. Using Huizinga’s concept of the “magic circle,” he claims that games are human rituals designed for human participants, and that importing automated precision punctures the circle. His example of Maradona’s “Hand of God” — a “preventable error” that became one of sports’ most significant moments — illustrates that “error creates friction, and friction creates heat, light, and story.”
Bell’s claim that a game without mistakes has no soul is the aesthetic version of Taleb’s claim that a system without stressors has no strength.
Brunts dismantles the claim that AI makes intellectual work “easy.” He argues that fluency is the novice experience — that professional AI use is characterized by friction: spotting hallucinations, auditing tone, re-prompting with precision. The student who navigates a flood of synthetic information and curates coherent truth from chaos is developing harder skills than the student staring at a blank page. “The danger isn’t that AI makes things too easy. The danger is that we won’t train ourselves to do the hard work of managing it.”
Brunts is making the resilience argument at the level of individual practice: you learn what AI actually is by engaging with it, not by imagining what it must be.
Brunts argues that framing AI use as optional obscures the reality that students who don’t learn to integrate AI are being “actively left behind.” He distinguishes between “constructive struggle” (grappling with complex problems) and “empty labor” (formatting, summarizing, rote syntax), arguing that AI frees cognitive resources for the former. Students who opt out under the banner of authenticity are “disarming themselves before entering a battlefield.”
Brunts identifies the invisible harm that Bell’s Precautionary Principle ignores: the obsolescence of those who refuse to engage.
Amaro examines AI-driven injury prevention in sports — algorithms that flag fatigue levels and bench players before they’re hurt. He acknowledges the value but argues that the drama of sports depends on risk: “if the AI acts as a permanent ‘guardrail’ that prevents them from ever reaching the breaking point, do we lose the ‘clutch’ moments where a player pushes through pain to achieve greatness?” When the margin for error disappears, so does meaning.
Amaro’s insight that a “solved” version of sports is no longer worth watching parallels the argument that an institution that eliminates all risk before engaging has eliminated the engagement itself.
Andresen responds to Jacob Brunts’s “White Flag” through the lens of Disney narratives. She argues that the Disney myth — “victory is the only option” — obscures a deeper heroism: knowing when to stop. Using Cars 3 (Lightning McQueen surrendering his racing career to become a mentor), she argues that “surrendering a dream isn’t the end of the world — it’s the beginning of a different, perhaps more honest, life.” Some battles need to be recognized as over before growth can begin.
Andresen complicates the argument by insisting that not all risk-taking is growth. The antifragility argument does not claim all risk is good — it claims the capacity to engage with risk is essential.
Bishop examines AI’s impact on OCD treatment, arguing that 24/7 AI availability creates a “digital security blanket” that short-circuits Exposure and Response Prevention. In mental health, friction is not a bug but a “vital therapeutic feature” — the uncomfortable gap where the brain learns to tolerate uncertainty.
Bishop shows that the antifragility argument extends to psychological health: eliminating all uncertainty from the mind produces minds that cannot tolerate any uncertainty.
Debro argues that as AI makes technical perfection abundant, human imperfection becomes a luxury good. He coins “Proof of Human Work” — the slight tremor in a singer’s voice, the audible slide of fingers on guitar strings — as the new markers of value. Using the Brooklyn band Geese as a case study, he argues we are witnessing a “Great Re-valuation” where the machine sets the floor for what is “good” but raises the ceiling for what is “profound.”
When the artificial becomes abundant, the organic becomes antifragile — gaining value precisely because of its imperfections.
Nodari extends Gabriel Bell’s argument about creative friction to personal identity. She argues that social media’s “highlight reel” culture has created a fear of imperfection that predates AI. Drawing on Bayles and Orland’s Art & Fear (“Error is human. Art is error”) and Dweck’s growth mindset research, she argues that mistakes are not the opposite of success but its mechanism. “Becoming is better than being.”
The attempt to present a perfect self is a form of anticipatory shielding that prevents the growth that comes from visible imperfection.
Teismann argues that Tesla’s “Full Self-Driving” creates a lethal gap between marketed capability and actual capability. The system handles the easy 99% of driving and then hands control back to a bored, distracted human for the critical 1%. The “safety” branding produces the opposite of safety.
Teismann shows what happens when the appearance of safety replaces actual resilience — a negative example of the anticipation strategy destroying the adaptive capacity it claims to provide.
Rodrigues maps the divide between “Industrialists” (who treat AI code as disposable, focusing on orchestration) and “Conservationists” (who maintain deep understanding of what the code actually does). He argues the critical future skill is verification: “Writing code is free, but verifying code is expensive.”
Engagement is not enough — engagement must produce understanding. Opacity is fragile; systems you cannot inspect will fail in ways you cannot predict.
Brunts responds to “I’m Done,” a video by web development educator Jeffrey Way, who lost 40% of his staff to AI disruption but now says he’s “never had more fun programming.” Brunts frames this as reaching “the acceptance stage of grief regarding generative AI.” He identifies a “Great Bifurcation”: those who accept the reality and learn to wield AI as an exoskeleton, and those who refuse on moral, nostalgic, or fearful grounds.
A real-world case where the “move forward, fix what breaks” strategy produced better outcomes than the “wait until it’s safe” strategy.
Amaro examines the NFL’s “Digital Athlete” program — virtual twins used to predict injuries before they occur. He acknowledges the medical value but asks whether eliminating randomness eliminates the point: “if we ‘solve’ for injuries, are we turning football into a laboratory experiment?” The beauty of sports lies in the unevenness of the playing field.
Randomness creates meaning — a game where nothing unexpected can happen is a game no one wants to watch.
The Dystopian Framing Problem
There is a reason Gabriel Bell’s “New Dangers” feels so intuitive — so much like common sense — even though the evidence supports the opposite conclusion. The reason is narrative. We have been telling ourselves stories about technology-as-catastrophe for more than two centuries, and those stories have become the default framework through which we encounter every new technological development.
The genre has a clear genealogy. Mary Shelley’s Frankenstein (1818) established the template: a creator who overreaches, a creation that turns monstrous, a catastrophe that follows from the failure to foresee consequences. H.G. Wells extended it to social scale with The Time Machine (1895) and The War of the Worlds (1898). The twentieth century intensified the tradition: Huxley’s Brave New World (1932) imagined technological comfort as a form of enslavement; Orwell’s Nineteen Eighty-Four (1949) imagined technological surveillance as a form of totalitarianism. By the late twentieth and early twenty-first centuries, dystopia had become the dominant mode of imagining the future — from The Terminator to Black Mirror, from The Hunger Games to Ex Machina.
This is not a neutral literary tradition. It is a framing bias — a systematic distortion in how we process information about technology. When someone encounters AI for the first time and their instinctive reaction is fear, that reaction is not raw and unmediated. It has been shaped by decades of narrative conditioning. The fear feels like clear-eyed realism. It is actually a genre convention.
Sunstein’s concept of the availability cascade explains the mechanism precisely. A risk becomes salient through social amplification — through stories, media coverage, cultural repetition — and once it is salient, it crowds out other considerations. Dystopian fiction is the most powerful availability cascade in modern culture. The risks of technological engagement (surveillance, job displacement, loss of autonomy, weaponization) have been dramatized so vividly and so repeatedly that they are always cognitively available.
Meanwhile, the risks of non-engagement have no genre. No one writes novels about the society that refused to adopt a useful technology and slowly became irrelevant. No one makes films about the institution that banned AI and watched its students fall behind. No one dramatizes the silent, invisible cost of skills never developed, capacities never built, adaptations never made. These costs are real — Wildavsky’s entire career was devoted to documenting them — but they are narratively invisible. They don’t produce dramatic confrontations or satisfying resolutions. They produce slow, diffuse, undramatic decline.
Taleb would frame this as a fragilizing process. A culture trained on dystopian narratives is a culture that has rehearsed anticipation until it has become reflexive. But anticipation, as Wildavsky showed, is the inferior safety strategy. Orwell’s Nineteen Eighty-Four predicted that technology would be used for totalitarian surveillance by centralized states. The actual surveillance problem that emerged was decentralized, commercial, and consented-to — people voluntarily sharing their data with corporations in exchange for convenience. The dystopia we got was not the one we imagined. It never is.
Cage offers perhaps the most precise diagnosis. Dystopian fiction is the anxiety of control projected forward — the desperate need to know what will happen next, to predict and prevent, to refuse to let the future arrive uninterpreted. It is the refusal to sit through 4′33″ — the insistence that someone must be composing, someone must be in charge, someone must tell us what the silence means.
And Sontag explains why the framing is so sticky. Dystopian narrative is not a hypothesis about the future that can be tested and discarded. It is an interpretive layer that organizes perception before perception occurs. Once you see AI through the lens of dystopia, every piece of evidence confirms the lens. The only escape is what Sontag prescribes: return to the surface. Drop the interpretive framework. Encounter the thing directly. Ask not “What does this technology mean?” but “What does this technology do when I use it?”
The students in this blog network are already doing this. They are encountering AI through use rather than through narrative — and what they are finding is messier, more interesting, and less catastrophic than any dystopia predicted.
The Dystopian Tradition — A Curated Library
The Dystopian Canon
Frankenstein
The template for every technology-as-catastrophe narrative that followed. The creator overreaches; the creation turns monstrous. Shelley’s insight — that the real monster is the refusal to take responsibility for what you’ve made — is consistently ignored in favor of the simpler lesson: don’t build it.
The Time Machine
Wells extends Shelley’s individual tragedy to civilizational scale. The Eloi — humanity evolved into fragile, beautiful innocence — are the product of a world that eliminated all stressors. Taleb would recognize them instantly: a species fragilized by comfort.
The War of the Worlds
Technology as invasion — superior force arriving from outside and overwhelming human capacity to respond. The original availability cascade for “technology will destroy us.”
Brave New World
Huxley’s dystopia is comfort, not oppression. Citizens are engineered for contentment and sedated with soma. The danger isn’t that technology will enslave us against our will — it’s that we’ll volunteer for the enslavement because it feels good. Connects directly to Dominic Debro’s “dopamine ceiling” work.
Nineteen Eighty-Four
The most influential dystopia ever written — and a case study in how dystopian prediction fails. Orwell imagined centralized, totalitarian surveillance. The actual surveillance state that emerged was decentralized, commercial, and consented-to. The dystopia we got was not the one he imagined.
Fahrenheit 451
Often read as a warning against censorship, but Bradbury himself said it was about television replacing books. The real threat wasn’t that someone would burn the books — it was that people would stop wanting to read them. An availability cascade about the wrong risk.
Do Androids Dream of Electric Sheep?
Dick’s question — how do you distinguish the artificial from the authentic? — is the question the blog network is asking about AI writing. Jacob Brunts’s “Friction of Fluency” is a practical answer: the distinction isn’t in the output but in the process.
The Dispossessed
Le Guin’s “ambiguous utopia” is the rare work that takes seriously the costs of both action and inaction. Anarres has no government — and also no freedom. The most Wildavskian novel in the canon: every safety strategy creates its own risks.
The Handmaid’s Tale
Atwood famously insisted this was “speculative fiction, not science fiction” — every element had historical precedent. A reminder that dystopian imagination draws from the past, not the future. The dangers it anticipates are always the ones we’ve already survived.
Never Let Me Go
Ishiguro’s characters accept their fate with devastating passivity. The horror isn’t the system — it’s the failure to resist. A Cagean text in reverse: the characters have surrendered control, but to an oppressive system rather than to possibility.
The Hunger Games
The dystopian narrative as entertainment spectacle — a critique the novel enacts on its own audience. Collins makes the reader complicit: you are watching children fight to the death for your entertainment, which is exactly what the Capitol does.
The Circle
The closest the canon comes to a dystopia about our actual moment: a tech company that achieves total transparency. Eggers’s insight — that the demand for openness can itself become totalitarian — makes this the most Sunsteinian dystopia: the precautionary demand for information becomes the thing it was supposed to prevent.
Black Mirror
The most potent availability cascade generator in contemporary culture. Each episode provides a vivid, self-contained worst-case scenario for a specific technology. The cumulative effect: a generation trained to see every new technology through the lens of “what could go wrong.”
Ex Machina
Garland’s AI passes the Turing test by exploiting human loneliness. The film’s deepest insight: we are vulnerable to AI not because it is intelligent but because we are lonely. The danger is in us, not in the machine.
Critical Resources on Dystopia
Dystopia: A Natural History
The most comprehensive history of dystopian thought from antiquity to the present. Claeys traces the genre’s evolution from religious apocalypse through political satire to technological anxiety — the genealogy this bibliography draws on.
The Dystopian Impulse in Modern Literature
Booker argues that dystopia is not merely a genre but a mode of reading — a way of encountering the present through the lens of a degraded future. This is precisely Sontag’s point about interpretation: the dystopian mode shapes perception before perception occurs.
Archaeologies of the Future
Jameson argues that utopian and dystopian fiction are not predictions but symptoms — expressions of a society’s inability to imagine genuine alternatives. The dystopian imagination, for Jameson, is itself a failure of imagination.
Scraps of the Untainted Sky
Moylan traces the evolution from “classical” dystopia (which warns) to “critical” dystopia (which explores). The shift matters: a genre that merely warns reinforces anticipation-thinking; a genre that explores can model resilience.
The Missing Genre — The Costs of Inaction
The bibliography argues that the most dangerous dystopia is the one no one writes: the story of a society that refused to adapt. These works — from different disciplines — document what happens when institutions choose anticipation over resilience, when the fear of what might go wrong prevents engagement with what is actually happening.
Collapse: How Societies Choose to Fail or Succeed
Diamond documents civilizations that collapsed not from external attack but from internal rigidity — the refusal to adapt to changing conditions. Easter Island, the Maya, Norse Greenland: each chose familiar failure over unfamiliar adaptation. The invisible dystopia of inaction, documented in the archaeological record.
The Innovator’s Dilemma
Christensen shows that well-managed companies fail because of their caution, not despite it. By listening to existing customers and optimizing existing products, they miss disruptive innovations until it’s too late. The Precautionary Principle as corporate strategy — and its catastrophic results.
The March of Folly: From Troy to Vietnam
Tuchman documents governments pursuing policies contrary to their own interests across three millennia — not from ignorance but from the inability to adapt to new information. The recurring pattern: leaders who knew what was happening and chose not to change. The invisible cost of institutional rigidity.
Why This Page Exists
This annotated bibliography demonstrates what AI enables — not just writing more, but engaging with intellectual traditions of real depth. The nexus of sources here — five academic works, a two-century literary tradition, a dozen student blog posts, and the critical connections between them — would take months to navigate manually. AI made it possible to build this architecture in days, not because it replaced the thinking, but because it handled the scaffolding while the thinking happened.
The page itself is an argument: that the tools students are learning to use in this course are capable of producing work that is genuinely, substantively intellectual. The question is not whether AI can help — it is whether we will develop the skills to direct that help toward work worth doing.