Can the Humanities Survive Artificial Intelligence?
D. Graham Burnett's classroom experiment reveals why humanities education needs AI engagement, not avoidance
This morning I discovered The New Yorker’s weekend essay “Will the Humanities Survive Artificial Intelligence?”
I cannot do the piece justice in this short post - it needs to be read in its entirety - but I want to pull out three quotes from Princeton Professor D. Graham Burnett that resonated with me. Together, they reveal why so many people are missing the forest for the trees in the conversation about AI and education.
The Academy in Denial
First, Burnett correctly highlights the unresolved tension between AI's transformative power and academia's reluctance to acknowledge it:
On the contrary, staggering transformations are in full swing. And yet, on campus, we’re in a bizarre interlude: everyone seems intent on pretending that the most significant revolution in the world of thought in the past century isn’t happening. The approach appears to be: “We’ll just tell the kids they can’t use these tools and carry on as before.” This is, simply, madness. And it won’t hold for long. It’s time to talk about what all this means for university life, and for the humanities in particular.
This institutional denial reflects what I frequently encounter when I talk to people about the impact AI is likely to have - is already having - on society in general, let alone on education and schools. We are fixated on a very, very small piece of the problem - academic dishonesty - because, for educators, it is staring us in the face every day. Grappling with the larger implications of AI, rather than simply getting back to "business as usual," seems almost too profound a topic to broach. That needs to change.
Critics often deflect conversations about the larger impacts of AI by fixating on AI's "hallucinations" and occasional factual errors. The irony is not lost on me. While we instinctively forgive human errors, biases, and limitations, we hold these nascent AI models to an unrealistic standard of perfection, as if our university libraries aren't filled with books, studies, and reports containing inaccuracies and mistakes. When AI actually stops making mistakes, that will usher in an entirely new conversation.
What's particularly revealing about institutional resistance is how the criteria for evaluation keep shifting. When AI struggles with a capability, it's cited as proof of its fundamental limitations; when it masters that same capability, suddenly that skill "wasn't really that important or significant or evidence of actual reasoning anyway." This pattern suggests that highlighting what AI cannot do serves more as institutional self-preservation than substantive criticism.
The truth is that the most advanced AI models already have staggering implications for knowledge acquisition and production, even if they never improve beyond where they are today. Here we have an eminent professor from inside the academy pleading with his colleagues to open up a more serious conversation than simply whether Suzy might have used ChatGPT to create her outline. It’s a good time to listen.
The Alien Familiar
In a second key observation from the article, Burnett describes a profound classroom experiment that revealed the strange new relationship forming between students and AI:
The assignment was simple: have a conversation with a chatbot about the history of attention, edit the text down to four pages, and turn it in.
Reading the results, on my living-room couch, turned out to be the most profound experience of my teaching career. I’m not sure how to describe it. In a basic way, I felt I was watching a new kind of creature being born, and also watching a generation come face to face with that birth: an encounter with something part sibling, part rival, part careless child-god, part mechanomorphic shadow—an alien familiar.
Professor Burnett goes on to describe, in vivid detail, the conversations his students had with AI about his course material. He quotes liberally from the discussions (not surprisingly, he has extremely thoughtful and articulate pupils) and marvels at both the sophistication of the AI’s responses and the ability of his students to push their thinking, using the chatbot’s strengths as leverage to advance their understanding of the subject.
Anyone who has had an experience like this with AI knows instinctively why these exchanges are so fascinating. After reading a piece like this, I fear more than ever that AI might become just another way to widen the education gap if students are simply left to their own devices, without any training or instruction in how to use these tools effectively. Thought-provoking AI assignments like this one are exceedingly rare for a whole host of reasons, and they cannot be done without real discussion and preparation ahead of time. Most of our teachers are not up to that task at the moment, even if they were willing to try (and many are not, on philosophical, pedagogical, or other grounds).
But students are already engaging with these powerful AI systems. How do they react when confronted with technology that can reproduce and, in many cases, surpass their own intellectual output? Burnett's experiment didn't just reveal the capabilities of AI; it raised troubling, essential questions for his students.
The Sublime Insight: Humans AND Machines
In the final quote I want to share, Burnett captures a classroom moment that reveals both the existential dread and the philosophical hope that AI presents:
When we gathered as a class in the wake of the A.I. assignment, hands flew up. One of the first came from Diego, a tall, curly-haired student—and, from what I’d made out in the course of the semester, socially lively on campus. “I guess I just felt more and more hopeless,” he said. “I cannot figure out what I am supposed to do with my life if these things can do anything I can do faster and with way more detail and knowledge.” He said he felt crushed.
Some heads nodded. But not all. Julia, a senior in the history department, jumped in. “Yeah, I know what you mean,” she began. “I had the same reaction—at first. But I kept thinking about what we read on Kant’s idea of the sublime, how it comes in two parts: first, you’re dwarfed by something vast and incomprehensible, and then you realize your mind can grasp that vastness. That your consciousness, your inner life, is infinite—and that makes you greater than what overwhelms you.”
She paused. “The A.I. is huge. A tsunami. But it’s not me. It can’t touch my me-ness. It doesn’t know what it is to be human, to be me.”
The room fell quiet. Her point hung in the air.
And it hangs still, for me. Because this is the right answer. This is the astonishing dialectical power of the moment.
I love the way Burnett winds down the piece. The rise of AI can be viewed as just another piece of existential dread heaped onto a generation already burdened with climate change, global instability, intense political polarization, and economic uncertainty. But how students - or anyone, for that matter - choose to deal with its implications is personal. What emerges in this classroom exchange is an almost philosophical reckoning: AI seems to threaten our sense of self and competence while at the same time highlighting the irreplaceable nature of human consciousness.
The recent proliferation of generative AI tools presents us with a bewildering paradox. They are terrifying, unprecedented, and profound. They portend massive change and upheaval. Yet they also offer us a mirror in which to see ourselves more clearly. As Julia's insight reminds us, they aren't human. They are merely tools to be used for good or for ill. We are the only ones in charge of how we want to respond to the moment.