As long as an assignment involves gathering information, turning it into sentences, and presenting it for evaluation, AI can replace human work. You can make it harder. You can add guardrails. But if a student can do it on a laptop with Wi-Fi, so can an LLM. Reducing student reliance on AI will take more than rules. It demands serious imagination, clarity, and reinvention from educators.
The dirty little secret is this: there’s no such thing as an AI-proof assignment.
For the same reason generative AI has upended how we create and share content, any work that asks students to 1) access information (text, images, graphs), 2) process that information into something tangible (writing, pictures, code), and 3) submit it for evaluation (essays, reports, presentations) is vulnerable. AI can subvert that entire process.
Why AI-Proofing Is a Fool’s Errand
In 2025, any teacher assigning work to be completed outside of class (or even inside it) should assume that AI assistance will be the norm, unless they can persuade students otherwise.
Teachers are seeing AI show up in personal reflections, discussion posts, and even basic worksheets. Reading isn’t safe either. Tools like Google’s NotebookLM, useful in the right hands, are being used to avoid complex texts by distilling them into bullet points and superficial summaries. Students are prepping for presentations, discussions, and nearly every academic task with AI support.
Going forward, educators will need to be far more intentional about what assignments are actually for and what skills they’re meant to develop.
The Limits of Policy Alone
Rules still matter, and we can hope students follow them. But many don’t adhere to nuanced AI-use policies tied to assignments they don’t respect in the first place.
(I explored this mismatch between policy expectations and student motivation in an earlier post.)
No AI policy is bulletproof, but we still need them, even if they only serve to start the conversation and set classroom norms.
Even thoughtful policies like this one, which tries to draw lines between brainstorming and writing, can fall flat. Most students don’t parse those distinctions the way we hope they will. And if you haven’t spent real time using these tools yourself, it’s hard to know where the lines actually are. It's even harder to enforce them with any confidence.
If we’re serious about persuading students why doing the hard work first still matters, we need to use both the carrot and the stick.
The Stick: Resetting the Classroom Dynamic
The Baseline Writing Strategy
If you’re fed up with getting AI-written work, here’s a better opening move than simply issuing warnings and handing out policies.
In your first two classes this fall, try this:
1. Have students freewrite for 30 minutes in a blue book. Give a simple, low-stakes prompt. Tell them it’s not graded; you just want to see how they write. Talk about the course, go over materials, review the syllabus. No mention of AI.
2. Scan each piece and transcribe the writing using AI. Let the machine do the heavy lifting. Create a document titled “Student Name – Baseline Writing Sample.”
3. Run each sample through a custom GPT.1 Use it to analyze voice, complexity, and clarity. Create a second doc with the GPT’s observations. House both in a student-specific folder. (If you want to automate steps 2 and 3, see the sketch after this list.)
4. Return both documents before the second class. Ask students to read the feedback. Then explain your AI policy.
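For teachers comfortable with a bit of scripting, steps 2 and 3 can be batched. What follows is a minimal sketch, assuming the OpenAI Python SDK; the model name, prompts, folder layout, and file names are all my own placeholders rather than a prescribed setup, and you should check your school’s privacy rules before uploading student work to any outside service.

```python
# Minimal sketch of steps 2 and 3: transcribe scanned blue books,
# then generate a baseline writing analysis for each student.
# Assumes the OpenAI Python SDK (pip install openai) and an
# OPENAI_API_KEY in the environment. All file/folder names and
# prompts below are hypothetical placeholders.
import base64
from pathlib import Path

from openai import OpenAI

client = OpenAI()

ANALYSIS_PROMPT = (
    "Describe this student's writing voice, sentence complexity, and "
    "clarity. Note distinctive habits a teacher could recognize later."
)

def transcribe_bluebook(image_path: Path) -> str:
    """Transcribe one scanned, handwritten blue-book page."""
    b64 = base64.b64encode(image_path.read_bytes()).decode()
    resp = client.chat.completions.create(
        model="gpt-4o",  # any vision-capable model works here
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Transcribe this handwritten page verbatim."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
            ],
        }],
    )
    return resp.choices[0].message.content

def analyze_sample(sample: str) -> str:
    """Produce the observations document for one baseline sample."""
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": ANALYSIS_PROMPT},
            {"role": "user", "content": sample},
        ],
    )
    return resp.choices[0].message.content

# Hypothetical layout: one scanned JPEG per student in ./scans.
for scan in sorted(Path("scans").glob("*.jpg")):
    student = scan.stem  # e.g. "jane_doe"
    folder = Path("students") / student
    folder.mkdir(parents=True, exist_ok=True)
    sample = transcribe_bluebook(scan)
    (folder / "Baseline Writing Sample.txt").write_text(sample)
    (folder / "Writing Analysis.txt").write_text(analyze_sample(sample))
```

Whether you script it or do it by hand, the output is the same: two documents per student, filed and ready to hand back before the second class.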
You’ve flipped the script.
This strategy instantly reinforces your authority and demonstrates real fluency with the very tools students are trying to game.
It’s clarity over surveillance. You’ve quietly set a benchmark, shown your expertise with AI, and sent the message, without saying it outright:
“If you use AI to write your work, I’ll know.”
It’s not foolproof. But if this approach became common practice, AI misuse in classrooms could drop overnight.
Be careful. If you aren’t thoughtful about your tone and intent, the whole thing can feel like a trap. Instead, be transparent. Let them know you care about their growth as writers. That honesty lays the groundwork for trust and gives you a powerful foundation for real conversations when AI suspicion arises.
Getting caught is still the single biggest disincentive for students to use AI, and you’ve made it crystal clear that you expect students to do their own work.2
The Carrot: Clarity and Credibility
If you want to build real credibility with your students, start by redesigning your assignments with purpose. Not just because of AI, but because so many assignments weren’t that purposeful to begin with.
After thirty years in the classroom, I share the same observations as many educators I read and talk to regularly: students are more distracted, less confident readers, and more reliant on shortcuts than ever before. They’re not “worse.” They’re just living in a radically different world shaped by phones, fractured attention, algorithmic everything, and a global pandemic that rewired their entire approach to school.
So it’s no wonder that, handed a magical tool that completes all their work effortlessly, they are tempted to use it. It’s a perfect storm. I wrote more about that shock in “AI is the Technology Schools Never Asked For.”
But that’s the moment we’re in. To be clear, the carrot isn’t permissiveness. It’s inviting students to engage with you in a learning process that acknowledges AI without surrendering to it.
The Fork in the Road
At some point, we need to stop just reacting and start actively deciding. Educators can either treat AI as a passing phase or begin redesigning their work for the world we actually live in.
Consider assigning at least one AI-permitted activity each semester. It doesn’t have to be complicated. Let students use AI to analyze an image, build an outline for a presentation, or generate study questions. Anything low-stakes where the goal is simply to observe how they use the tool. Then review what comes in, debrief, and listen.
(I wrote about one of these low-stakes activities and how students responded in My Experiment with Guided Reading.)
Here’s what may shock you: many of them won’t use it.
Above all, be crystal clear about which assignments you consider sacrosanct. These are the ones you want kept as far from AI as possible. Many teachers are bringing back blue books and oral exams, or counting only in-class essays toward final grades. That may work for a while, but it is not sustainable long term unless we intend to air-gap our classrooms.
What you don’t want to do is this:
Barry Lam teaches in the philosophy department at the University of California, Riverside, … “Now students are able to generate in thirty seconds what used to take me a week,” he said. He compared education to carpentry, one of his many hobbies. Could you skip to using power tools without learning how to saw by hand? If students were learning things faster, then it stood to reason that Lam could assign them “something very hard.” He wanted to test this theory, so for final exams he gave his undergraduates a Ph.D.-level question involving denotative language and the German logician Gottlob Frege which was, frankly, beyond me.
“They fucking failed it miserably,” he said. He adjusted his grading curve accordingly.
Hsu, Hua. “What Happens After A.I. Destroys College Writing?” The New Yorker, June 30, 2025.
Lam’s frustration, while understandable, wasn’t good pedagogy: it was punishment.
As Hsu writes, the prompt was “frankly, beyond me.” The result? A grading curve, a loss of trust, and no real insight.
Leaning Into AI?
Many innovative educators are doing remarkable work experimenting with student writing and integrating things AI does well. They are using it to iterate ideas, provide feedback on drafts, and experiment with alternative assessment practices.
Three thoughtful practitioners I read regularly, Mike Kentz, Nick Potkalitsky, and Terry Underwood, have spent time using AI with students and are extremely deliberate, experienced, and knowledgeable about how they go about it.
I have also dabbled in bringing AI into some of my classroom practices, for very specific purposes and, at least in theory, under close supervision. One thing I have learned is that no matter how carefully you design a lesson or activity, it can quickly morph into something you did not expect.
What Comes Next
But that creativity and experimentation are among the reasons I love teaching, and AI offers some exciting opportunities if you shift your mindset from “resistance” to “awareness.”
I understand why many teachers feel both reluctant and unprepared to bring (or even tolerate) AI-aided work into their learning process. I get that people feel incredibly strongly about protecting space for struggle, failure, and practice, all of which turning to AI short-circuits. I agree.
But I also feel strongly that taking back your classroom from AI means not only facing the reality of what’s happening, but also showing your students you understand how it’s challenging the very model of education.
Educators adapted to calculators, computers, and the internet. We’re going to adapt to AI as well. The question is how quickly and how well. We can’t outlaw it. We can’t ignore it. The winning strategy is reinvention: not of content, but of what it means to assess understanding in the first place.
1. I created one called Writing Fingerprint Analysis in about 15 minutes. Feel free to design your own. The prompt’s instructions embedded in the GPT can be found here. And here is a sample “writing analysis” for an anonymous piece of student writing from over a decade ago.
2. A growing trend in AI detection involves process-tracking software that logs keystrokes or tracks the full edit history of student writing. While I understand the impulse, I’m not a fan. Not just for privacy reasons (though it’s incredibly invasive), but because it signals a loss of pedagogical trust. Nick Potkalitsky offers a thorough critique of these tools that’s worth reading even if you’re inclined to use them. Personally, I think establishing a baseline writing sample is a more transparent, sustainable, and sane way to assess authenticity. And frankly, it’s far less soul-crushing than turning every assignment into a surveillance operation.
Thanks! I've been following the back-and-forth discussion about student use of AI for the past two years, and, candidly, it's still unclear to me where education as a whole is going to land. (Not larger society; I have one foot in the business/tech universe, and many there can't even believe we're still having this debate. They've moved on.) But I do think the gap between trying to keep students away from AI and understanding this technology in real time together is going to have to be bridged at some point.

The assessment piece is driving the conversation more than anyone is willing to admit. Where else but in high school and college do you get "graded" on your written work with a single letter? In real life, your writing is "graded" on whether you can persuade, inform, startle, surprise, or otherwise make an impact beyond yourself with words. That requires high-quality ideas and clear communication. If AI can help you achieve those two things, it seems odd to me that it should be devalued. Your "grade" at work depends more on your overall performance, which includes all the soft skills AI cannot replicate: empathy, collegiality, integrity, perseverance, respect, and on and on.

But I appreciate the support and will check out your resources. Keep on keeping on.
Stephen, I don’t understand the purpose of your “blue book” assignment. Showing students that you “know about AI”? I think a lot of students would also be very upset that you, the teacher, used AI to evaluate them. I’m not sure I’m following what the purpose of this assignment is, exactly. I’m searching for how to deal with AI in my college English writing classes, and I would like to know more about your thinking here.