The modern classroom is in crisis. Traditional assessments, especially those requiring written work, are undergoing a seismic shift, and teachers are at a crossroads. Recent research shows a significant uptick in student AI use, especially over the past 12 months.[1] I've watched technological changes come and go, but nothing has disrupted the fundamental teacher-student relationship quite like generative AI. What follows is an open letter to students.
An Open Letter
One of the most significant questions going forward in your academic career will be whether to use AI for your school work.
Some of you have already answered that question. You started using AI the moment you discovered it could handle virtually any homework assignment. Whether you submitted the AI response as your own or simply used it to clarify something you didn’t understand, you jumped on the AI bandwagon and have no intention of getting off. The ease with which AI can help you in school outweighs all other considerations. Whether or not it helped you learn the material is beside the point. AI is a lifesaver, the gift that keeps on giving.
Others of you have concluded that you want nothing to do with AI. Whether out of fear of being caught cheating or for more philosophical reasons, you’ve decided it is not for you. That’s a perfectly rational decision.
Most of you are somewhere in the middle. You've tried AI, were disappointed, and aren't inclined to use it again. Or perhaps you’ve experimented a bit and remain curious. Maybe you don't know much about it and are waiting for guidance from teachers. Dozens of reasons feed your uncertainty.
As teachers, we face the same situation. Some quickly embraced AI for its potential to assist with work-related tasks, while others were put off by its tendency to “hallucinate” answers or its turgid writing style. Some educators denounce it as an unwelcome technological overreach, citing the bias in its training data, its environmental impacts, and an overall distrust of corporate motives. Most are too busy to investigate further, and many find themselves primarily focused on detecting AI-generated work rather than exploring its possibilities. Despite some efforts by schools and teachers, the cheating conversation still dominates discussions about AI in schools.
Over the past 2 years I’ve spent considerable time exploring these tools, reading research, and reflecting on what it all means for education. Here’s what I think you should know.
What You Should Know About AI
AI isn't just ChatGPT. It's been around for years. You've encountered it in Netflix recommendations, Google Maps, and social media algorithms long before ChatGPT appeared in 2022.
What’s changed recently is the emergence of generative AI. These systems, powered by large language models (LLMs), can create seemingly original text, images, or sounds from simple prompts. This ability to create something from scratch in an instant is what’s causing such an upheaval in education.
The AI landscape is changing rapidly. New developments or AI model releases happen weekly, adding features, upgrading performance, and advancing our understanding of just how far these models may go. From significant leaps like advanced multi-modal capabilities to the recent introduction of Deep Research models, AI technology evolves constantly.
Nothing at the moment suggests the pace of change will slow down anytime soon. If your first thought about AI is based on the introduction of ChatGPT in 2022, you need to understand how much AI has improved and how it has penetrated just about every aspect of knowledge work.
You also know by now that your peers are using AI. A lot. That's clear whether or not you are one of those users. By some estimates, a large share of ChatGPT's weekly users are students. And the assignments teachers historically require are precisely the kind of tasks generative AI excels at.
Why Writing Matters
Teachers ask students to write to understand their thoughts, probe their knowledge, assess their understanding, and offer feedback on their communication skills. We live in a forest of language. Writing is the best way to exchange that information. Unlike presentations or discussions, the written word allows you to reflect and refine your ideas independently. Not everyone is comfortable speaking, either in front of a class or face-to-face with a teacher. Written work serves a double purpose - it lets students showcase their knowledge and lets teachers evaluate understanding and skill in a two-way process that requires thoughtful reflection.
As long as education involves measuring student progress - either through traditional grading or more process-oriented feedback - writing will have a significant role in schools. Student submission of written work and teacher evaluation of that work involves trust. Each side agrees to abide by its parameters.
The AI Dilemma
The introduction of generative AI models has fundamentally broken this transaction. Of course, cheating isn't new. Shortcuts have always existed, from research paper mills to copying friends' work or the textbook. But generative AI has upended the situation.
Why? First, you can now produce plausible and effective prose responses to any question regardless of subject matter or style. There’s no need to outsource. Effective cheating can happen within the comfort of your room in seconds. Need to finish econ questions for class? No problem. Write a short reflection on the reading? Simple. Come up with thesis statements, analyze a passage, do physics problems, or write a research report? Done. Instantly.
The quality of the response depends on the user’s skill and ingenuity, but it’s almost always good enough to earn a passing grade. And it will only get better and easier to use.
But teachers aren’t stupid. If you’re a B student and an average writer, submitting a grammatically perfect piece with detailed examples from the reading, arranged in an ideally structured format, will probably raise questions.
The problem? No two AI outputs are the same. Despite what you read or what schools and teachers tell you, there’s no reliable way to determine whether a piece of writing came from AI. Savvy students run their work through multiple tools and proofread the final version, adding their own thoughts and ideas; some even edit the submitted draft. What do teachers do with these?
You should also know that there is a raging debate within education itself about AI’s impact on student writing. Without getting into the weeds, two opposing camps have emerged: those who believe AI is the worst thing ever to happen to the writing process and have an almost visceral reaction to any suggestion that it might benefit students, and those who see the writing on the wall (sorry!) - AI is the future, and we had better figure out how to teach students about it as soon as possible. And, of course, most educators fall somewhere in the middle.
The Gray Areas of AI Use
What percentage of a piece of writing must come from AI to count as “cheating”? 5%? 20%? 50% or more? Reasonable people disagree, and some teachers try to clarify these rules in their syllabi, but it’s a moving target. Students must understand the dilemma facing educators who confront obvious AI work. How do they know what you know? How can they fairly give you a grade if they suspect the writing, and the thoughts behind it, were produced by AI?
The spectrum of AI use ranges from obvious cheating to more thoughtful integration. Consider these scenarios:
At one extreme, there's the student who generates an entire piece of writing with AI, barely skimming it before submission. There's no meaningful learning happening here. Both teacher and student recognize this as cheating, though the student might believe they have "plausible deniability." This case seems straightforward, but enforcement remains challenging, especially if a student denies AI use even in the face of overwhelming circumstantial evidence.
Then there are the edge cases. Imagine a conscientious student who writes a paper entirely on her own, developing ideas, crafting arguments, and carefully proofreading. For a final polish, she uses Grammarly Pro (which incorporates AI) to check grammar, style, and clarity. The tool suggests some punctuation fixes, highlights passive voice, recommends word choices, and even offers to rephrase a few awkward sentences. The final paper remains fundamentally her work – her ideas, her voice – but contains AI-assisted improvements. She discloses the use of Grammarly in her submission. Is this cheating?
What if several of these AI-suggested phrases significantly improved the quality of the writing? What if the tool helped structure her conclusion more effectively? Does the specificity of the class AI policy matter? Where do we draw the line?
And these are the easier cases. What about the vast middle ground? Using AI to refine a thesis statement, employing research tools like NotebookLM to identify relevant quotes, asking AI to generate counter-arguments to strengthen your position, or using it to locate specific information in lengthy texts to prepare for a class discussion? What about using AI to conduct research? New AI Deep Research models make it incredibly easy to find and analyze sources by searching the web and other accessible databases. All of these use cases blur the line between assistance and thought replacement, and none of them can be reliably detected by teachers.
We’re in a world full of complex questions without straightforward answers.
Finding a Way Forward
If writing is a process and you use AI to assist while maintaining control of the final product, applying your own judgment and critical thinking, isn't this the ideal? But there is a huge paradox here: using AI effectively as a writing assistant requires already possessing the very skills you are supposed to be developing. How can you know which AI suggestions improve your writing and which ones dilute your ideas if you haven't yet mastered writing yourself? This is the legitimate fear many AI skeptics have - that you will bypass the essential developmental stages of becoming a competent writer by leaning on AI before you’ve developed your own critical faculties. It’s a maddening cycle.
Teachers worry that we risk creating a generation of writers who can (maybe?) proofread AI output but can't generate substantive thoughts independently. The skills of drafting, revising, and refining one's thinking through writing are foundational to intellectual development. If AI assistance becomes a crutch rather than a tool, we may produce students who appear competent on paper but lack the underlying cognitive processes that writing has traditionally developed.
Yet at the same time, we cannot ignore the reality awaiting you after graduation. Adults are using AI in the “real world” and many of you will be expected to use it in the workplace. If you can't write, ask the right questions, prompt AI effectively, and use critical thinking to challenge output (even as it improves), then the whole project is pointless. That's what the AI optimists are saying.
So, should we carve out a no-AI space for student writing? If so, in what ways and for what kinds of assignments? When you do use it, should we punish you or explain how over-reliance on AI will damage your thought process and learning, and prevent you from acquiring key skills? Or some combination? Or do we redesign our assignments so that using and analyzing AI output deepens your understanding of your own writing? Can we as students and educators recognize this as a pivotal moment and show the other side empathy, especially given that none of us asked for AI or received any instructions for how to use it?
A Final Thought
I’ve been a teacher for 30 years. I love watching students develop confidence and I would hate to see AI undermine the confidence that comes from rising to a challenge, completing work independently, and taking pride in a final product worthy of all your effort.
As thinkers and writers, what concerns me most isn't whether you use AI. It's whether you're developing the critical thinking skills that no machine can replace. If that happens, we will all be able to leverage the strengths of AI to produce our best work combining the unique contributions of both.
My advice: Be honest. With yourself and your teachers. Ask lots of questions. Are these tools helping you think better, or are they thinking for you? The difference is everything.
I don't have all the answers. Neither do my colleagues, your parents, or the tech companies themselves. We're all navigating this new world together. But I do know this: your ability to think independently, critically evaluate information, and express your unique perspective remains more valuable than ever. Don’t surrender it to an algorithm.
[1] Data about student use of AI can be hard to sort out, and much of it is out of date. My anecdotal sense is that it has shot up considerably over the past year - Deep Research reports produced by both Perplexity and Gemini support my suspicions.