When AI Idealism Meets Teenage Reality
How one student's account reveals the fundamental flaw in some AI policy frameworks
Schools across the country may learn this fall what many already suspect: many current AI integration policies, no matter how thoughtfully designed, can't resolve the fundamental tension between giving students access to AI and expecting them not to use it in ways we never intended. While outright AI bans are impractical, the prevailing "selective use" policies may not turn out to be the solution. A recent piece by California high school student William Liang makes this Faustian bargain clear. It's impossible to read it without a sense of helplessness for teachers in the face of AI. What he describes reveals that many AI policies are built on idealism rather than the reality of how teenagers actually behave. AI has merely exposed what has been apparent all along: most schoolwork is basically performative in nature and rarely asks for the kind of true "critical thinking" skills educators claim need to be preserved in the face of rampant AI use.
It’s always a mistake to read too much into a single story and a single author’s experience. While one student's account can't capture the full complexity of AI use across all schools, Liang's observations align with what many classroom teachers are seeing: many students are likely using AI in exactly the manner he describes.
This is not just another rant about student cheating in schools. As teachers across the country wind down their year, I just want to make three quick points about the article's implications, which may help inform how teachers think about the AI issue over the summer.
Many Students Have Zero Guilt About Cheating
A message that comes through loud and clear throughout the piece is how much of their time many high school students feel most assignments waste. With unabashed bluntness, Liang reveals that, when given a worksheet with questions about the permissibility of using AI on homework, “most of us just used ChatGPT to finish the worksheet. Then we moved on to other things.”
Many recent articles this spring have channeled the cheating narrative. Whether you buy that or not, I suspect many students pay little attention to the kinds of graduated AI policies some schools are trying to impose. Some of these policies seem to want to have it both ways: allow students access to the tools, but expect them to use those tools only in the ways teachers have decided are appropriate. It’s unclear how well that approach will work in practice.
Of course, schools can't actually prevent students from using AI given that they have it at home. But when we discuss AI use at home or 'give' them access to it in school, the critical question is how to communicate our expectations clearly, effectively and transparently.
The Optimism Trap in AI Policy Design
More permissive AI policies assume students will self-regulate in ways that contradict everything we know about adolescent behavior.
They share a common flaw: they assume students are self-directed learners who will use technology responsibly if given proper guidance. Liang's school policy (and many like it) exemplifies this wishful thinking. Students are allowed to use ChatGPT to "brainstorm, organize, and even generate ideas" but not to write actual content. The distinction sounds reasonable in theory but often proves meaningless in practice.
This optimistic framework appears in district after district. Schools are desperately trying to find the sweet spot. How do you let students experiment with AI while preventing them from using it to bypass learning entirely?
The assumption that students can distinguish among the various stages of AI assistance is flawed; it runs up against the stark reality of how students actually approach and use technology. We’ve learned the hard way that trying to limit teenage cell phone use is a fool’s errand. That’s why so many states are moving to outright bans. I’ve had conversations with students who are desperate to be separated from their phones and who told me a ban is the only way they can do it. They are literally addicted.
Liang’s account shows how AI may fall into the same category.
Anyone who has worked with teenagers knows that they will test boundaries and blow past guardrails as soon as they are put up. Yes, cheating has existed since teachers first required homework, but generative AI represents a completely new ability for students to bypass struggle and produce text that satisfies the basic requirements of a mundane assignment.
Additionally, most pre-AI forms of “cheating” were never seen as having any intrinsic educational value. AI is different: many educators feel there may be ways to leverage its capabilities to foster learning outcomes. Again, Liang’s account pokes holes in the hope that students will either grasp or follow school AI policies.
Why Most Integration Policies Miss the Mark
The prevailing approach to bringing AI into the classroom revolves around "integration." If we incorporate AI tools into traditional assignments, the thinking goes, it will enhance learning. This theory, however well-intentioned, fundamentally misunderstands technology adoption patterns, institutional constraints, and the teenage brain.
Justin Reich’s research reminds us of a crucial truth:
In the history of education technology, three principles are useful for understanding new tools. First, teachers and students typically use new technologies to replicate existing practices. [Emphasis Added]
Reich, Justin, and Jesse Dukes. Toward a New Theory of Arrival Technologies: The Case of ChatGPT and the Future of Education Technology after Adoption. MIT, 2024.
Rather than transforming instruction, new tools get absorbed into existing frameworks, often reinforcing rather than challenging long-standing educational weaknesses.
Virtually all AI integration policies fail because they're trying to thread an impossible needle: to give students access to AI automation while expecting them not to automate. This ignores how teenagers behave, how institutional inertia shapes technology adoption, and the fact that we don't actually know yet whether AI helps students learn.
Earlier this year I took part in RAIL (Responsible AI in Learning), which preferred the term reimagination to integration. A key quote shared in one of the presentations:
AI will accelerate, automate, and scale traditional, broken, methods of instruction.
Dr. Philippa Hardman, University of Cambridge
The program’s central argument was that integration without fundamental pedagogical change may simply make existing problems worse.
A Potential Solution? Shifting from Integration to Redesign
Liang's most provocative observation cuts to the heart of the issue: "If an AI can do an assignment in five seconds, it was probably never a good assignment in the first place."
Instead of writing more elaborate rules about when students can and can't use AI, we need to ask harder questions about what we're actually trying to teach.
The RAIL Framework suggests that transformative change requires more than simply using AI to enhance current methods. It demands institutional and pedagogical reimagining. Schools that succeed with AI won't be those with the most sophisticated usage policies, but those willing to redesign learning experiences around what students actually need to develop: critical thinking, problem-solving, and the kind of deep engagement that either can't be automated or is designed explicitly to take full advantage of how AI works and what it can do.
Even Liang himself crucially argues that, “[g]enerative AI should be treated as a useful aid after mastery, not a replacement for learning.”
The uncomfortable truth is that most AI integration policies reflect wishful thinking rather than educational reality. More fruitful AI adaptation will come from better assignments, not better rules.
“Selective AI use” may simply be an impossible ask for teenagers.
Better to establish clear AI-free spaces for skill development alongside activities designed to take full advantage of AI capabilities.
Some educators are already moving in this direction. They're focusing on process over product and accepting the reality that any work done outside the classroom may have to be assumed to be AI-assisted.
I don’t know the answer yet. But as high school teachers take a well-earned break from what has likely been a rough school year, they might want to reflect on precisely what their learning goals are with assignments that can now be completed almost instantly by AI.