Will AI Drive the Classroom Now?
How to Make Sure We're Designing For Students and Not the Tech Driving the Hype
This fall, a growing number of teachers are reconfiguring their curricula with an eye toward being AI-aware. They are anticipating its misuse, building assignments to accommodate it, or centering instruction on how to use it. It’s an admirable and overdue instinct. But before we rewire our classrooms entirely, we need to return to something much more fundamental: What do students actually need to learn?
First Principles and Key Questions
What skills and habits do students actually need to thrive in the world they’re entering, not the ones teachers grew up in?
Are those still the same core competencies we’ve always taught, or is something fundamentally different now?
If students will live in a world where AI can do many of the things humans can do, what should they still know how to do themselves?
Should we bring AI into the classroom to reflect that reality, or resist it to preserve something essential?
If we do bring AI in, what exactly are we teaching? Literacy, meaning how it works and why it matters? Or fluency, meaning how to use it effectively? Or both?
Who is responsible for teaching that? Should it be woven across subjects or given its own course?
Are we trying to help students produce better work, or help them become better thinkers?
And the hardest question: can we still ask students to learn the "old way" while also expecting them to master new tools?
I’ve been wrestling with some version of these questions for more than two years, and I’m still not sure I’m asking all the right ones.
AI-Aware ≠ AI-Centered
Teachers need to be AI-aware. That means familiarity with the tools, knowing how students are using them, and keeping abreast of how fast things are changing. But there’s a difference between being aware of AI and centering it. That distinction is getting lost, either through administrative fiat or reactionary overcorrections.
We risk mistaking the tool for the goal.
The push to integrate AI into classrooms is understandable. But when we design courses around it, we let it drive the pedagogy, shape the assignments, and define the outcomes. It pulls our attention away from the real work: helping students think more clearly, grapple with ambiguity, and wrestle with ideas that matter.
The current edtech hype makes it easy to forget that most of what happens in a great classroom has nothing to do with AI.
The Real Tension
One reason so much of the current AI conversation is framed around cheating is that the system has always emphasized product over process. For decades, students have known that the paper, the project, and the test almost always determine the final grade. Feedback, if it comes at all, often shows up only after the fact.
This creates the perfect environment for AI to disrupt. The system rewards a superior work product over curiosity, messiness, and struggle, the very things that drive real learning. We love to think students share our ideals about education, but when the outcome matters more than how it’s achieved, why wouldn’t they take the most efficient shortcut?
AI at the Center: Three Experiments in Redesign
Educators across the country are experimenting with how to teach in the age of AI. It seems as if almost every school and every teacher has a different approach. There is no such thing as “best practices” at this stage, even among those ahead of the curve. Integration itself is a fraught term,1 but there’s a difference between integrating a new tool and redesigning your entire pedagogy around it.
Here are three examples of how teachers and schools are, intentionally or not, putting AI at the heart of the classroom. Each approach raises important questions. What does it mean when we ask students to “sound human” while also requiring them to use machines? Can we honestly offer AI “choice” from a position of ambivalence or opposition? And what are the implications when an entire school redesigns itself around the assumption that speed, optimization, and personalization are the very essence of learning?
AI as The Default
One recent back-to-school post from a community college professor offers a confident and comprehensive vision for teaching with AI that is grounded in two years of classroom experience. It’s a clear attempt to move past panic and toward structure, scaffolding student use of AI with ethics, transparency, and direction. But it also raises questions about what happens when the use of AI shifts from being permitted to being expected.
In her description of her grading criteria, one line in particular caught my attention, especially because it redefines what “good writing” means in this classroom:
At this point, we expect our students’ papers to be flawless in spelling, grammar, and mechanics. So, we took that off of our rubric. We substituted the following:
They followed all the directions. Everything was correctly done and correctly formatted.
They paid attention to voice, tone, and audience awareness.
They had a transparency statement and metacognitive reflections discussing their process.
Everything was factual and backed up with real evidence that actually exists.
Their work showed excellent use of AI tools such as image generators, tutors, and editors.
It’s clear from her post that the assignments are thoughtfully scaffolded. The metacognitive component she introduced - what she calls “reflective footnotes” - is genuinely smart. But the overall rubric gave me pause.
If every paper is expected to be polished and mechanically flawless, then, unless I’m misreading, isn’t the tool no longer optional, but mandatory? And is this for every assignment?
And what exactly counts as “excellent use of AI tools”? Is it about formatting? Following directions? Producing clean, grammatically perfect output? Without specifics, it’s hard to tell.
Just before outlining her rubric, she identifies five “skills” students need to survive in the Age of AI: follow directions, sound human, transparency and ethics, check facts, and be efficient.
It’s a revealing list, and one that echoes a familiar trap. When efficiency takes precedence over curiosity or critical thinking, and “sounding human” becomes a measurable outcome, the focus shifts from writing as process to writing as product.
Her approach shows how even the most thoughtful, well-intentioned educators can stumble into unintended consequences. The line between teaching students how to use AI and ultimately requiring it is blurrier than many realize, and it underscores just how difficult this is going to be.
This rubric definitely documents the writing process, but it still seems to treat it as something to optimize rather than struggle through.
AI Tracks
A similar experiment that raised questions for me comes from another college professor piloting AI Tracks.
In her course, students choose either an AI-Free or an AI-Friendly track, which requires them to reflect on their use of AI if and when they engage with it.
What stood out was her decision to move forward with the model even after another instructor tested it last spring and found it didn’t work as planned.
That teacher admitted she had assumed students would use AI the way instructors often imagine: through deliberate prompts and structured dialogue on platforms like ChatGPT, resulting in text that can be cited, paraphrased, or cleanly incorporated into their work.
But as she put it, “the reality is much more slippery.”
That seems to be the rub. Many instructors, even when designing with care, are still assuming idealized visions of student behavior. Actual student AI use often defies our expectations. Unless our policies drill down to reflect that reality, which may mean actually demonstrating what we’re seeing and talking about, we’re setting ourselves up for disappointment and confusion.
Despite that cautionary tale, the professor who designed AI Tracks is moving forward with it this fall. Her rationale is thoughtful, even admirable in its intent to give students agency. She writes:
They should know these things so that they can make their own choices about when, where, and how they allow AI into their lives … (I, personally, refuse them, but I have my own reasons.) I’m just saying students should have the knowledge to be able to make their own decisions … (emphasis added)
But here’s the tension: she’s asking students to navigate AI wisely and ethically while making it clear that she’s opted out of it herself. It’s a curious move, and one that makes me wonder whether the model can succeed without deeper familiarity with how these tools actually function in student hands.
Even the title, AI Tracks, subtly shifts the frame. Instead of naming the model around inquiry or skill-building, it puts the tool front and center. It risks becoming the lens through which everything else gets filtered.
An Entirely Different Model
The most radically AI-centered educational experiment I’ve come across is Alpha School, which was just profiled in the NY Times and championed by co-founder and influencer Mackenzie Price (whose Notes may have floated across your Substack feed).
Alpha is doing more than just integrating AI into the classroom. It’s reconstructing the entire learning experience around the assumption that AI can individualize and accelerate learning better than any teacher.
Here’s the thumbnail description from the Times:
At Alpha’s flagship, students spend a total of just two hours a day on subjects like reading and math, using A.I.-driven software. The remaining hours rely on A.I. and an adult “guide,” not a teacher, to help students develop practical skills in areas such as entrepreneurship, public speaking and financial literacy.
While the Times profile paints a sleek, Silicon Valley-style portrait of disruption, a much longer and more detailed essay complicates that picture and offers deeper insight into the school's philosophy, pedagogy, and goals well beyond just AI.2
Still, Alpha pushes AI to one logical extreme. For critics, it’s the nightmare scenario: kids glued to screens, teachers sidelined or replaced, and education reduced to speed and output.
For supporters, it’s a bold experiment in personalized learning and one that might surface new possibilities for mastery, motivation, and student agency.
Either way, it reframes the conversation entirely. Alpha School is about AI reengineering the premise of school itself.
Final Thoughts
What these three approaches all show is that we’re in the middle of a live experiment. Some educators will continue to opt out of AI entirely. Others will try to build thoughtful guardrails to guide student use. A daring few will rebuild their entire courses or institutions around the tech.
None of these choices are definitively right or wrong. What matters most is that they’re being made deliberately, transparently, and with students in mind.
The pace of AI development isn’t going to slow down, and neither are the pressures on classrooms. That doesn’t mean we have to follow the technology’s lead. The best teachers will keep asking the harder questions: What do students really need? What kind of thinking should school reward? And how do we design for learning, not just adaptation?
There’s no one-size-fits-all answer. But the critical principle worth defending is that being AI-aware doesn’t mean letting it drive the conversation.
1. As I’ve written previously, the term “integrate” itself is inelegant, with one observer noting that, if we simply choose to integrate existing practices, “AI will accelerate, automate, and scale traditional, broken, methods of instruction.”
2. In the piece, an early Alpha School parent goes into granular detail about his decision to choose Alpha, including his investigation of one of Alpha’s main claims: that students progress through the core curriculum at roughly twice the speed of their peers. If true, it’s a compelling data point. But whether this model is revolutionary or just another example of hyped edtech optimized for efficiency over depth remains an open question.