AI Conversations Behind Closed Doors
Student Focus Groups Reveal What's Actually Happening
As those of us in education wind down the semester, it’s hard not to be pessimistic about what the last twelve months have brought with respect to AI in schools. Though the tools keep improving, success stories are few and far between. Technological progress continues to outpace our ability to make productive use of it, at least in the classroom. There is still an enormous gap between the promise of AI for student learning and the reality.
To satisfy my own curiosity, over the past few weeks I spoke with a handful of students on three separate occasions to find out what’s really going on. The ground rules were simple: complete honesty, no names, general observations over specific ones, and no reporting to anyone. I wanted to test my own suspicions: What are high school students actually doing and feeling right now? How and under what circumstances are they using AI? What would they recommend schools and teachers do about it? I wanted the truth, not manufactured and canned student responses. Fortunately, I got some volunteers willing to be candid.1
Earlier student testimonials from William Liang back in June and Sam Barber in August echoed many of the same kinds of observations shared by these students.
What I heard complicated any hopes I’ve had for how to deal with AI going forward.
A Caveat
My sample size is obviously small. I was able to speak with small groups of students from several different schools, but drawing broad conclusions from a few conversations doesn’t meet any standard of replicable research. Nevertheless, as someone who has been reading and writing about AI and education for several years, I found that much of what I heard resonated with what I’ve observed directly, matched what I’ve heard from colleagues working in other schools, and lined up with my own conclusions about how students (not all, but many) have been absorbed into an AI-saturated academic world.
The 95% Figure
When I asked what percentage of students use AI for schoolwork at least once or twice a week, the answer was immediate across multiple conversations.
95 percent. Yeah. 95 percent.
Whether this figure is precisely accurate is almost beside the point. Every survey since the spring of 2025 shows student usage trending upward. I can’t envision a scenario where that slows down. I think teachers are kidding themselves if they think the majority of students aren’t continuing to use AI in all sorts of ways - many of which we fear, but also for things where we might have a more open mind.
I followed up: How many use it even more frequently - three days a week or more?
60, 70 percent.
Another student pushed back:
I’d say a little bit higher than that because most people who have started dabbling in AI - they’re accustomed to how the AI is so helpful to them, and if they’re ever debating about getting their answer somewhere else, they know from past experience AI is here, it’s easy to use, why am I going through all this trouble?
This should not be news to anyone following the AI conversation, and it tracks with everything I’ve observed but secretly hoped wasn’t the case. AI use is now the baseline. Sure, there are likely some outliers, and I wish I had found a broader cross-section of students to interview, but everything shared with me in these conversations rang true.
The Logic of Using AI
One thing that struck me was how differently students thought about their AI use.
One student was blunt about the calculus of using AI:
“I don’t really draw the line according to school policies because I know they won’t catch me.”
Later he revealed:
Student: So there are two categories. If you’re using it in the same ways as let’s say me, but you just don’t use it as frequently, then I think that’s your fault. You can use it as much as you want, at least the free one. Use it to the limit and nobody’s stopping you. But—
Interviewer: So you’re blaming the people who aren’t using it. You’re saying they should be using it more.
Student: They could do it. They’re actively putting themselves at a disadvantage by not trying to use this new tool that could help.
I’ve written previously about the challenges and pressure on the students “who don’t cheat.” While this is coming from a student on the other side of the fence, his observation that students not using AI are “putting themselves at a disadvantage” is telling. I doubt that view is uncommon.
Another student was more reflective:
I draw the line when I know that if I use AI in this way, it will hurt my ability to actually learn and interpret what I’m doing. Because the moment I do that, what’s the point of me even going to school?
I suspect many high school students simply ask the first question when determining the calculus of AI use: Will I get caught?
It’s heartening to know others are weighing the more critical question: Will I learn?
Both are operating rationally within a decision-making framework, but only one produces the outcome educators want. It’s clear that neither approach has much to do with school policy. The first student has already concluded that AI policies are rarely an effective deterrent - the odds of getting caught are so low that it’s worth the risk. The second has internalized reasons that have nothing to do with the actual rules.
What both positions lack is any semblance of adult guidance. Students are primarily making these choices on their own.
Who Is Using AI?
One student explained a pattern he’d observed among peers:
The smarter students with AI have a certain line that they would not cross because they know this could really harm them in the long run. And I think that should be made aware to all the students because the weaker students have a perception that when they use AI, all they’re thinking about is getting the answer, not having to do the work, and that’s it.
This framing revealed something important. “Stronger students” weren’t using AI less, just more strategically. When I pressed on this, another student pointed out:
They’re [stronger students] using it in ways to enhance their own learning and pushing it in a way where you are conscious of not wanting it to -
Cognitively offload? I said.
Yeah.
The conclusion I drew from this exchange is not that stronger students don’t use AI, but that higher-performing students exercise restraint because they better understand the consequences of relying on it too much. What these students were trying to tell me is that they understand there are different ways to use AI, and that some of them are better than others.
Unfortunately, the students who least need shortcuts are the ones showing caution, while those who need more practice and scaffolding are either oblivious to the risks or willing to take their chances - because they simply want to get the work done faster or believe using AI will improve their grades.
Why AI Beats Office Hours
I asked why students turn to AI instead of teachers.
I’d struggle with understanding some concepts... but GPT is at my fingertips. You need to schedule a meeting with the teacher. Then you need to write down the time, remember the date. If you forget to go, then that’s a really bad impression.
Adults may bristle at this line of thinking - it smacks of a lack of initiative and an unwillingness to take advantage of extra help when offered. But, candidly, how much extra help can most teachers realistically offer, especially if they have more than 100 students? And can we really blame students for turning to something that is instant, always available, and judgment-free?
I pointed out that students at well-resourced schools have access to teachers in ways many other schools don’t. That wasn’t enough.
[AI] is convenient, it’s easy, it’s decent, it works well for the most part - it’s just super easy to turn to. That’s the main thing. If it wasn’t like that and if it was much more complicated to use, I guarantee you less than 15% of people would be using it.
That last point is crucial - the ease with which companies have made AI available to students is a huge part of the problem.
Moralizing about rational student choices that avoid unnecessary friction is pointless. If we want students to come to office hours or meet with us privately, we need to be conscious of what Hollis Robbins dubbed the last mile problem - offering something AI can’t give them.
Platform Bans Are Theater
One student delivered a verdict on school attempts to block specific AI tools:
Even when schools ban ChatGPT they don’t really think about how people could just bypass it by using a different AI.
One student said they simply switched to Gemini.
Another was even more direct:
The problem with many schools in general is they think if they don’t have control over something and there’s a possibility of the students using AI in a detrimental and negative way, they will try to limit or prevent the students from accessing AI as a whole. And that isn’t going to be helpful because they won’t ever have total control over the students’ AI use.
Platform-specific bans address the wrong end of the problem. When schools ban the most visible tool, students find workarounds within days, which only pushes the problem further underground.
Dependency, Mastery, Or Both?
I asked directly:
Do you feel like you’re getting better at using AI, or becoming more dependent on it?
This was the most devastating and self-aware response of the day:
Dependent. For sure.
These students seemed completely aware of what’s happening to them.
Another noted:
For my own personal experience, I think it’s much easier for me to get the answer I want from AI now by guiding it with the right prompts than it was before. I think a big aspect of it - because I use it way more than I did last year - is how AI has been incorporated in daily life. For example, even if you do a simple Google search, Gemini is the first thing you see.
Both things can be true. Students are becoming more skilled at extracting value from AI while simultaneously becoming more dependent on it.
What do we make of the fact that students’ AI skill development might actually deepen their dependency on it?
The Writing Detection Arms Race
When I asked about surveillance tools that track how documents are composed, one student explained, again, what most teachers suspect but can’t do much about:
A lot of teachers thought initially that students just plainly copy-paste, which - yeah, it used to be the case, but nobody does that anymore.
The workaround is simple: split-screen, ask AI, retype by hand.
You can just add little bits of AI into your writing. Not everything has to be AI to sound smarter.
Many educators are hopeful that AI can someday help students become better writers. These conversations did not reassure me that this is happening. For the most part, it still feels very much like a cat-and-mouse game.
Unsophisticated users might be deterred. Everyone else who wants AI’s help with writing in classrooms where it is forbidden has adapted over the past two years.
What Students Want from Adults
When I asked what teachers should do differently, the response wasn’t “let us use AI freely.” It was something more direct:
There should be required training for all teachers... I think the most important or the best way to get someone to not do something is to have them fully understand all the negatives that could happen to them if they use this.
Another student elaborated:
Teachers know students will use AI regardless of whatever they do. So they could actually explain to them why AI use could hurt them and how to responsibly use it.
I have no idea if more instruction will work. But if we don’t try harder, we’ll never know.
One student offered a closing thought I’ve been sitting with:
A lot of adults have this idea of AI and all these things about AI, and they think they’re so right about it. So I think they should be more open to changing their views on how another demographic - like students - may use AI, and avoid the stigma around it.
Where Do We Go From Here?
There were other insights that came out of our conversations - the difference between paid platforms and free ones, between using AI to brainstorm versus “doing the work,” and which subjects are best suited to getting help from AI. Once they opened up, students had a lot to say.
What was clear is that students are going to use AI. For any teacher heading into 2026 - young, old, experienced, or new - student AI use is as much a reality as the internet, cell phones, and 1:1 devices. We might be able to keep it visibly out of our classrooms, but it is baked into the learning environment, always humming in the background. We ignore that fact at our peril.
And though I understand the impulse (and am sympathetic to many of my colleagues), I am more and more convinced that bans, unenforceable policies, and surveillance methods to combat detrimental AI use are ultimately going to fail. The only viable path forward is to create learning opportunities that either demonstrate the limitations of AI or are deliberately designed to leverage its unique capabilities.
That does not seem to be happening in most institutions.
Where Most Schools Are Right Now
Despite a continuing lack of honest discussion and meaningful teacher training, more institutions are taking an “if you can’t beat them, join them” approach - either by actively bringing AI-integrated tools into their digital ecosystems or doing so by default. The companies themselves are targeting educators and students as primary customers while rolling out new features that make it even easier for students to complete their work without any intellectual effort.
Marc Watkins recently dropped his latest warning, emphasizing that agentic browsers and other AI features can now complete entire online courses autonomously, even simulating STEM assignments that require students to “show their work.” His piece details the AI companies’ lack of accountability and the specific landmines just over the horizon.
Putting the Onus Back on Students?
The only glimmer of hope is that some students, in their attempts to self-police their own usage, understand that simply “getting an answer” or having AI do their work for them entirely cannot possibly aid their learning. This relies on their ability to examine their own blind spots - something we know, from every other risky activity involving teenagers and their still-developing executive function, is extraordinarily challenging.
I maintain that the only way through the mess wrought by AI companies with zero real interest in helping teachers and schools with this crisis is teacher-led role modeling: either showing where AI might be an aid, or offering clear, demonstrated explanations of how AI is actively harming students’ development. For students who simply do not care or have no faith in schools, I don’t know the answer.
Authentic assessments in which students can take pride in their own work are one potential response. Reserving graded assignments for in-person activities or other observable performances is another. I fear even these will prove vulnerable soon enough - whether through wearable AI products or other surreptitious ways to use AI for in-class work. The students I spoke with made clear this is already happening.
The New Year Ahead
What will 2026 bring? AI regulation at the state level? Unlikely, if Trump’s recent executive order stands up to legal challenges.2 More schools finding ways to bring AI tools into the classroom effectively or discovering ways to monitor improper student usage productively and not punitively? If the past six months are any indication, I’m not optimistic.
But here’s what I took from these conversations: Students can be thoughtful about AI. They have a legitimate perspective to share. Though many are likely using AI in ways educators deem harmful, some may be open to an explanation as to why. We need to engage and convince them.
That’s a slower and more arduous path than anyone wants. But it might be the only one that works.
Connect With Me
Beyond this newsletter, I work directly with schools, educators, and organizations navigating AI integration. Take a look at my website and reach out - I’d love to hear what you’re working on.
1. I’m grateful for these students’ honesty. Please don’t attack them in the comments. Let him who is without sin cast the first stone - it’s the height of hypocrisy to shame students for using a free tool that, if most adults reading this were truthful, they probably would have used as teenagers too. As my interviews revealed, students are trying to navigate the current moment without much adult guidance beyond “don’t use AI!” Kids didn’t ask for generative AI, nor did they consent to being targeted as the consumer demographic most likely to use it. More blame belongs with the tech companies than with the students.
2. Despite the recent order, my home state of New York recently adopted “the strongest safety law in the U.S.,” according to the WSJ. We’ll see if it holds up.


