If there is one thing clear in the debate about AI and education, it’s this: administrators and teachers are not on the same page, and students are adapting faster than their institutions. The response so far from academia has been late, disjointed, and mostly reactive.
Meghan O’Rourke sums it up well in her recent NY Times Op-Ed:
The current situation is incoherent: Students are accused of cheating while using the very tools their own schools promote to them. Students know the ground has shifted — and that the world outside the university expects them to shift with it. A.I. will be part of their lives regardless of whether we approve. Few issues expose the campus cultural gap as starkly as this one. (O’Rourke, Meghan. “The Seductions of A.I. for the Writer’s Mind.” The New York Times, 18 July 2025)
While teachers and writers continue to debate mostly among themselves about the impact of AI on a generation of students, those very students are setting the terms of the conversation without us.
Administrators push for innovation without understanding the risks, in thrall to AI companies and other external pressures. Meanwhile, OpenAI and Google, like the credit card companies before them, are already hooking new users with free premium access instead of low interest rates.
The message to students? Go ahead and use AI, but good luck figuring it out on your own.
This vacuum in leadership means schools will have only themselves to blame as student AI use takes deeper root in daily academic practice.
What the Numbers Are Telling Us
The Chronicle of Higher Education recently summarized data on student AI usage. The numbers tell several important stories if we want to hear them.
Student Use Is Widespread and Accelerating
That student use of AI has increased over the past two years is not news. Significant AI use among students is no longer speculative, but measurable. In just two years, usage has skyrocketed. Some studies now estimate that anywhere from 40% to 90% of students are using generative AI regularly.
Growth has been rapid. One dataset showed student use rising from 14% in 2023 to 36% in 2024 to 42% this year. Use increases with education level, with grad students leading the pack, followed by undergrads, then K–12. Majors matter as well: students in business, STEM, and the social sciences use AI more heavily than those in the humanities and, interestingly, with less guilt.
There’s nothing in the data to suggest this trend is slowing. If schools plan to curb improper AI use this fall, they’re facing an uphill battle.
But it’s not universal. A sizeable minority (between 15% and 25%) reject AI in education or refuse to use it at all. These students deserve our attention. If their peers are perceived to be gaining an academic advantage with AI, how long can this resistance last?
Many Students Say They’re Using AI to Support Their Learning. But It’s Complicated
While student use of AI to write papers is well-documented, it’s also clear from the data that most claim to use AI in “supporting” roles, where their definition of “cheating” may differ from the traditional one. The majority say they use AI for things like research, brainstorming, grammar checking, or organizing their ideas. A smaller group (roughly 25% to 35%, depending on the survey) admits to using it for full essay writing, multiple-choice tests, or getting answers to exams.
O’Rourke’s NY Times piece underscores just how thin the line is between “support” and “cheating”:
Students often turn to A.I. only for research, outlining and proofreading. The problem is that the moment you use it, the boundary between tool and collaborator, even author, begins to blur. First, students might ask it to summarize a PDF they didn’t read. Then — tentatively — to help them outline, say, an essay on Nietzsche. The bot does this, and asks: “If you’d like, I can help you fill this in with specific passages, transitions, or even draft the opening paragraphs?”
At that point, students or writers have to actively resist the offer of help. You can imagine how, under deadline, they accede, perhaps “just to see.” And there the model is, always ready with more: another version, another suggestion, and often a thoughtful observation about something missing. (O’Rourke, Meghan. “The Seductions of A.I. for the Writer’s Mind.” The New York Times, 18 July 2025; emphasis added)
O’Rourke describes her own use of AI and how seductive it is to off-load more and more of the writing process to an ever-eager AI assistant. If a Yale professor of creative writing has to actively resist the pull, how can we possibly expect 19-year-olds with poor time-management skills to make good choices about their use of AI?
Complicating this further: most students who use AI for coursework say they aren’t caught. One study found that 86% of students who used ChatGPT on assignments said their use went undetected. I’ve experienced this firsthand in more candid conversations with students, who acknowledge that their use of AI is far more prevalent than most teachers realize, though not just for the kinds of tasks teachers fear most. In the current climate of secrecy, they do not feel safe discussing their own AI use. This is a huge missed opportunity.
Students Are Torn But Have Some Clear Preferences
Like almost everyone else thinking about AI and learning, students are also ambivalent. In one survey, over half viewed AI positively, while nearly the same number were concerned about its impact on learning.
Most telling are their overall reasons for turning to AI. While saving time (51%), improving their work (50%), and instant availability (40%) topped the list, more revealing responses offer other clues. Many students reported using AI so they could ask questions anonymously. They felt more comfortable getting feedback from a chatbot than their own professors. Let that sink in for a moment.
Mixed Messages from Institutions Are Making Things Worse
If institutions seem to be sending contradictory signals about AI use, it’s because they are.
In one survey, over half of students reported that at least one professor encouraged or required AI for an assignment. But 72% also said at least one of their professors banned it. Meanwhile, four out of five students believe their institutions haven’t integrated AI enough. Many want more clarity and support as opposed to simply limits.
Better and more thoughtfully designed policies are a positive step, but they're not enough. We need to actively involve students in discussions about AI use. Yes, we must be clear about what's permitted in our classes and what isn’t, but we also need to engage them in conversations about why certain uses help or hinder learning.
It’s also clear from the data that students don’t want AI to replace teachers. Most are desperate for guidance on how to use AI wisely without getting in trouble and while maintaining their learning.
There’s Still No Clear Pedagogical Consensus
One of the most troubling anecdotes in the Chronicle piece comes from Lorena A. Barba, a professor of mechanical and aerospace engineering at George Washington University.
When she provided her students with a chatbot trained on course materials, they simply copied and pasted exercises into the chatbot and submitted the answer given. If the answer was wrong, they went back and forth with the chatbot until it gave the right one. Barba discussed the proper uses of the chatbot with students in class, and asked them not to copy and paste assignment questions. But, she writes, “they did not heed my advice and seemed unaware that they were harming their learning.” When she tried pulling back on AI use later in the course, the students were furious, as they had become reliant on it. “In retrospect,” she concludes, “they needed much more guidance on how to use AI in a way that is conducive to learning — I thought with some live demos and plenty of spoken advice they would get it. It didn’t work.” (emphasis added)
I suspect there are a hundred other stories like Professor Barba’s: leaning into AI-assisted activities seems like a good idea, but unless those activities are thought through carefully and tested, students are not likely to use the tools in the way we intend. What were the “proper uses of the chatbot”? She doesn’t say.
I don’t mean to pick on Barba. Without taking risks and experimenting, it will be difficult to figure out a path forward. Creating a school culture where teachers can openly discuss both their AI successes and failures requires moving beyond the current climate of shame, secrecy, and dismissive criticism that prevents the kind of collaborative learning educators need to do this work well.
Students Need Adults Who Know the Technology They're Regulating
The growing reliance on AI, the confusion about how to use it, and the mixed signals students receive all point to an urgent need: adults (faculty, staff, and administrators) who are equipped to lead these conversations with clarity, humility, and credibility, and ideally with a more unified voice.
A few educators are doing this work, several of whom I’ve mentioned repeatedly in my posts. Anna Mills is another; her innovative prompts for AI writing feedback help students think ethically and critically about AI. But these examples are still too few and far between.
Nothing will replace the conversations students need to have in real classrooms with real adults. And until those conversations become more common, we shouldn't be surprised if students keep turning to AI even when what they're really getting is the illusion of competence. It’s only going to get more complicated as AI-embedded browsers and agentic tools become mainstream.
Students are not confused about whether AI is part of their world. It’s going to be the water in which they swim if it isn’t already. What they’re confused about is how to use it ethically, effectively, and in alignment with their learning goals and those of their teachers. The reason for that confusion is simple: the adults in charge haven’t shown them.
We’ve handed students some of the most powerful information tools ever created and told them to “be responsible,” without offering anything close to coherent guidance or support. Almost three years after ChatGPT's release, we've run out of excuses.
The AI conversation has to be about pedagogy. And pedagogy requires presence, dialogue, and modeling. It requires teachers and professors who understand the tools well enough to explain them, question them, and know when to reject them.
I’ll conclude with a comment from the NY Times op-ed that jumped out at me:
“Starting this fall, professors must be clearer about what kinds of uses we allow, and aware of all the ways A.I. insinuates itself as a collaborator when a student opens the ChatGPT window.”
I absolutely agree with the second half of that sentence, but “allow”? It strikes me that we may have already abdicated that decision to students in the wake of our inaction. If we're going to have any hope of guiding them to use AI without sacrificing their learning, we'd better start soon.
The clock is ticking.