This nails the quiet tension so many students feel but rarely say out loud: “We’re just doing what you taught us.”
Not in some abstract moral sense—but in a daily, visible way. When adults use AI, it's efficiency. When students do, it’s dishonesty. But both are using the same tool to navigate overloaded systems and high-stakes outcomes.
The question isn’t whether students should struggle through writing. The question is whether the struggle has purpose—or just weight. Because when process becomes performance, students treat it like any other task to optimize.
What if the problem isn’t that students are cheating—
but that they’re mirroring?
And what if the real discomfort is that they’ve learned from watching us?
We don’t need more restrictions. We need better conversations. Ones that treat students not as problems to police, but as co-authors of the future they’re inheriting.
Spot on.
Wow - the solution can't boil down to creating a list of what's appropriate for students to use vs. what faculty can use.
Educators in HE really need to start thinking about how they're going to integrate this technology into every class.
The issue, as I see it, is that we still have a huge number of people who want to close their eyes to all the ways the technology is changing the student experience. One of the more alarming (depressing, disappointing, choose your word) findings in the piece I referred to is that students would rather ask AI questions about the material than ask a professor ... during class! Unless teachers understand that this is what's happening, I don't know how they can address it. Student AI use is, predictably, going to take off in the coming years whether people like it or not.
I worked in higher ed for nearly 20 years. There were some incredible faculty who were genuinely invested in students. And then there were a few where, honestly, I’d rather ask ChatGPT than go to them.
Higher ed needs to face this. AI is exposing issues students have dealt with for years.
I teach high school English, and I was one of the writing teachers you described, objecting to the use of AI on philosophical grounds. That said, I have decided to set aside my personal convictions in order to understand this technology, which is obviously going to redefine our understanding of productivity, creativity, and knowledge. I still do not use AI for my own writing, but I did develop enough trust with my students this past year to get some transparency into how they use it to write.
Hearing from them, and thinking about my own favorite uses for AI (I am a big fan of having it create rubrics, compile comparisons of products for shopping, and help me brainstorm ways to differentiate assignments), I found that students who try to use AI responsibly seem to use it to A) complete tasks they consider intellectual drudgery, B) get reteaching for concepts they didn't understand at first, or C) do silly, stupid things with it just to see how it plays (we have to remember that for a lot of kids, AI is a toy).
I'm sympathetic toward employees who use it to write soulless emails. I am sympathetic toward students who use it to write essays when teachers are more interested in their mastery of skills than in the so-called precious thoughts we encourage them to generate through their commitment to the writing process.
What makes me afraid at the K-12 level is that this technology completely changes the game for teachers and students. It needs to be met with heart, passion, drive, and competence. There are still a lot of good teachers out there, but the field is still exhausted from the pandemic. In my section of the state, we are one of the only high schools with an English department where each teacher is certified. Is this the best we can offer kids in what is a true moment of crisis in education?
Students deserve better than hypocrisy. They deserve honesty and humility in the face of this monumental change.
Michael - I get it. Especially for those of us who require kids to write, it's basically a homework-completion machine, and for kids who struggle to begin with, it's more than just tempting - it's impossible to resist. But my stance at this point is that this is our new reality. In another few years, I think something will have to give, and we will adapt, as we have to every other new technology. I'm not sure what that looks like yet, but kudos to you for taking a thoughtful approach. I jumped into the deep end of the pool from the get-go, so I have done just about everything with it - mostly to see what it's capable of, but I've also found a lot of really interesting use cases well beyond simply writing prompts. Regardless of your stance, we all have to be humble in the face of this technology. The short-term effects in schools have done more harm than good, but I don't think it has to be that way. Let's hope not. Good luck to you. You're doing work many people don't want to do anymore.
To a greater (poetry) or lesser (business report) extent, writing is a form of communication. Surely learning how to communicate, knowing why you are choosing and using words the way you are, is a skill of great value. The problem is persuading students of that.
I'm all for the core message of this, except for the increasingly common refrain suggesting that "writing is thinking."
Thinking is thinking. Writing has become a type of "industrialized complex" that has people more worried about how their words will appear to an algorithm than the quality of their actual thoughts.
Thinking is its own discipline.
I think the "Writing is thinking" trope is another one of those cliched statements that has lost it's meaning to some extent - I used it as an example of how most writing instructors argue (persuasively in many cases, less so in others) that the "struggle" of writing is an important struggle, certainly to take ownership of the written product itself. I do generally agree with that.
My rubric for discerning whether to disclose the use of AI is simple: act with justice. Justice is giving others what is their due. In nearly every case, a student submitting an assignment for assessment or a grade owes the instructor the truth about the assistance used. In far fewer cases does the instructor owe the student similar transparency. Of course there are exceptions - for example, an instructor using AI to do the assessing. This rubric is a good place to start. If the instructor sees disclosure as a way to build trust, then why not?
“…if we can't convince students that the writing process has value beyond grades” This is entirely the crux of the matter.
Decouple "thinking through writing" from grades; increase "creatively express your soul through writing"; offer multiple ways to earn a grade.
That would work.
Precisely - great post, Steve. The test now with any AI is validating sources. When I use Perplexity, for example, which is great with footnotes, a lot of the sources are meh - often just blogs by people of unknown authority. Students need to know how to go deeper, what's valid and what isn't: the difference between peer-reviewed journals in the hard and social sciences and everything else, for starters.
Bob - Mike Caulfield has done some really cool things as far as research and fact-checking are concerned. The post at the end links to his free Deep Background GPT, which does an excellent job with sources. I've also found that specifying in your prompt that you ONLY want vetted sources (for example, journals, books, published articles, etc.) helps a lot. Here's an example (Mike's post is at the end, and a quick code sketch follows the template): Research Guidelines for Authoritative Sources Only
When researching [TOPIC], please adhere to the following source criteria:
Primary Sources Required:
Peer-reviewed academic journals and publications
Government agencies and official statistics (.gov, .edu domains)
Established research institutions and think tanks
Original reports from reputable organizations in the field
Secondary Sources (if needed):
Major news outlets with strong editorial standards (Reuters, AP, BBC, NPR, Wall Street Journal, etc.)
Expert analysis from recognized authorities in the subject area
Reports from well-established NGOs or professional associations
Exclude:
Social media posts, blogs, or opinion pieces without institutional backing
Wikipedia or user-generated content sites
Commercial websites with potential bias or promotional content
Sources without clear authorship or publication dates
Outlets known for misinformation or lacking editorial oversight
Verification Requirements:
Cross-reference claims across multiple independent sources
Prioritize recent publications (within last 3-5 years unless historical context needed)
Cite methodology for any studies or data referenced
Note any potential conflicts of interest or limitations in the sources
Please provide full citations for all sources used and indicate the credibility level of each source (primary research, government data, expert analysis, etc.).
Mike's post: https://mikecaulfield.substack.com/p/deep-background-gpt-released
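If you use guidelines like these often, you can also bake them in rather than pasting them each time. Here's a minimal sketch, assuming the official OpenAI Python client; the model name, the condensed guideline text, and the research() helper are all my own illustration, not Mike's actual setup:

```python
# Minimal sketch: attaching source-vetting guidelines to every request
# as a system prompt. Assumes the official OpenAI Python client and an
# OPENAI_API_KEY in the environment; the model name is illustrative.
from openai import OpenAI

SOURCE_GUIDELINES = (
    "Only cite vetted sources: peer-reviewed journals, government and "
    "academic sites (.gov/.edu), established research institutions, and "
    "major outlets with strong editorial standards. Exclude blogs, social "
    "media, Wikipedia, and anything without clear authorship or dates. "
    "Cross-reference claims across independent sources, prioritize recent "
    "publications, and note conflicts of interest or limitations."
)

client = OpenAI()

def research(topic: str) -> str:
    """Ask one research question with the vetting guidelines attached."""
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative; use whatever model you have
        messages=[
            {"role": "system", "content": SOURCE_GUIDELINES},
            {"role": "user", "content": (
                f"Research {topic}. Provide full citations and indicate "
                "the credibility level of each source."
            )},
        ],
    )
    return response.choices[0].message.content

print(research("AI tutoring and student outcomes"))
```

The same idea works as custom instructions in a Claude Project or a custom GPT - the point is that the vetting criteria ride along with every query instead of depending on your memory.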
This is really helpful - I'll use a shorter variation of this in my prompts in future. See, old dogs like me can learn new tricks.
I think your point about Perplexity is spot on. The search for sources is the weakest link, and Perplexity is the best tool for this task. I guess students need to learn to bot at both ends: inputting their intentions in a coherent, structured prompt over several turns, and rigorously validating the output (a rough sketch of what that could look like is below). Both skills will make them better writers. AI is hard work if it's used in earnest for motivated purposes.
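Some of that validation can even be mechanical before the human judgment starts. Here's a hypothetical first-pass helper; the domain lists are illustrative, and it only flags sources for a closer look - it doesn't replace actually reading them:

```python
# Hypothetical first-pass triage of citation URLs by domain.
# The domain lists are illustrative; this flags sources for a
# closer look, it does not replace actually reading them.
from urllib.parse import urlparse

TRUSTED_SUFFIXES = (".gov", ".edu")  # official and academic domains
WEAK_HOSTS = {"medium.com", "blogspot.com", "wikipedia.org"}

def triage(urls):
    """Label each cited URL: 'likely solid', 'weak', or 'check by hand'."""
    labels = {}
    for url in urls:
        host = urlparse(url).netloc.lower().removeprefix("www.")
        if host.endswith(TRUSTED_SUFFIXES):
            labels[url] = "likely solid"
        elif any(host == h or host.endswith("." + h) for h in WEAK_HOSTS):
            labels[url] = "weak: blog or user-generated content"
        else:
            labels[url] = "check by hand"
    return labels

print(triage([
    "https://www.example.gov/report",        # trusted suffix
    "https://someblog.medium.com/hot-take",  # flagged host
    "https://journal.example.org/article",   # needs a human look
]))
```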
This is Mike's prompt, which is the one I use - I've replicated it in my own Claude Project. Great results.
https://youtu.be/FtTe5sSVVF0
https://checkplease.neocities.org/
What is our definition of "cheating"? We need to determine that... and yes, there may be different definitions for adults and students, because adults have "learned." But we have to admit that AI is here to stay, and we have to start teaching students how to use it appropriately (or in a way that is accepted).
Excellent post again, Stephen.
Helping our students to develop proper AI literacy requires us to be transparent about the times when it is helpful to use AI, the times when it is unethical and the times when it is positively damaging to their learning. We often say we want our students to be more reflective about their learning; this is a perfect opportunity to put that into practice. We will miss that opportunity if we focus only on cheating.
And, yes, students are all too well aware of the hypocrisy you describe. I'm reminded of Scott McLeod's vicious cycle: https://dangerouslyirrelevant.org/2025/04/the-ai-vicious-cycle-revised.html
Yes. It's critical to create a space for conversation about using (or not using) AI. Otherwise it's all in the shadows.