The real challenge is that most faculty aren’t using this. Let’s be honest: unless you’re actually in the thick of using these tools, reading about them, testing them, experimenting, trying stuff, you really don’t know what they can do.
There are still people who genuinely don’t believe AI can write a thoughtful, empathetic email. Or that you can use it to draft an article that sounds like a real person wrote it, emotional arc and all.
Well said. It's a shambles right now.
What I can't shake is my scenario that higher ed's reputation will take a serious hit over the next year. Public opinion will see the student experience as vitiated by cheating.
Yes. It already has. I've had some really interesting conversations with students in the last few months, and opinion is as divided as it is among the faculty. 19-, 20-, and 21-year-olds rationalize things in different ways - I think even many who claim to refuse AI only mean it in the context of clear-cut copy-and-paste cheating. When I dig deeper, kids who say they "never use AI" often only mean it in the ways in which they perceive it to be cheating. The data is really tough to capture.
This is the most lucid breakdown I've seen of the current AI crisis in education.
"Vacuum of leadership" is the perfect phrase for it. Students are adapting in real-time while institutions are still stuck in committees, unfortunately. Thank you for framing the problem so clearly!
Wait, there are people who took that O'Rourke piece seriously? This was my take: https://substack.com/@hollisrobbins/note/c-137073658
We have to do better than this. It's embarrassing. I don't know who people think will come save them.
Ha! Unfortunately, if it's the Times, people take it seriously. But if I am understanding you correctly, that's exactly my point. We have very few new thoughts on this issue.
I don't understand this "new thoughts" thing. Have we dispatched the issues the piece raises? Your post is about the same subject as her essay. What are the new thoughts we should be having?
You and Hollis seem to be suggesting that you've advanced past the rest of us. The advancement of the models doesn't really change the core questions about the intersection of generative AI and schools/learning, does it?
We have definitely not dispatched the issues the piece raises. The reason I found it slightly depressing is that, to some extent, it's another version of "The End of High School English" essay from December 2022, and it's frustrating to me that we are still stuck here. But I suppose that underscores the seriousness of the questions AI raises with respect to writing. The new thoughts I'd like to see would expand the discussion beyond AI as merely a "cheating" tool and recognize the other ways in which it is likely to shape how students, and the rest of us, interact with and produce new knowledge. These new AI browsers I've seen demo'd are a case in point.
I agree that relegating these issues to ones of academic integrity is too limiting, but as I've said ad nauseam, I also think centering AI in the conversation about what students should learn is a mistake. If AI is going to help someone produce truly "new" knowledge, as opposed to being merely a tool for a more efficient trip through school, we have to think a lot more about knowledge and learning before we get to AI.
Personally, I was advocating for "the end of high school English" as it was described in that Atlantic article long before ChatGPT showed up because that high school English wasn't engendering meaningful learning for students. The first steps were to attempt a ban. That didn't work. Next was some version of LLM assistance with schoolwork. That also didn't work.
I think we've gone around in a circle, but there has been something of a journey along the way. I'll tell anyone who cares to listen that I think my approach, as espoused in books that came before LLMs, remains superior. For all kinds of reasons - most of them reasonable - most people seem to disagree, but each trip around the circle reveals a little more. It's progress of a kind.
I don't entirely disagree, which is why I value the seriousness and thought you've given to these questions. I've been teaching long enough to recognize that because we haven't been able to address your core concerns predating LLMs up to this point, I'm not optimistic we will be able to deal with them now that AI has made them more of an urgent necessity. It's not that students should learn AI so much as it's what AI has revealed about learning. It must be maddening for you, I'm sure.
I've never been optimistic! I'm actually happy that LLMs have revealed things about schooling that I believed to be true pre-LLM. But even with that, I never thought we would necessarily respond productively, collectively. The best I can do is say my piece, advocate for what I believe, and try to help at the places where I'm invited to do so. I like having the opportunity to think through these thoughts, as was the case with this piece.
Why wouldn't we take the testimony of someone as to her lived experience seriously? Are you saying she's making these things up? (E.g. Thomas Friedman reference.)
I’m saying it has become a cliche for two years now! As you know well…
I don't understand. Saying students said something about ChatGPT use is a cliche?
The overhearing is the cliche. Why shouldn't it be a direct conversation? You know what the "overhearing" gives the piece: a sense of truth.
But if she overheard it, she overheard it. I guess you're saying she's lying for effect?
I’m saying it’s a cliche!
I very much agree with this: "The AI conversation has to be about pedagogy. And pedagogy requires presence, dialogue, and modeling. It requires teachers and professors who understand the tools well enough to explain them, question them, and know when to reject them."
What I'd say, though, is that this has always been true about learning, and even in the present moment, the conversation about learning cannot put AI in the driver's seat of that conversation. We must be aware of all the things you note about student use, the structural realities of school, etc...because they impact the conversation about learning, but learning has to be at the center.
It is not an impossible task to get students to write without turning to AI as a way to outsource the experience, but it does take care in designing the experience, establishing a context that makes sense to students, and then assessing both the process and the output in a way that values students and learning.
I know it's not impossible because I go and talk with folks who are doing it, or who hear something I have to say, make adjustments to their approach, and report back that it works. It doesn't work always or for every student, but nothing did before AI, either. We can't let the presence of the technology drag us away from the kinds of experiences that lead to meaningful learning, either by retreating to blue books or by feeling like everything has to integrate AI.
I hope this proves true, and I'd like to hear more about the folks who are doing it at scale because, unfortunately, in my corner of the educational woods, among most of the people I talk to and hear from, the conversations are either not happening or the approach taken is not working. But I do agree that the path will be to enlist students in the conversation directly and ideally "get students to write without turning to AI". Here is my question for you (and I always appreciate your perspective): how do we position writing for students without AI when virtually every message from employers and every platform they will be using in the coming years (months, really) will have AI baked in? The point in the O'Rourke piece that really drove it home for me was how quickly she got drawn into using AI until she stopped to reflect. I'd love to know what you're saying and how that's going where people are reporting back that it works. If you would ever be interested in doing a cross-post, I'd welcome the chance to explore this more. I often reflect - What would John Warner think about this? :)
"How do we position writing for students without AI when virtually every message from employers and every platform they will be using in the coming years (months, really) will have AI baked in?"
As usual, a multi-faceted and complicated question, but here's my approach/philosophy.
1. I reject teaching writing (with the exception of specific courses like technical writing/business writing, which likely will evolve to involve and perhaps require AI integration) through the lens of "this is for your future of work" (or fill in the blank with anything else, e.g., getting into grad school).
This creates an atmosphere of what I call in Why They Can't Write "indefinite future reward," which signals to students that what you're doing now only has value for some future, external goal. This reinforces the kind of transactional mindset that undermines learning in general and incentivizes AI outsourcing in particular.
A writing course exists to experience what it is to write, so that's what we're going to do: write. This doesn't preclude using AI in some way, but that use comes under the umbrella of the experience of writing, as opposed to being a means to create a product/output.
2. I work from the assumption that, all things being equal, students would rather do something interesting and engaging than something rote and boring. I also assume that unless they are otherwise informed, they will assume that the writing I am asking them to do in a school context will be rote and boring. I assume this because it is what students were telling me for the last 15 years of my teaching career, and it was my motive to write Why They Can't Write.
I use some core principles of engagement taken from studies in learning science as a foundation from which to consider what students will do/experience.
Value – What we are doing is important and meaningful.
Competence – There is a belief that what we are doing can be done.
Relatedness – What we are doing is part of a larger community.
Autonomy – Choice and freedom increase engagement.
To your original post, transparency and mutual exchange with students are key here. I tell them: this is what we're trying to achieve, and this is how the experience has been designed to achieve it. I say that I think there's a good chance they will find themselves genuinely interested in solving the writing problem I've given them (The Writer's Practice collects those), and that if they find themselves struggling to feel engaged, they should say so, so we can have a conversation about it.
We do a lot of reflection both during and after an experience, and the assessment is based on the elements of the writer's practice (skills, knowledge, attitudes, habits-of-mind of writers) that the experience prioritizes. The assessment is done by students about their own learning.
I can't emphasize enough that this is not a panacea, and it doesn't suddenly get every student deeply immersed in writing, but it is the first, necessary step: inviting them into a well-structured, purposeful learning experience in a way that at least makes them believe a little bit that it's worth their time.
School seems to pay very little attention to meeting this bar, IMO.
"I'd love to know what you're saying and how that's going where people are reporting back that it works."
We're talking about an all-day process of PD that I can't list out in full detail, but in essence it's an exercise in thinking through what learning looks like in your particular discipline or course by asking:
"What do I want students to know?"
"What do I want students to be able to do?"
...and then using those answers to surface your deep pedagogical values, and then working from those answers and your values to design the experiences that help students achieve the answers to those questions. Since the world continues to evolve, this can become something of a moving target, but that's part of the point: prioritizing engagement and learning over proficiency and transaction requires one to continually adjust to the reality on the ground.
The evolution of my pedagogy was driven by the fact that students arrived in my FYW course with attitudes toward writing not conducive to the kind of work we needed to do. Their attitudes had been shaped by a system that prioritized rote production (e.g., the five-paragraph essay) in order to satisfy standardized metrics. I needed to challenge that attitude out of the gate. We now have AI as a method to outsource the production of these rote artifacts, which is an additional challenge.
We have to make it clear to students that we're not doing that. We're here to do something else: learn.
If we want to have a debate about what better prepares students for the future, I'll take my method over training them to use AI models. If students know how to think and problem solve through writing, they'll be far more adaptable to a changing world than those who are given a mere method.
I cover some examples in More Than Words, but in one case I show how my experience producing a 20+ page explication of a poem in graduate school prepared me for a career in market research better than any specific training could have. But for that to be true, students need to be aware of, and in charge of their own learning.
Completely agree with all of this and, would all teachers be teachers of writing and have the kind of time to devote to this process, I absolutely agree it could work. And I had the same debate with a colleague about training students with or without AI for the future, and I 100% agree that in the formative process of learning to write, hands down I would prefer an organic method. But that is not the world in which we live, and since many, many classes that require students to write are content driven - in other words, they do not have the time or (in some cases) expertise to do the kind of work you suggest here if they are to cover their course material - AI will be a pervasive issue. That's frankly where I am stuck. I love the John Warner world and I'd like it if everyone lived there. But given that most students are looking for a degree in order to market themselves to a future employer, how do you convince them that not using AI is the answer when so many headlines are blaring the exact opposite? I like your last anecdote about producing a 20+ page explication of a poem preparing you for a career in market research, but I have a hard time believing you can convince most students of that today (though I hope I'm wrong). Will be interesting to see where all this leads.
"But that is not the world in which we live in and since many, many classes that require students to write are content driven - in other words, they do not have the time or (in some cases) expertise to do the kind of work you suggest here if they are to cover their course material, AI will be a pervasive issue. "
This is my point. We have to change. In some cases where the content is the point we should probably move away from writing as a tool of assessment. This is why the conversation starts with those questions:
"What do I want students to know?"
"What do I want students to be able to do?"
I've consulted with many people who want help changing their writing assignments to deal with AI, and through discussion they have realized that what they're trying to assess isn't best done through writing (before or after AI, really). If we want students to write, they must write. If we want to test them on content, we should test them on content, which can be, but doesn't need to be, done through writing, and definitely doesn't need to be done through traditional academic writing artifacts.
"I love the John Warner world and I'd like it if everyone lived there."
This is either an insult or a willful misreading, but I'll make my point clearer. My approach is 100% driven by a pragmatic engagement with the world as it is. I spent my entire career teaching double and sometimes triple the recommended number of students per instructor. I am intimately familiar with the structural and institutional constraints on instructors. I developed my approach - through years of iterative experimentation - in order to mitigate those constraints. (I also spend a third of Why They Can't Write identifying those constraints and advocating for changing them, because they remain, well...constraining.)
A wholesale adoption of the approaches of WTCW and TWP is not something I recommend, but I use them as examples and resources for instructors to apply to their specific contexts. Ultimately, all teaching is a custom job, IMO.
"But given that most students are looking for a degree in order to market themselves to a future employer, how do you convince them that not using AI is the answer when so many headlines are blaring the exact opposite."
The transactional nature of schooling, which this illustrates, is not new, so I convince them (or try to) by doing what I've long done: showing them that there is a superior way to engage for the sake of their own learning and even their own happiness. I show them an attractive alternative and do my best to induce them to see if they find it attractive. Those engagement principles (value, competence, relatedness, autonomy) help quite a bit there.
"I like your last anecdote about producing a 20+ page explication of a poem preparing you for a career in market research, but I have a hard time believing you can convince most students of that today (though I hope I'm wrong)."
I don't convince them with an anecdote - though I do share that anecdote as context - I convince them through a semester long inquiry into what it means to have a writing "practice," while giving them experiences that help them build their practices, and then requiring reflection and assessment that makes students responsible for making their learning visible to themselves first, but then also capable of translating what their learning means for others. (This last part is not something my education ever gave me. I tell a story about that in More Than Words as well.)
It's not necessarily easy, but it is pretty straightforward, and in terms of actual labor, once the transition is made, it is less work in terms of hours and better in terms of enjoyment, as part of the process is to remove things you're doing that are not consistent with your pedagogical values.
I don't pretend that we're necessarily going to win this battle against the forces aligned against learning, given that they have a 30-year head start, but adopting a kind of fatalism about student attitudes or the needs of a capitalist employment market is to give up before you even start to fight.
Clearly I did not mean to insult, but we do live in a capitalist society, students do go to college primarily to secure a degree to get employed, and many employers are looking for students with experience in AI. Whether that means they should be learning to write with AI is a legitimate question (I don't think they all are), but I don't agree that it's "fatalism" to suggest that students have every right to question whether or not their $50k-plus college degree is equipping them for their chosen field if that happens to include industries where AI is going to be required. Perhaps that's a result of where I teach, but that's what I meant. I do love these questions as a starting point for any class:
"What do I want students to know?"
"What do I want students to be able to do?"
But I struggle with trying to convince teachers of content courses that writing may not be the best way to assess what they are teaching. That's going to be a tough sell with most teachers in the humanities.
"If that happens to include industries where AI is going to be required."
I don't advocate walling students off from generative AI. I advocate preparing them for an inevitably shifting world where they can adapt to those shifts.
We have no idea what "required" AI even looks like. The idea that we could possibly develop a curriculum that teaches specific methods for using AI that would stay current is out of touch with the nature of a technology that continues to evolve.
I also make a fairly obvious point. Just about anyone today who says they are using generative AI tools productively in their work was not using them prior to November of 2022. We should be much more interested in understanding what kinds of practices (skills, knowledge, attitudes, habits of mind) people had in place that allowed them to make these changes.
The best person I've ever seen at using AI image generators for specific applications didn't train as a prompt engineer, but as an advertising art director. That practice was the necessary ingredient. Experimenting with how to get the desired outputs out of the model was relatively trivial, something that could be taught in half a day.
"I struggle with trying to convince teachers of content courses that writing may not be the best way to assess what they are teaching. That's going to be a tough sell with most teachers in the humanities."
I don't try to convince anyone. I lay out a process to work through so they can figure out what works for themselves. If they conclude that they cannot get students to do what they want them to do without students outsourcing it to AI, then writing is clearly not a good choice for assessment in the world we live in. Now, I give them all that other stuff for how to attack the student demand problem of outsourcing, but there are many examples I've seen where what the writing instructors have been asking their students to do just doesn't hold up.
In the vast majority of those cases where content is primary, what happens is that they design fewer writing-related assessments but make sure those assessments are more meaningful.
I'm not a big content guy even when I teach content-rich classes (like literature), but if someone wants to assess content, they should assess content.
"It strikes me that we may have already abdicated that decision to students in the wake of our inaction."
My only quibble here (the rest I'm pretty much lockstep with you!) is that I don't think educators are primarily to blame for this. The deliberation and discussion is happening more often now, with diverging viewpoints—even as the technology itself continues to evolve (including agentic AI that may render much of what has been "understood" before moot).
The problem? We have forfeited control as a society, including within the technologies embedded throughout our education system, to tech companies that are incentivized not by benevolence or what is best for students and student learning, but by profit.
And they don't care at all about the consequences of their pursuit thereof.
I like to be provocative, Marcus! But from where I sit, it's been almost three years since public LLMs were released and, with few exceptions, it feels like we're having the same tired conversations. And I have posted many times (as have many others) that we never asked for this, nor did the students. But it's time to stop simply blaming the tech companies. Almost every data point I have - both from reading and anecdotally - suggests that the level of resistance flies in the face of the reality of the situation. What I'm advocating for is for teachers to come to terms with the fact that student AI use is simply going to be a given. If we want to be the ones setting the terms for that use, we have to engage. That's where I am currently. Ask me again tomorrow!
My recollection of the timeline:
—released mid-school year in December 2022 (write off that year!)
—not nearly as widespread/useful in 23-24 school year
—this past year: the floodgates year with much more usage (including by educators)
Now, is our education system as a whole productively responsive to change? Of course not.
But I also think it is so much easier to get everyone to agree that we need to have a proactive path forward about AI (I think that's largely consensus now!) than to agree upon what that specific path should be. Similar to polling that may show that the vast majority believes "we don't think X politician should be the nominee" but then is splintered on the alternative, arriving at anything close to a consensus is immensely difficult.
(And again, the timeline is being set by stakeholders who have zero stakes in our students beyond profit.)
But Marcus, the writing was on the wall as early as that first week in December - remember the Atlantic piece, "The End of High School English"? This was a week after the release! It came out during college exams, and massive cheating started happening immediately. And yet, even by mid-2024, most people I knew were barely even tinkering with AI. You say it wasn't as widespread or useful during 2023-2024 - maybe not in education - but the rest of the world? There was a reason ChatGPT reached 100 million users faster than any other app, and the tools only continue to proliferate. That it's taken almost three years to reach a consensus that we need to find a productive path forward on AI is not really a reason to pat ourselves on the back, in my book. I completely agree that the timeline is not being set by us but, right now, that's beyond our control. I don't anticipate consensus, but I would have hoped we would be a little further along at this point. Look at countries like South Korea, Singapore, and Japan, which have eaten our lunch across the board in virtually every education category. How do you think they are approaching AI?
I mean, I'm the one who wrote this in December 2022: "Six reasons I am "planting my flag" that artificial intelligence's development is going to drastically change education—and that we aren't even close to prepared for it" (https://thebrokencopier.substack.com/p/yes-teachers-you-should-be-panicking)
So I'm not patting ourselves on the back, and I think it's entirely fair to say we should be further along with this conversation and working towards a solution. I've long been a fan of schools creating a "fifth core" around digital literacy, which I think would be far more effective than asking all teachers of all content to adapt constantly.
However, I also don't think there's a guarantee that speeding along at the pace of South Korea, Singapore, and Japan is going to be a net positive for students, either. As you note yourself, the data on whether this is good for students and their learning is decidedly mixed.
There are potential costs to being "behind" in arriving at a path forward (whatever that ends up being), but there are also potential costs to gambling on an unknown path.