Thanks! I've been following the back-and-forth discussion about student use of AI for the past two years, and, candidly, it's still unclear to me where education as a whole is going to land. (Not larger society - I have one foot in the business/tech universe, and many there can't even believe we're still having this debate; they've moved on.) But I do think the gap between trying to keep students away from AI and understanding this technology in real time together is going to have to be bridged at some point. And the assessment piece is driving the conversation more than anyone is willing to admit. Where else but in high school and college do you get "graded" on your written work with a single letter? In real life, you get "graded" on your writing based on whether you can persuade, inform, startle, surprise, or otherwise make an impact beyond yourself with words. That requires high-quality ideas and clear communication. If AI can help you achieve these two things, it seems odd to me that it should be devalued. Your "grade" at work is more often dependent on your overall performance, which includes all the soft skills AI cannot replicate - empathy, collegiality, integrity, perseverance, respect, and on and on. But I appreciate the support and will check out your resources. Keep on keeping on.
Stephen, I don't understand the purpose of your "blue book" assignment. Is it showing students that you "know about AI"? I think a lot of students would also be very upset that you, the teacher, used AI to evaluate them. I'm not sure I'm following what the purpose of this assignment is exactly. I'm searching for how to deal with AI in my college English writing classes, and I would like to know more about your thinking here.
Great question. Let me see if I can explain my thinking. The idea is that you want to have some kind of writing fingerprint for each student. At a bare minimum, you will have an original writing sample (which in many cases might be enough, if the disparity between subsequent submissions and the original is stark), but the purpose of the GPT I created is to have AI scan the document not to "evaluate" students but to provide a baseline of their writing features - something AI does very well. If you look at the sample prompt instructions (which can easily be refined or tweaked), what you get is an analysis of a piece of writing: sentence length, complexity, vocabulary, grammatical tendencies, rhetorical patterns - basically, an overview of the mechanisms used in their writing. It's not perfect - you want at least 500 words - but the final section of the analysis includes "authentication markers," aspects of the writing you can be on the lookout for in future submissions.

I don't like AI detectors, and they are not reliable, but if you were to do this, you would essentially have an individual "detector" for each student. Ideally, you would never have to use it, but if you got what you thought was an obvious piece of AI writing, you could compare it to the original and the authentication markers and at least have some basis from which to speak with the student. Being transparent up front is essential, I think, which is why you should share the documents with students. That may be a useful exercise in itself. And the bonus is you've demonstrated some facility with AI.

As for students being upset: is your school using other means to attempt to ferret out student AI writing? Process-tracking tools? GPTZero? Turnitin? Students are certainly aware this is an issue and that their writing may be analyzed. Proving AI use is extremely difficult - this gives you a little bit of a leg to stand on when having that initial conversation.

And, to be clear, it does not have to be a blue book - the key is that you need a baseline piece of writing you can be sure is 100% authentic. If you have something like a lockdown browser, or you are confident the initial exercise will be done AI-free, have students use a Google Doc, which makes it easier to run through the custom GPT. If not, GPT is very good at reading scanned material. I think it can work - I did something similar at the start of last year, but I will do it much more transparently and intentionally this fall. We'll see. Everyone is in experimental mode. I hope that helps. Thanks for the question.
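(If it helps to make "writing features" concrete, here is a minimal sketch, in Python, of the kind of surface statistics such a baseline might capture. To be clear, this is not the custom GPT - it's an illustrative stand-in, and the file name is hypothetical.)

```python
import re
import statistics

def stylometric_baseline(text: str) -> dict:
    """Extract simple surface features from a writing sample.

    A rough, illustrative stand-in for the kind of profile the
    custom GPT is described as producing; real stylometry would
    look at far more (rhetorical patterns, grammar habits, etc.).
    """
    # Naive sentence split on terminal punctuation followed by whitespace.
    sentences = [s for s in re.split(r"[.!?]+\s+", text.strip()) if s]
    words = re.findall(r"[A-Za-z']+", text.lower())
    sentence_lengths = [len(re.findall(r"[A-Za-z']+", s)) for s in sentences]

    return {
        "word_count": len(words),
        "sentence_count": len(sentences),
        "avg_sentence_length": statistics.mean(sentence_lengths) if sentence_lengths else 0,
        "sentence_length_stdev": statistics.stdev(sentence_lengths) if len(sentence_lengths) > 1 else 0,
        # Type-token ratio as a crude measure of vocabulary richness.
        "type_token_ratio": len(set(words)) / len(words) if words else 0,
        "avg_word_length": statistics.mean(len(w) for w in words) if words else 0,
    }

# Hypothetical usage: profile the in-class, AI-free baseline sample.
with open("student_baseline.txt") as f:
    print(stylometric_baseline(f.read()))
```

Large swings in numbers like these between the baseline and a later submission wouldn't prove anything on their own, but they would give you something concrete to point to in that initial conversation.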
Could that feedback then result in some sort of "ChatGPT arms race," where the student provides that feedback alongside their next prompt and asks AI to use the notes on voice, complexity, and clarity to calibrate their next essay? In other words, can't AI use the feedback as a way to sidestep the same trap you've laid out for them?
If a student wants to use AI to avoid doing any work, they will find a way to use AI. It would take a fairly sophisticated prompt to do that, but it's certainly possible. I prefer not to see it as a trap but more of a way to create a baseline. But yes, I thought of the same thing.
Oh, okay. I get the part about having their baseline writing, but can't the instructor just use that piece of writing without feeding a student's work into ChatGPT? Wouldn't the teacher's own expertise as a writing teacher be enough to point out those kinds of characteristics in the student's writing? I really appreciate your response, and I guess I'm just trying to get my head around this. I'm not sure I'm comfortable putting someone's writing into ChatGPT, especially if I didn't ask them if I could do that. Thanks so much! I just discovered your Substack and I'm excited to follow.
You definitely can. And yes, your expertise as a writing teacher will certainly work in most cases. But what AI is especially good at is pattern recognition. A general baseline is helpful, and it's also an interesting exercise to run your own writing through it. Keep in mind, too: if you already use any kind of detection software, you're putting students' work into an AI interface.
Okay. Thank you.
I think this sort of intervention is needed, but the execution in this essay is lacking. While "flipping the script" on using AI for student blue books is innovative, it is also tedious and not scalable long-term. A student might scan their own work and then tell an AI agent to gradually progress the AI version of that work over the semester based on the feedback. AI would essentially be talking to AI all semester, and no humans would be involved.
No, educators need to do two things. First, go offline, with paper books and paper blue books, for reading and reflection. Excessive note-taking, dog-earing, highlighting, and writing in the margins are graded highly. Rip the books to shreds, fill the blue books in class. Read in class, write in class, grade each other's work, feedback immediate.
Second, busywork needs to be rethought. Is the point of an assignment pointless "reflection"? Can it. Nobody cares. Transform the assignment. If it's a video, have the students write a response video script, rebutting points or providing context. Grade the process, not necessarily the product. Ask yourself as an educator, "Will this assignment cause the student to shift their thinking or create something novel, or do I just want to make sure the student has done what I asked?" Bin all the BS assignments. Reading reflections, responses to two peers, opinion essays, five-paragraph essays - most essays entirely - are just canned, boring check-boxes anyway.
The last time a student impressed me with a reading response was when they refused to do the prompt and instead told me why I was a bad educator for choosing the video. It made me realize I had a thinker submitting assignments, who cared (maybe wrongly, maybe too much) about the class material. I thanked the student and engaged with them, and they ended up really enjoying writing difficult contrarian pieces. They got an A in my class, even though we were ideologically different. I gave them a chance to speak up and they took it. We as educators need to give our students a voice, not a check box.
Thanks for the comment. I wholeheartedly agree that many current writing assignments need to be rethought, though that was the case even before the advent of genAI and, in most schools, if anything, such assignments have proliferated. This suggestion is primarily for those teachers who have seen enormous use of AI writing in their classes and want to have some kind of defense. For scalability, the idea is that you only need to do it once, but I take your point - it's not something that can be done by everyone, or by those with heavy course loads or a large number of students.

Off-loading all writing to the classroom may combat the immediate problem, but it is simply not realistic for the way modern education is structured, and most classes cannot do only in-class writing due to logistics and time constraints. Long term, blue books and in-class everything may not be the magic bullet anyway, with AI wearables already available and only getting more sophisticated in the coming years.

As for your point that students can use the AI version of their work and instruct it to gradually progress, I thought of the same thing (nice catch!). My best response is that any student determined and skilled enough to use AI that way is not going to be deterred by much. But even simply having a baseline writing sample at the beginning of an academic year, regardless of an AI analysis, is an improvement.

Your points are all good ones, and your last point is the key: empowering students and convincing them that we value their voice is the ultimate solution. My feeling is that grading is the real problem, and that's an entirely different issue, though focusing on the process is a great first step. Appreciate the time taken to read my post.
Definitely agree that the fundamental problem is the assignment -> grade transaction model of education now. The funny thing is that most vocational education doesn’t do that—there’s certification, but in performing a skill. Want to get forklift certified? There’s the forklift, do the task for X number of hours and show you can do the task. Certification in the mail. Why should liberal arts education be any different?
There was a time when we did all our exams openly, publicly, as a debate or a presentation. Show you know your stuff and that you can rebut criticism or alter your original ideas. If so, degree achieved. Maybe you pass with distinction, or you pass with emendations, but you pass. Otherwise, try again next term. I know that such a public oral exam isn't likely to be scalable at all, but it is a tried and tested method. It's also what we use to grant PhDs and master's degrees anyway.
My concern is more with writing and literature, disciplines that are sorely lacking in support. Everybody wants to shortcut the SKILL of reading, of critiquing, of coming up with critical arguments and justifying them. It’s like woodworking; I can explain ideas to you until I’m blue in the face, but you can’t learn how to explain ideas back to me just by passively listening. You have to do the work.
In my classes, my best successes come when I throw out the busywork, throw out the lesson where I talk 90% of the time and then pull teeth to get students to talk 10%. I give the students a passage and tell them to rewrite it in their own words. Then rewrite it again as if opposing it in a debate. Then rewrite it again while incorporating those harsh critiques and arguments. Each time, we get deeper into the weeds as students read out their versions, picking up an idea that others run with. By the end, we've torn the idea to shreds and learned a lot. We haven't graded a thing. It's the best kind of learning to me. That's what I want learning to be for everyone: doing, not listening or regurgitating.
Thanks for sharing your thoughts - these are exactly the kinds of conversations we need right now, as AI becomes a daily part of responding to instruction and of something much deeper: actual learning. I've been seeing a lot of energy spent on trying to "catch" students using AI to get a better grade, but to me, that's missing the bigger opportunity. Instead of policing whether students are using AI, I'd rather ask how they're using it and what they're actually learning in the process. And one of the most effective ways to assess that is to take the time to ask them to tell you - not on paper, but in a sit-down conversation.
In my own work, I’ve started thinking about student use of AI in terms of proficiency, kind of like the ACTFL proficiency scale we use in language learning. At the most basic level, a student might just copy and paste what AI gives them, which isn’t much different from copying from the back of the book. No credit for that "novice low" level of proficiency. But as students get more skilled, they start using AI to brainstorm ideas, ask deeper questions, check sources, and actually create or revise their own work. The highest level I’ve seen is when students are using AI almost like a partner—testing out ideas, comparing answers, and pushing themselves to think in new ways. That’s where the real learning happens. In fact, that's how I used it for this response.
The real shift here is about assessing student learning—not just measuring how well students respond to our instruction. In the long run, that’s so much more important. If we want students to be ready for the world beyond school, we have to help them learn how to use tools like AI thoughtfully and reflectively, not just jump through hoops for a grade.
When it comes to assessment, I'm all about proficiency, rather than grading any "product". Along with that, since we know they will (and should be) using AI as a learning tool, I’d like to see students show their work—share their prompts, reflect on how AI shaped their thinking, point out what they had to change or question, and explain where they brought their own judgment into the process -- all as part of that conversation I mentioned earlier.
We need to move beyond the factory-based mental model of schooling as akin to an assembly line. That "age" is fading fast, and we are emerging full speed into the AI age. I recently wrote a Substack post about making sure that what goes on in school is truly "age appropriate." If you're interested, you can find it in my Substack posts: "Age Inappropriate? Time to Light Some Candles." I also have an earlier post about a proficiency-based AI guide that you can find in my posts.
Thanks for opening up this space to think differently about teaching in the age of AI. I’m eager to keep learning from others in this conversation. By the way, I (of course) used my usual AI iterative process for this reply as I alluded to above.
I love these thoughts, especially the writing fingerprinting exercise. This is full of great ideas. The only thing I might press on a little is the assertion that "As long as an assignment involves gathering information, turning it into sentences, and presenting it for evaluation, AI can replace human work." I'd say that since AIs don't synthesize text with accuracy as a goal, they can't really REPLACE human work (unless the human in question doesn't care in the least about correctness and randomly lies). No matter how many guardrails the vendors try to erect, I'm not persuaded the machines can be kept from inventing information. So it's a replacement in a way, but not a good one :-)
Thanks, Steve. I like a careful reader! And here would be my (tongue firmly in cheek, though not really) response: if the human work is done in the prompt, the AI can absolutely deliver! I've seen it time and time again and am more and more amazed at what some of these tools can do. I have a Manus account, and it's WAY more than just producing text - a fairly short prompt can deliver some impressive results involving a whole sequence of steps: https://manus.im/home. Just click on any one of the use cases. This is where things are headed. But you know this.
I actually wasn't aware of Manus! But I am now. Among my many concerns about LLMs is that their ubiquity will reveal or already has revealed essential hollowness across many areas of society. If students readily use AI to write essays for them, it shows they don't actually value learning to write. If potentially shallow or unreliable synthetic text is an acceptable substitute for human-made text in a variety of use cases, that suggests that depth and accuracy were never the primary goals of those use cases to begin with. Perhaps the goals were always primarily social or transactional, and the text is a kind of currency whose specific content is less than critical. Very raw ideas, a set of thoughts I'm still developing. Glad to find your newsletter!
I agree. It's enormously helpful and reassuring to find people on Substack (like yourself) who are experimenting, reflecting and thinking openly about what happens next.
Really honest, thoughtful post, Stephen. I agree completely with your point about intentionality -
about being clear and transparent (with ourselves and our students) about what we want to achieve from any given assignment. And, as you make clear, those aims will likely have to change in a world with AI - by which I mean, now! While the necessary experimentation is beginning to happen (albeit in relatively few classrooms) there will be an increasing mismatch between what we discover to be effective in the classroom and what happens in public exams (for which we also have to prepare our students).
Yes. And there has to be room for experimentation, which inevitably means failure and unintended consequences. A fear I have is that teachers may, based on a single bad experience, conclude that AI is to be avoided at all costs. Substack has been a useful resource for connecting with educators who are thinking about these issues more deeply than the knee-jerk "reject" response, which (for some understandable reasons) is the easy default. It's been useful to discover like-minded folks navigating this path together!
I agree. Sorry, I didn't mean "trap" to be a dig.
Very thoughtful, and very clever way to start -- with a writing assignment, and then have them use AI....