This is a really balanced, thoughtful (and helpful) analysis, Stephen. Thank you.
I think your point about it putting the onus on the students to self-regulate is crucial in all of this. You suggest it is unlikely and that absolutely chimes with my classroom experience.
I think the Socratic approach has merit and it's in line with the way many teachers are trying to use AI to enhance learning rather than to shortcut to a final product - but the subtext here, as you rightly point out, moves towards teacher replacement.
Fundamentally, I think humans are motivated to learn by other humans. When I'm feeling optimistic, I think that impulse will mitigate ChatGPT's educational land grab. In darker moods, I fear we will lose sight of it and be swept away!
I can't quite put my finger on it, but I'm really skeptical this will work as intended. I do agree - in theory - that the Socratic model has merit. But in my experience working with students, that's not how they use AI. And once they've seen the regular version, how many are going to want to play the Socratic game? And, the larger question: if it does revolutionize learning, as its most ardent proponents suggest, what then? How does that not accelerate the replacement of teachers?
I share your scepticism here. As presented, the intentions of Study Mode seem worthy and, in fact, hard to argue against. You're right to probe into the subtext and, as is often the case, the reality in a classroom is rather different from the projections of a tech boss. It's why teachers have to try to drive the narrative - really hard when we're all treading water furiously just to stay afloat in a very choppy ocean.
I introduced my 7th-grade social studies students to AI with the "open the pod bay doors, HAL" clip. Now, I'm teaching 10th-grade English, and many of those same 7th graders are in my English class this year. I am determined to teach them how to use AI as effectively as, or more effectively than, the Socratic method I modeled for them back then and will model again this year. Difficult? Yes. Are there more effective alternatives? Not that I can think of, but one of my first-day slides is about intellectual humility: "You don't know what you don't know." Open to any and all advice.
Sounds like you're doing good work. I think Study mode may be able to operate with students under the thoughtful guidance of an experienced teacher. The question for me is what happens when they leave the classroom. I'd love to hear how it goes.
Steve, this is terrific. I consider myself someone who attempts to stay fairly up-to-date with AI and I didn’t even realize that this new mode was being released! I think one of your most important points is your recognition of how difficult or impossible it’s going to be for students not to toggle out of tutor mode and get the answer key. We see how much difficulty students have with regulating their use of social media, and it’s crazy to expect that they will be any more capable of regulating how they use AI. It’s not even August and I already feel exhausted. lol
Thanks, Pete. I’d love to hear what you’re doing at HWL and how your faculty are approaching and handling it. I imagine you’ve faced similar issues as we have, with similar fears and concerns. I’m going to be very curious how students come back this fall - whether they will accelerate and continue to incorporate AI into their process as the “new normal” or be willing to step back and be open to slowing down and learning from teachers who have thought about it. My fear is that too many faculty are “opting out” of the conversation and allowing themselves to fall further and further behind. As historians, we know things rarely go as predicted; nothing is inevitable, and outcomes rest on very real human choices. Right now, the tech companies seem to be making almost all of them when it comes to AI.
I played with this mode a bit, and I was not impressed. It's essentially an advanced GPT that asks questions to try to get at what you're trying to understand. But when I used it, it still gave me the answer - it just took longer to get there.
I agree. Definitely feels like more of a PR and tactical move than anything truly unique.
Seems like OpenAI's attempt to get in with K-12 and post-secondary ed before Microsoft and Google get a strong foothold. I can't even imagine what it's going to be like trying to keep up with this as a teacher, and higher ed is just going to be a free-for-all.
I agree with the points made. The ease of toggling back to 'give me the answer' mode may make this superfluous. But it could also affect attendance at in-person classes in higher education (what's the need if I can learn from anywhere?). Even well-intentioned students might fall into the trap. In my experience, the students who need such tools the most won't have the patience to work with them and will opt for answers, developing a false sense of competency. And if this reduces in-person interactions between students, it is likely to make them even more hesitant to seek out human interaction, with broader societal implications in the long run.
I'm hopeful this is not the case, but I just haven't seen the demand for students to interact with chatbots in this way. Maybe that will change. If it were the default mode for student AI accounts I'd feel a little bit better, but the full version is just a click away.
Just incredible how BigAI is infiltrating campuses and practicing fairly invasive behavior modification while students and Colleges are being asked to pay for it. This won't end well for a lot of these young people. The State and Corporate interests should be separate from our core institutions. This is not what a democratic society would allow.
What's such a shame is that there is real opportunity here, but the speed with which it's happening has caught schools flat-footed; they don't really have much say in the matter short of blocking the technology from their campuses, which is just not an option in modern society. I'm honestly more optimistic after the GPT-5 issues because, if they do portend a slowdown in development, at least it will give people a breather to catch up with what's already here. Regulation is not going to be the solution, at least not with the current administration.
I 100% agree with you that it's a strategy. Many people focus on the speed at which new releases are entering the market, but what's more interesting - and a bit easier to miss - is what exactly is being released. I published a post about it a couple of weeks ago, if you'd like to take a look (https://karozieminski.substack.com/p/founders-heres-what-you-need-to-know) 🤗 Great article, thank you for sharing!
It's not so much that I'm cynical - who knows? Maybe kids will use Study mode or whatever the other AI companies are calling it. But after being mostly silent about the impact AI was having on education, now they are actively seeking this market - by giving away the paid tiers for free and openly claiming it will help with learning when the evidence is decidedly not clear. But students will be using these tools so schools and colleges will need to adapt. I'll take a look at your piece.
Stephen, thank you for bringing this development to my attention; I work with a fleet of college writing teachers and I try to keep them apprised of AI conversations. What really stands out to me here though is your admission at the end that we are an "outdated and vulnerable educational system" (whether we like it or not). You seem both techno-skeptical and techno-optimist at the same time. I'm wondering what your vision and hopes are for AI integrated into higher education that would be useful, rather than predatory? I'm new to the blog, so perhaps you've written more about this elsewhere!
I think that's right - I am (mostly) on the techno-optimist side long term (I think?), but some of the developments over the last few months have made me more techno-skeptical, and nervous in the near term about whether schools are really prepared to deal with the scope of the changes happening. Many teachers seem to be pretending it doesn't exist, either by downplaying the tech (competency deniers) or simply blaming the students and thinking harsher punishments or "bans" are the answer. What really got me about "Study Mode" is the assumption that this is how most students want to use AI. Teachers are either going to have to lead the way through innovative curriculum redesign or assessment practices, OR convince students at certain stages in their development (especially as writers) that an overreliance on AI may damage long-term skill building. A tall order as long as grades continue to drive the process. I think it's much tougher at the college level. I work with strong HS students, and we have a little more daily contact with them and (we like to think) a little more influence. There are lots and lots of interesting people writing in this space. Here is one who takes an approach that may address your question:
https://nickpotkalitsky.substack.com/p/dont-ban-ai-redesign-instruction
Thanks for your reply. Side note: we also need more HS-college conversations here. What HS teachers are doing to embrace or teach with/around AI is a complete mystery to us in the college setting. It's hard to assess what our students already know and have learned (there are no standards for AI literacy that we can assume high schools have covered, nor do we have any standards for AI literacy shared across the university!). Thanks for the article - this is exactly what I've been thinking about: how we redesign instruction. We (in higher ed) are really not even ready to have this conversation, and certainly not fast enough to keep pace with the tech infiltration.
Yup. HS-College collaboration is a fantastic idea! I'd attend that conference!
Or at least make it visible rather than buried under tools. This has dropped right before the start of fall semester. I am not sure what percentage of higher education faculty are aware of this, outside of those who have taken it upon themselves to learn more about GenAI this summer. The next academic year, which is rapidly approaching, will shine some light on the impact of these tools.
Your final line was powerful … massive changes are coming whether or not the status quo is ready. This time, the status quo doesn’t have to be ready. I also liked your metaphors about encirclement. In the past, it was always the status quo which could encircle and choke out true change. This time, the tables have been turned.
For the first time I think the writing is on the wall about the end of “the age of schooling”. The tsunami that is approaching is that of “the age of learning” and unfortunately there will be many casualties. But it’s not like there haven’t been warnings for a very long time. The intentions in the age of schooling were noble and spawned a huge ecosystem, but . . . It will be devastating for teachers and students, but wonderful for learners, learning and those who help that happen for kids in schools.
I agree that calling this new feature 'Study and Learn' seems a bit predatory, and I do wonder how many students would really opt for this more 'restrained' mode vs the regular, 'give you the answer straight away' mode if all it takes is a click to toggle between them. But then again I think back on the days of yore when I had to rely more on online resources like SparkNotes (!) and LitCharts than on my teachers, who often had limited capacity to give me the depth of interpretation I wanted for my study of literary texts, and I don't think what's happening with the GPT 'Study' mode launch is so different in logic. AI or not, students always have sought, and always will seek, better alternatives for their learning - if they are motivated enough in the first place. Those who don't care enough will probably not even bother with 'Study' mode (or AI, tbh!)
It's not so much that the regular version gives you the answer. Since students have been using AI in a more open-ended way, I can imagine most will bypass a mode that forces them to answer questions before handing over the information they're after. It's also quite possible to have an excellent Socratic dialogue in the regular default mode. I don't know. Something about this rollout feels more like marketing and a reaction to criticism. It's too little, too late in my view.
Yes fair points all! Maybe it's more cosmetic than substance with a nice-sounding wrapper of 'Study and Learn' right now. But perhaps (and hopefully) they will improve the mode and somehow incorporate real pedagogical 'nudges' to make students learn and not just rely on it for quick solutions.
Let’s hope so. I’m usually not such a cynic. I have high hopes for AI long term. But this feels a little like a flanking maneuver that will put more faculty on the defensive and make it harder to keep AI out of the classroom.
This is a good point. I've seen little coverage or discussion of how much data OpenAI is able to ingest through its users - I know you can technically turn "off" training by toggling the "Improve the Model for Everyone" setting, but how many people know how to do that? And do we all believe it? I think they believe they are doing the right thing for education - in other words, I don't think they're uninterested in students learning. But is it a priority, and are they experts in how to do that? No. It's easy to see why trust is eroding.
Isn't the real motive to scrape data for free, as usual? Aren't students really just the equivalent of the poorly paid "human feedback" workers in third-world countries? That they are genuinely interested in education is doubtful.