Thank you for sharing your wisdom and, as always, for wrapping it in such a beautiful story.
Great piece! I found it really thoughtful (as always), but I want to push back on one assumption:
‘I don’t think using AI is as much of a skill to build as other things, like learning how to research and write by yourself… What’s difficult is to build those fundamental skills.’
This assumes research is a ‘fundamental skill’ that exists independently of tools. But research is a material practice. When your student colour-codes sources and leaves marginal notes, that’s not making her thinking visible—that’s thinking itself. The annotations, the categories, the organisation are constitutive of thought, not representations of it.
Peter Damerow spent his career showing that knowledge production is always material—most people who dig into where knowledge comes from end up at his feet. You can’t learn to research ‘by yourself’, only with some material configuration: notecards, highlighters, databases, citation managers. The skill develops through engagement with specific tools, not prior to them.
You don’t think around whiteboards, you think with them. Same for AI in research.
Research-as-practiced-with-notecards is actually a different cognitive activity from research-as-practiced-with-AI, not the same activity done with different aids. So the real question isn’t ‘should students avoid AI to learn fundamentals first?’ It’s ‘which material practices enable the most generative thinking?’ There is a difference between using a calculator to skip long division and using AI to ‘summarise’ a text you haven’t read. In the former, you offload a routine task; in the latter, you offload the encounter with the material itself.
But this is where AI gets interesting: Boden showed that AI excels at combinational and exploratory creativity—rapidly testing frameworks, surfacing connections, enumerating possibilities. That’s valuable material practice. But transformational creativity—the fundamental reframing that becomes necessary when the categories of your argument are stretched to the point of inconsistency—still requires your engagement with the sources themselves. AI can help you stress-test categories; it can’t generate the new impressions that come from sustained contact with the material.
In this view, research isn’t a ‘fundamental skill’ we learn so we can use tools later; it is a skill that emerges from our friction with tools. When we tell students to avoid AI to learn ‘the fundamentals,’ we’re often just asking them to use 20th-century tools (highlighters/notecards) instead of 21st-century ones. A more productive debate might be: ‘Which tools provide the right kind of friction to spark transformational creativity, and which ones smooth over the thinking until it disappears?’
I can’t recommend Damerow highly enough here.
I'll check it out - but here is where I would push back a little. Suggesting "When your student colour-codes sources and leaves marginal notes, that’s not making her thinking visible—that’s thinking itself" is a bit of semantics - the point about visible thinking is that the color-codes and notes are external representations others can observe. "Thinking" itself cannot be seen, right? Unfortunately, while I totally agree that one can think very effectively with AI, the way I've seen most younger people do it is not the way Damerow envisions. I'm sorry, but thinking with AI is not the same as thinking with whiteboards. You write all your own work on whiteboards - that's not the case with AI.
I agree on the chasm between AI in student practice vs. what is possible. However, I think the distinction I make is important because it leads you to different (and, I think, more productive) solution spaces. Students use AI poorly because we’re treating it as an optional add-on rather than understanding how representational systems restructure cognition.
For example, when students annotate with highlighters vs. when they use AI to generate topic clusters from sources they’ve read—these aren’t the same cognitive activity with different tools, they’re different activities that develop different capabilities. The question then becomes which capabilities do we want to develop, and what material configurations enable them?
A good question to frame engagement with Damerow is to ask: why are students using AI poorly?
Given that, a good entry point might be:
*The Origins of Writing as a Problem of Historical Epistemology* (1988, often reprinted)
Why? Because Damerow isn’t just talking about cuneiform as a historical curiosity. He shows how early accounting tokens and numerical marks restructured cognitive possibilities. Writing didn’t just record thought; it reorganised what could be thought.
For education, this should immediately highlight:
• External representations aren’t neutral carriers.
• They change what counts as a problem.
• They scaffold forms of reasoning that would otherwise be impossible.
It makes the constitutive argument concrete: clay tokens → grouping → abstraction → number concept. You can practically see the cognitive shift.
Pedagogically, it’s gold because it implies:
• The materials we give students shape the concepts they can form.
• Representational systems are developmental engines.
• ‘Tools for thinking’ are not optional supports—they generate forms of thought.
The essay's three-part framework is directly transferable to educational settings:
• Coding structures: How do your classroom representational systems actually work? (Not how do they represent concepts, but what operations do they enable?)
• Social context: What are the "administrative needs" of your learning environment? (Assessment, accountability, communication, record-keeping?)
• Historical dynamics: How do tools and practices co-evolve in your classroom? (The "co-evolution of proto-cuneiform with certain arithmetical notions" has direct parallels to how, say, graphing calculators reshape what "algebra" means)
Enjoy! And love to hear your thoughts.
This is very interesting and if I had more time I'd have more to add - but my one immediate thought that jumps out to this question - why are students using AI poorly? It's not just students. The vast majority of people are using AI poorly - that's why it will be so difficult to teach it well.
I’ll be a bit glass half full here.
The telephone didn’t just enable people to express pre-existing ‘conversation skills.’ The technology created entirely new cognitive and social demands: managing conversation without visual cues, negotiating overlapping speech, developing opening/closing rituals (‘hello?’ / ‘well, I should let you go’), understanding when silence means technical failure vs. thoughtful pause.
We’re maybe 2-3 years into mass AI use, and still in the awkward early-telephone phase, working out the conventions through trial and error. The telephone restructured cognition in significant ways: distributed memory through address books, changed spatial reasoning about social networks, etc. We should expect the same with LLMs.
Schools could be laboratories for developing good AI material practices, not bunkers waiting until someone else figures it out. The challenge is harder than it was for telephones—you’re trying to help students develop productive AI practices while preserving the deep reading and sustained analytical engagement that current AI use often eliminates. That’s why I believe the material design question matters so much.
This is actually a great analogy. For the record, I agree that we are barely into the phase of mass use of LLMs and generative AI - 5 years from now is going to look very different in some ways, while the same old problems will persist in others. But 8 years of student access to LLMs will give us some data on impact, and I fear that, much like with social media, the educational response (and general lack thereof) means it's not likely to be good. What I resent is the experimentation on students. Not learning how to talk on the phone would not necessarily stunt your development (and it really wasn't a huge stretch to learn how to do ...), but LLMs are different. Schools as laboratories given this technology does not give me a lot of confidence. But I do think we will figure it out somehow.
There's a tension there that we can't escape:
* we don't know what to do but we need to do *something*
* *anything* we do could have unfortunate consequences, no matter our intentions
It's not that we *want* to experiment on kids, but we *are* experimenting on kids as anything we do is an experiment.
Your social media comparison points to something I've been thinking about a lot—I've been drafting an essay on how we can't know what technology is without deploying it, but some tradeoffs only become visible after irreversible commitments. If you're interested, I'd be happy to share the draft. It might give us an alternative lens to 'experiment vs. protect'.
This was a really good piece. I especially enjoyed the familial example of your father doggedly organizing bits and pieces of the law into an unimpeachable mini library for each case. I also appreciate the perspective of your bright student, whose writing process was not too far off from my own in college. But I think your piece highlighted two concerns for me that I have yet to see addressed to my satisfaction:

1) What if our assumption, that students need to learn how to write a certain way to create a certain kind of output, is wrong, or at least will be rendered obsolete thanks to AI? I used to assign 20-page research papers, assuming (wrongly) that this is what students need to do. But this is what *I* needed to learn how to do to become a political scientist in the analog era of long-form political science writing (and now even some journals are allowing AI use!). It is not what most adults actually do. Will cultivating this skill continue to translate to the workforce? I don't know, but I'm not convinced it will be helpful even within the next 5 years. Other skills, like adaptability and the ability to teach yourself, might become vital as technology supercharges our workflows and communications.

2) Why must we invade our in-class time with practice in the craft of research and writing? Whither content? Most of the classes I teach at the college level are not introducing skills; they are reinforcing them while introducing content (i.e., theories of international relations). I think we need to front-load our curricula with skills-based courses so that we can spend our time on the content of each individual course, not the skills we believe will be necessary upon graduation. This would allow us to eventually apply discipline-specific AI literacy (DSAIL) without forsaking crucial classroom time where we interrogate, explore, and unpack concepts and theories central to the discipline.
Thanks for the thoughtful comment - you raise provocative questions. I don’t think many teachers are ready (and I confess I am one of them at the moment) to give up on the research paper. As for your claim that this isn’t what people actually do, I would beg to differ in some respects - legal briefs, memos, and opinion letters are all forms of “research papers”; in business, reports and other written and digital documents are still a staple of how information is communicated and digested. Now, how those outputs will generally be produced going forward is a legitimate question, but I find it hard to believe that the ability to sit down and compose a coherent sentence is not going to continue to be a marketable skill. Maybe not in the same way as pre-2022, but human judgment and nuance will still be important. The ability to proof and edit AI work will definitely be important. LLMs have definitely lowered the bar for folks who have great ideas but never mastered writing to the extent that they felt confident communicating. The truth is, I think it’s going to be very difficult to game out the next 2 years, let alone 5.
Can judgment, patience, perseverance, and skepticism be taught? Or more specifically, can college professors teach these things, particularly in large courses or general education classes where enthusiasm is already low? I'm not sure even the best teachers know how. It seems many of these qualities are deeply embedded in a person way, way before they ever set foot on campus. Most professors are subject masters, but they have far too little understanding of how to develop these talents that are more about personality and less about intellectual growth. Is there any research that shows any meaningful interventions in these areas? I really don't know.
It’s a great question and I would say not easily. Those skills need to be developed earlier and the huge question is whether that can be done prior to AI use or possibly in tandem. That’s where many of the discussions diverge.
As long as there is a profit motive in AI, it is an existential threat to our own intelligence.
I think it may be an existential threat regardless of whether there is a profit motive!
Hi Stephen, we’ve actually built the only tool available, designed from the ground up, that grants students and their parents the ability to make what’s invisible visible - in real time. It addresses the missing piece between cognition and metacognition and provides the feedback loops to positively reinforce these desirable traits. Check it out! https://vimeo.com/1103662449
I recently returned from presenting at The Westmark School in LA. One of my slides was explicitly titled: Making the Invisible Visible. I’d love to show you how it all works.
Applicable to higher ed as well.
The distinction between "AI-proofing" and "making thinking visible" is exactly right. And your student's insight — that AI is most dangerous at the brainstorming stage because it blurs the line between your ideas and its output — is something I haven't seen articulated that clearly anywhere else.
This connects to something I've been exploring from the other direction. We built an AI platform (chumi.io) where students can have conversations with historical figures — Galileo, Frederick Douglass, Cleopatra — each grounded in their actual writings and historical context. The design is deliberately structured so the student has to do the thinking: formulate the questions, challenge the figure's claims, connect what they hear to what they already know.
What's interesting is that the conversation transcript itself becomes a form of visible thinking. A teacher can see exactly what a student asked, how they followed up, whether they pushed back or just accepted answers. It's not a product the student hands in — it's a real-time record of their reasoning process.
PEG's point about "which tools provide the right kind of friction" resonates here. The friction in talking to a historical figure is that you have to know enough to ask good questions — and that's a very different cognitive demand than asking ChatGPT to summarize a source for you.
This is fascinating, thank you. Reading the comments below about having a 'glass half full' attitude, I wonder if AI could inadvertently develop students' thinking processes, as teachers are forced to confront the issue of relying on the product rather than observing the process. As a primary teacher, I instinctively feel this focus is positive for students of all ages. We try to do this in maths when we set 'reasoning' problems, but younger students could also attempt more visible thinking in the humanities.