How Do We Make Thinking Visible?
“How Do We AI-Proof Our Assignments?” Is the Wrong Question
My father was a top litigator, member of the American College of Trial Lawyers, and feared opponent in the courtroom. I was too young to understand everything he was doing professionally, but my memories of his work habits are vivid. After a 10- to 12-hour day in Manhattan, he’d be home in his favorite chair with a brief in his lap, surrounded by memos, files, and accordion folders, preparing for a case.
He was born in 1925, a member of the Greatest Generation, and his career spanned the postwar years through the early 2000s. Despite representing cutting-edge technology clients, my father never owned a computer, a cell phone, or a laptop. He was probably the only attorney alive when he passed in 2011 who did not have an email address in Martindale-Hubbell1. He was strictly analog. Everything he did was by hand on legal notepads or markups of printed material.
His work ethic was legendary. As I got older - even attending law school before pivoting to teaching - my appreciation of his career deepened. I began to hear stories from former clients, older family members, and even opposing attorneys about his most famous cases. One of the traits that set my father apart was his ability to locate virtually any document at a moment’s notice during the pressure of a trial.
To appreciate how difficult this was in, say, 1980, imagine a case involving thousands of pages of documents, any one of which might become critical during a witness’s testimony. Before the age of digitization, lawyers brought in hand trucks of bankers boxes with folders bursting at the seams. My father was known for many things in the courtroom, but a signature skill that made him so formidable was a meticulous filing system he had built over decades that allowed him to find the precise document at exactly the right time.
“When Joe came into court and we saw how he organized the case with that level of detail,” one older attorney told me, “we knew we were in trouble.”
I miss my father for lots of reasons, but those work habits came back to me recently when I sat down with one of our strongest students to talk about how she actually does research - part of my preparation for our history department's effort to redesign the way we teach it. Her approach mirrored much of how my father operated decades earlier, and it reminded me of a concept I first encountered at Harvard's Project Zero: "Visible Thinking."
The Question We Started With
The impulse for the assessment redesign was obvious.
AI can now generate an outline in seconds. It can produce a thesis statement, identify sources, summarize dense academic articles, and write polished prose that is, for most practical purposes, undetectable. I’ve spent the better part of two years teasing out every stage of the research process - from topic selection through final editing - and at virtually every step, AI has the potential to either augment or completely derail the cognitive work the assignment is supposed to teach.
AI itself is not the primary culprit. User intent, AI fluency, discernment, and domain expertise determine whether AI adds value or replaces thinking.
The problem is that mastering these skills requires the kind of discipline and judgment most teenagers haven’t developed yet.
And it’s not just a high school issue. My sixth-grade daughter recently came home and told me she and a friend are entering an “AI Live”2 competition in which middle schoolers create a piece of performance art conveying a message about their relationship with technology, documenting how they’ve used AI in the process. Students as young as fifth grade are using AI wrappers to help structure their essays. I’ve already documented the influence of AI in the middle school debate community, and research databases like JSTOR are integrating AI into their search tools.
As Hemingway once observed about bankruptcy, AI is infiltrating every corner of the school experience - gradually in 2024 and 2025, and now suddenly in early 2026.
This is the conversation every school is having right now, and it almost always starts in the same place.
So when our department sat down to talk about the research paper, the instinct was natural and understandable: how do we make this assignment AI-proof?
But that turns out to be the wrong question. Or at least, an incomplete one. Because once we started talking seriously about what AI could compromise, we found ourselves in a much more interesting conversation - one that had less to do with technology and more to do with what we actually believed a research paper was for. The question that proved more productive was deceptively straightforward: what are the non-negotiable thinking skills we want students to develop through this process, and how can we feel confident that they’ve developed them?
What One of Our Strongest Students Already Knew
In anticipation of our meeting, I sat down with one of our strongest research students - a senior who had written outstanding papers across multiple years of our history curriculum and had also distinguished herself in rigorous independent research in another discipline entirely. I wanted to understand how she approached research, not as a theoretical exercise but as someone who has repeatedly demonstrated it at a very high level.
What she described was reinforcement of everything I’ve been reading over the past two years. She talked about becoming genuinely invested in a question and doing more research than she needed as the material pulled her forward. She described a process of using sources to find other sources - following footnotes, tracing citations, clicking through to the next article to see where a particular claim originated. “Once you come upon one good source or one good piece of information,” she told me, “the best way to find the next best thing is to use that source to find other sources.” It sounds simple, but it’s actually a sophisticated research disposition that many students never fully embrace.
Her organizational system was equally revealing. She would start with a general sense of her argument, create categories for her body paragraphs, and then research into those categories - color-coding primary source evidence, leaving herself analytical notes in the margins so she wouldn’t forget why a particular passage mattered weeks later. “You kind of need to catalog your thoughts a little bit if you’re working on something more long-term,” she said. She wrote her topic sentences last, because the structure emerged from the thinking, not the other way around. When I pointed out that she seemed to be outlining and writing her paper as she researched it, she agreed. “It’s extremely iterative,” she said.
This is the conversation that got me thinking about my father - here was an 18-year-old instinctively developing the kind of skills necessary for deep, lasting learning - and then methodically categorizing her work so she could come back to it days or weeks later. Sure, as someone who grew up with the internet and online source material, her tools were digital, but the thinking and overall approach were eerily similar to how my father dug into his case preparation 50 years earlier.
The Virtues of an Analog Process
I asked her what she thought about moving more of the research process to paper-based, handwritten formats. Her answer surprised me with its nuance. She acknowledged that a student working digitally without AI could probably produce a better final product than a student working entirely on paper. But then she added: “In terms of building skills, the final product of that research paper might not be as good, but more skills would be built by the student by doing it in an analog manner.”
When I asked about AI’s role in research more broadly, she was remarkably specific. The worst use, she said, was at the very beginning - for brainstorming and generating ideas - because “it can become difficult at a point to differentiate what’s your own original idea and what you’ve read through AI.” This is utterly at odds with much of what you read about using AI to brainstorm - and she is not the first student I’ve heard this from.
She didn’t think it should be used for outlining either. And when I asked whether learning to use AI was itself an important skill to develop, she was direct: “I don’t think using AI is as much of a skill to build as other things, like learning how to research and write by yourself. I think that is something that students can learn to do later in life. What’s difficult is to build those fundamental skills.”
This from a top student who has figured it out on her own. I’ve no doubt that if and when she integrates AI tools into her work routine she will continue to be successful. She’s already developed more authentic research skills than most of her peers - and many adults for that matter - but the critical distinction is her mindset. My conversation with her aligns with the gut feeling I’ve had for the past few years that the strongest students understand instinctively where AI harms them. They are committed to doing the hard work first because it works.
The Ongoing Debate
Unfortunately, the picture is not going to be as clean as “keep AI away from students until they’ve built the skills.” In our own department, I teach an advanced Independent Research in History course for students who are heading to college in a matter of months, and I’ve slowly introduced AI tools for discrete stages of the process. The Deep Research models allow students to pinpoint relevant sources faster and with more specificity than ever before. NotebookLM and other digital tools with AI integration allow for a level of organization and file scanning that has already become the norm across industries.
I think that’s appropriate for students at that level, and I’d make the case that exposing advanced students to these tools before they encounter them unsupervised in college is part of our responsibility. But I’m honestly torn, because the line between “ready” and “not ready” is never going to be as bright as we’d like it to be. And the fact that one of our most advanced research students does not rely on AI for any stage of her process speaks for itself. If and when AI enters the picture, it should add value and never replace the core skills.
Disciplinary Differences
The picture gets even more complicated when you look across disciplines. In the science field where she also excels, the organizers behind the major competitions have released explicit guidelines permitting students to use AI for improving the phrasing and clarity of their writing. The use of AI is becoming accepted practice in scientific research more broadly, where communicating findings clearly is essential but writing has never been the primary skill being assessed.
The trend is clear: different disciplines are arriving at different answers, and the humanities’ emphasis on writing-as-thinking puts us in a fundamentally different position than fields where writing may be seen more as a delivery mechanism for other kinds of expertise.
What holds all of this together, I think, is that the hardest skills to build are the most human ones. To perform advanced research, you need judgment, patience, perseverance, and skepticism - which are far more difficult to measure.3 These are also prerequisites for developing AI fluency. Not every student will be ready for open-ended AI use on the same timetable, and not every discipline will draw the line in the same place. But the sequence matters. Independent thinking always has to come first.
What We Built
Every humanities department in every school is going to have to work through some version of this conversation, and not everyone will land in the same place. Some will be more conservative about AI use, others less. The key is being honest about the trade-offs - and then designing accordingly.
What our department ultimately landed on was less an “AI-proof” assignment and more an attempt to make the entire research process visible and documentable. We were thinking practically about how to shift the burden away from surveillance and toward structured evidence of student thinking at every stage. The redesign moved almost all of the cognitively critical work into the classroom and onto paper - handwritten worksheets, annotated hard-copy sources, in-class quizzes that require students to explain their reasoning rather than reproduce information. The grading structure shifted accordingly, with the final product counting for significantly less than the process checkpoints that precede it.
I don’t want to paint this as a finished product or a triumphant case study. We’re still working through real questions, and I suspect every school attempting something similar is navigating its own version of the same tensions. But what interests me most is a connection to something I first encountered in a world where generative AI wasn’t remotely part of the conversation.
Making Thinking Visible
Twice in the past twenty-five years - most recently in 2009 - I attended Project Zero, a week-long summer institute at the Harvard Graduate School of Education. Project Zero has been around for decades, and one of its central preoccupations is what they call “Visible Thinking” - a research-backed framework built around the idea that if you want students to think well, you have to make the thinking itself visible, not just the products that thinking is supposed to produce.
The approach emphasizes three core practices: thinking routines that structure how students engage with content, documentation of student thinking as it happens, and reflective professional practice among teachers. Project Zero is still going strong, and its framework is arguably more relevant now than at any point in its history.
What struck me as our department’s redesigned research paper took shape is that we had independently converged on almost exactly this framework - without everyone in the room having heard of it. The timeline quiz that asks students to defend their periodization choices is a thinking routine. The annotation requirements that force students to analyze primary sources through multiple lenses are documentation of thinking. The revision log that tracks changes in argument and evidence is a record of cognitive development over time. The one-on-one conference is a moment where a teacher can observe thinking in real time, rather than simply evaluate a product after the fact. Even the final reflection, in which students assess the quality of their own evidence, is a metacognitive exercise straight out of the Visible Thinking playbook.
We Can’t Avoid These Questions Any Longer
The pressure of AI forces us to answer a question that schools have been able to avoid for a very long time: if we can’t trust the product, what evidence of thinking do we actually have? For most traditional assessments, the honest answer was: not much. The research paper, as most of us experienced it in school and as many of us have been assigning for years, was always a product-focused exercise in which process was assumed rather than observed. AI has exposed just how fragile that assumption was.
Visible Thinking is not an AI strategy. It was coined years before anyone imagined a world where a chatbot could write a competent five-paragraph essay in 15 seconds. But it may be the most useful framework schools have for responding to this moment - not because it “solves” AI, but because it redirects the conversation from surveillance to pedagogy, from policing products to designing for thinking. Interestingly, in Project Zero’s current materials for this summer’s institute, I found virtually no mention of AI. It’s always been about thinking.
An Analog Approach in a Digital World
My father's filing system may have been from another era, but the practice of externalizing one's thinking - organizing, categorizing, making connections visible - is as old as the written word itself. He simply adapted it for the courtroom. Reflecting on his practice now, I recognize the Visible Thinking concept I didn’t have a name for until I sat in a lecture hall at the Harvard Graduate School of Education more than twenty years ago.
Every document organized, every connection ingrained, every piece of evidence coded and accessible because the thinking behind it was explicit and externalized. When opposing counsel saw those tabs, they weren’t intimidated by the system itself. They were intimidated because they could see that someone had already done the cognitive work of integrating thousands of pages into a coherent argument, and that the thinking behind that preparation meant every document could be retrieved at the moment it mattered most.
I know my father would not have stood in the way of AI advancing and improving efficiency for routine tasks in his office. AI will automate many kinds of knowledge work, much of which we’re still figuring out. But it doesn’t change the importance of visible thinking for the truly essential cognitive efforts that require human judgment, synthesis, and review. I know he would have resisted using AI for his own thinking.
And I, for one, would not have bet against him with just his hard copies, legal pad, ballpoint pen, and a highlighter.
Connect With Me
Beyond this newsletter, I work directly with schools, educators, and other institutions on pedagogical questions raised by AI. Take a look at my website and reach out - I’d love to hear what you’re working on.
Martindale-Hubbell is the legal industry’s oldest and most prestigious rating service, used for over 150 years to verify a lawyer’s ethical standards and professional ability.
Winners get an all-expenses paid trip to MIT to share their performance.
You can find a wonderful “Periodic Table of Critical Thinking” here at the One Percent Rule.
Great piece! I found it really thoughtful (as always), but I want to push back on one assumption:
‘I don’t think using AI is as much of a skill to build as other things, like learning how to research and write by yourself… What’s difficult is to build those fundamental skills.’
This assumes research is a ‘fundamental skill’ that exists independently of tools. But research is a material practice. When your student colour-codes sources and leaves marginal notes, that’s not making her thinking visible—that’s thinking itself. The annotations, the categories, the organisation are constitutive of thought, not representations of it.
Peter Damerow spent his career showing that knowledge production is always material—most people who dig into where knowledge comes from end up at his feet. You can’t learn to research 'by yourself,' only with some material configuration: notecards, highlighters, databases, citation managers. The skill develops through engagement with specific tools, not prior to them.
You don’t think around whiteboards, you think with them. Same for AI in research.
Research-as-practiced-with-notecards is actually a different cognitive activity than research-as-practiced-with-AI, not the same activity done with different aids. So the real question isn’t ‘should students avoid AI to learn fundamentals first?’ It’s ‘which material practices enable the most generative thinking?’ There is a difference between using a calculator to skip long division and using AI to ‘summarise’ a text you haven’t read. In the former, you offload a routine task; in the latter, you offload the encounter with the material itself.
But this is where AI gets interesting: Boden showed that AI excels at combinational and exploratory creativity—rapidly testing frameworks, surfacing connections, enumerating possibilities. That’s valuable material practice. But transformational creativity—the fundamental reframing when points in your argument are stretched and inconsistent—still requires your engagement with the sources themselves. AI can help you stress-test categories; it can’t generate the new impressions that come from sustained contact with the material.
In this view, research isn’t a ‘fundamental skill’ we learn so we can use tools later; it is a skill that emerges from our friction with tools. When we tell students to avoid AI to learn ‘the fundamentals,’ we’re often just asking them to use 20th-century tools (highlighters/notecards) instead of 21st-century ones. A more productive debate might be: ‘Which tools provide the right kind of friction to spark transformational creativity, and which ones smooth over the thinking until it disappears?’
I can’t recommend Damerow highly enough here.