What Brookings Gets Right About AI in Schools - And Where It Falls Short
The Evidence Is In. Now What?
Ultimately, we find that at this point in its trajectory, the risks of utilizing AI in education overshadow its benefits.
Burns, Mary, et al. A New Direction for Students in an AI World: Prosper, Prepare, Protect. Brookings Institution, Jan. 2026.
Earlier this month, Brookings released what may be the most comprehensive report to date on AI in K-12 education. The headline finding won't surprise most teachers: they are living the reality every day.
I can't do justice to 164 pages in a single post - readers should explore the full document themselves. But after a year of writing about this topic, the report confirms what many of us have been observing on the ground: AI's educational benefits remain largely theoretical while the harms are already here.
What the Report Covers
The Brookings study is serious scholarly work: 505 participants across 50 countries, over 400 research articles reviewed, and a Delphi panel of expert consultations. Unlike most AI-in-education coverage on Substack, which remains stubbornly US-centric, this report draws on international perspectives that enrich the analysis.
The report asks two questions: What risks does AI pose to children’s education? And what can we do now to prevent them while capturing the benefits?
The potential benefits and risks outlined below will be familiar to anyone following this conversation:
But the executive summary delivers the news most teachers already know: the risks currently overshadow the benefits.
Why? Because, as the authors note, “the risks of AI differ in nature from its benefits—that is, these risks undermine children’s foundational development” (p. 12). Benefits are additive - when they appear, they enhance what already works. Risks are foundational because they threaten the very capacities students need to benefit from education in the first place. You can’t achieve skill “enhancement” until the foundation is stable and, at least right now, no one has demonstrated how to build a stable one for kids using AI. Simply put, the younger the user, the greater the risk.
What Surprised Me
97% of Teachers Report Using AI … And Enthusiastically
An internal Brookings survey of 303 teachers in the U.S. and India found that only 3% reported not using AI at all.
This genuinely surprised me, and it may surprise readers here on Substack, where my sense is that the algorithm tends to surface voices skeptical of AI adoption. I’ve spent much of the past year arguing that teachers are behind the curve and that the gap between student and teacher fluency represents a structural problem. The 3% non-use figure complicates that narrative.
The sample is small and limited to teachers from just two countries, so I'm wary of overgeneralizing. But if it reflects broader trends, teacher adoption may have accelerated alongside student use over the past year. That would at least address the literacy gap, even if we don't know how proficient most teachers actually are.
From the report:
Teachers report being enthusiastic AI users, particularly of LLMs, lesson plan generators, and student learning platforms... Twenty-seven percent described themselves as “beginning to explore” AI, 37% reported “trying a few things,” 34% indicated “regularly integrating” AI into their practice, 24% reported creating specific AI activities, and 17% described training or supporting teacher colleagues in AI use. (p. 38)
The usage categories overlap, but the picture is one of teachers starting to experiment more broadly. The report also notes that 78% of teacher responses indicated using AI primarily for productivity: generating parent emails, grading and feedback, translating materials, and creating worksheets and lesson plans. A UK study found teachers using ChatGPT for lesson prep saved 31% of their planning time without sacrificing quality. A Gallup survey put the savings even higher, nearly six hours per week. (p. 39)
So what explains the 97%? It could be capitulation, acceptance of inevitability, or encouragement (or requirement) from administrators. Probably some combination of all three. But if anything close to that figure holds for American K-12 teachers, it represents significant movement in the past year, and suggests the teacher-student gap may be narrower than I assumed based on what I’d been reading elsewhere.
What Didn’t Surprise Me
Students Know Exactly What They Are Doing
The report includes a striking visualization comparing how different stakeholder groups rank AI’s risks. Look at the chart.
Parents and teachers are essentially aligned on cognitive undermining as the primary risk - 46% and 44% respectively. No surprise there. But students? Sixty-five percent identified cognitive undermining as the primary risk - the highest of any group by far.
For me, this was the most interesting and validating finding in the entire study.
Students know that when they use AI in certain ways, it's not helping them learn. They're not deluded. They're not confused about what constitutes genuine understanding versus outsourced thinking. They know they're taking shortcuts, and they know those shortcuts come at a cost.
This confirms what I've been arguing: students are more self-aware than we give them credit for. The problem isn't that they fail to understand the risks. The problem is that schools keep assigning tasks students would rather offload, and we've given them no compelling reason to do otherwise.
Meanwhile, experts assigned only 18% to cognitive undermining. Why so low? They’re thinking about a different threat. They assigned 27% to safety concerns, compared to just 3% from parents and 6% from teachers.
Experts are clearly attuned to safety risks that parents and teachers aren’t tracking. But when it comes to understanding what AI actually does to learning, students are the experts. And at 65%, they’re telling us the problem is bigger than the adults realize.
Adults and Children Use AI Differently
The report is direct on a point that needs to be constantly underscored:
A professional using ChatGPT experiences different cognitive demands than a secondary school student using ChatGPT. Professionals are harnessing AI’s enormous productive capacity to optimize work that they often already know how to do, accelerating processes they have mastered from years of repeated professional practice and reflection. They are therefore more likely to use AI as a cognitive partner. For students, the situation is fundamentally reversed. They are not mini-professionals. Their brains are developing, undergoing crucial processes of neural pruning and strengthening that depend on repeated cognitive effort and struggle. They lack the metacognitive skills, critical thinking abilities, and neurobiological maturity of adults. (emphasis added, p. 57)
This distinction matters enormously, and is routinely overlooked. When adults defend AI use by pointing to their own productivity gains, they're describing a fundamentally different relationship with the technology. Adults have already developed the cognitive capacities that AI now assists. Students are supposed to be developing those capacities through the very work they're outsourcing. Assuming students will use AI the way adults do is perhaps the biggest mistake we make when we bring it into the classroom.
The report puts it simply: “Because humans have evolved to cognitively offload, students naturally take shortcuts when given the opportunity.” (p. 58).
Developing brains, limited executive function, and natural impulsivity mean that when a shortcut exists, students will take it.
One of the hardest conversations ahead will be whether to teach students how to use AI effectively or to insist on predominantly AI-free spaces in classrooms, at least for students under 16. The contradiction is obvious: whatever schools decide, students will have access to AI in every other aspect of their lives.
Where the Report Falls Short
The report concludes with twelve recommendations organized around three pillars: Prosper, Prepare, Protect. They’re thoughtful, and largely abstract.1
Recommendation 1 is entitled “Shift Educational Experiences in Schools.” The authors correctly diagnose the problem:
Many of the harms identified here—particularly the risks to students’ learning—originate largely from attempting to overlay transformative technology onto educational structures that have, at their core, remained largely unchanged since the late nineteenth century. (p. 129).
They're right. The fundamental structure of schooling (teachers assigning work, students completing it independently, teachers evaluating the output) has remained intact for over a century despite countless reform movements. AI didn't create this problem. AI simply exposed it.
The report then offers a telling admission:
While AI should not drive educational change, it lays bare weaknesses in current systems and provides education systems with a strong motivation to reform their purposes and processes. (p. 129).
Here’s where I part ways with the authors. If educational structures have remained largely unchanged despite decades of urgent calls for reform, what makes anyone think change will occur now? The report says AI should not drive educational change. But it offers no alternative driver.
Brookings isn't alone in this critique. The RAIL (Responsible AI in Learning) framework (which I wrote about in this post) similarly calls for schools to "reimagine" the purpose of education rather than simply "integrate" AI into existing structures. Easier said than done. The prescription assumes a capacity for institutional transformation that the modern U.S. school system has never demonstrated.
One Recommendation Over Three Years
The report asks stakeholders to "identify at least one recommendation to advance over the next three years." This is either admirable realism about how slow institutions move or an implicit acknowledgment that the recommendations are aspirational rather than actionable.
Just one? Over three years? Given the frankness of their opening message, the advice feels far less ambitious than necessary - and three years is an eternity at the current pace of technological change.
Based on what I'm hearing from teachers and seeing with AI-infused edtech marketing, most AI usage in schools still centers on worksheet and test creation, feedback on student writing, and traditional lesson planning. The transformative pedagogical reimagining that Brookings and RAIL call for feels alien to teachers and institutions simply trying to keep their heads above water.
Without significant leadership, resources, or meaningful outside pressure, mustering the political will to accomplish even the first and most fundamental recommendation is going to be extraordinarily difficult. What I'm seeing in most places doesn't give me confidence that schools will meet this moment. And lurking behind the Brookings framework is an even harder question the report doesn't ask: whether the goal is to reform existing educational structures or to prepare students for a world in which those structures may no longer make sense.
Where This Leaves Us
The Brookings report is valuable precisely because it confirms what classroom practitioners have been observing for three years: AI’s benefits in education remain theoretical while its harms are concrete and immediate. Students know they’re undermining their own learning, and so do their teachers. However, the kinds of structural changes necessary to actually address the problem remain as elusive as they’ve ever been.
The report does document successes, often in parts of the world that don’t get much attention. In Afghanistan, where the Taliban has banned women from post-primary education, the School of Leadership has used AI to digitize curriculum and deliver lessons via WhatsApp. For isolated girls and young women, chatbots have become both tutors and emotional lifelines (p. 35).
But for American schools, which have resisted meaningful pedagogical transformation for over a century, what force could possibly compel that transformation now?
The authors say AI should not be that force. I’m not sure what else could be.
Connect With Me
Beyond this newsletter, I work directly with schools, educators, and organizations navigating AI integration. Take a look at my website and reach out - I’d love to hear what you’re working on.
The twelve recommendations are organized under three umbrella headings: PROSPER (1-4), PREPARE (5-8), and PROTECT (9-12). The full list appears below.
PROSPER
1. Shift Educational Experiences in Schools
2. Co-Create Educational AI Tools with Educators, Students, Parents, and Communities
3. Use AI Tools That Teach, Not Tell
4. Conduct Research on Children’s Learning and Development in an AI World
PREPARE
5. Promote Holistic AI Literacy for Students, Teachers, Parents, and Education Leaders
6. Prepare Teachers to Teach With and Through AI
7. Provide a Clear Vision for Ethical AI Use That Centers Human Agency
8. Employ Innovative Financing Strategies to Close the AI Divide
PROTECT
9. Design Child-Safe and Trustworthy AI Tools
10. Strengthen Governance to Safeguard Children and Ensure Equity
11. Model Healthy Technology Use in Homes and Schools
12. Support Emotional Well-being and Social Connection in an AI World
I’ve been told by AP students that learning isn’t the point; a high GPA is. When I ask students, “What is school for?,” the most common response is “To get a good job.” I’m hardly a Marxist, but if the only purpose of education is viewed—by all “stakeholders,” to use that insulting parlance—as churning out producers of capital, then we have lost the thread on AI before the question is even asked.