Discussion about this post

Michael Burns:

Thanks for sharing, Stephen. I appreciate what Trinity is trying to do with that talk. There's definitely a demand from teachers who are wondering, "But what do we do now?!" when it comes to AI in our classrooms and in the lives of our students.

Too often, though, those talks and webinars are full of tech hype and not grounded in the day-to-day struggles we face as teachers. I go into those things hopeful for guidance, or at least some understanding of what we're dealing with, but more often than not I come out angry, frustrated, or demoralized. These folks have so many obvious blind spots. Does Kurzweil really not see any problems with someone (or maybe four men) being able to influence his grandchildren's thoughts and behavior on a brain-wave level?!

I'm actually trying to create an event for my school that would bridge that gap or at least be a more authentic experience for teachers. Who would be on your dream panel or speaker list?

Definitely on my dream panel is Nita Farahany (https://nitafarahany.substack.com/). She's mentioned in that Times article and here: https://www.nytimes.com/2025/11/14/magazine/neurotech-neuralink-rights-regulations.html. She's doing fascinating work on the complications of brain-interaction technology. Check her out if you don't already know her.

PEG:

This captures a frustration I recognize even without being there: educators asking urgent questions about now and getting answers about 2035.

But I wonder if the mismatch reveals something useful: we're looking to the wrong people for answers. The futurists can't help us because they're not in classrooms wrestling with these questions daily.

You make an excellent point about needing frameworks rather than predictions. I’d add: we might learn more from looking at present algorithmic failures (Robodebt, UK Post Office Horizon scandal) than from speculating about AGI. Those cases show us the real problems—opacity, deferred human judgment, accountability gaps—that we’re already navigating with students.

The question about wisdom versus knowledge wasn’t really about 2040. It was about what education should cultivate now: judgment, ethical imagination, the capacity to question systems rather than just optimize within them. Those capacities matter whether students have ChatGPT or nanobots.

Maybe the practical path forward is less about predicting what AI will become and more about feeling our way through the transition—experimenting, documenting what works, sharing horizontally with other teachers. The people who figure this out won’t be the ones with the “right” framework. They’ll be the ones willing to try things, fail, learn, and share.
