AI Whiplash: Every Breakthrough Comes With a Disaster
The tech keeps getting better. The companies building it keep getting worse. Educators are stuck in the middle.
What Happened These Past Few Weeks
When I was younger, I was a huge sports fan - baseball, football, basketball, tennis, golf - you name it and I watched it. I loved following the standings, memorizing stats, and going to games. This was the late ’70s and ’80s, before sports became the multi-billion-dollar juggernaut it is today, with player salaries in the stratosphere and an entire media ecosystem focused on everything but what’s happening on the field.
But by the 2010s, my interest started to fade. Coverage is now dominated by trades and drafts, player malfeasance, licensing deals, and salary cap drama. Fantasy sports and online gambling - both of which I’ve managed to avoid - have taken over, warping the fan experience for someone like me. These days, the games almost feel like an afterthought.
My evolution from sports fan to sports cynic keeps coming to mind as I try to keep up with the pace of AI news. These past few weeks crystallize the problem.
The Good, the Bad, and the Ugly
Last Thursday, Anthropic released Skills for Claude - an exciting and useful new tool that lets users create reusable instructions for complex tasks. Instead of investing time in ever more detailed prompts, you can now teach Claude a workflow in plain English. Claude not only remembers the Skill and applies it automatically when needed; the Skill itself lives in a portable markdown file that can be moved among different LLMs. As one reviewer put it, Skills “shrink the gap between an idea and a working agent,” making automation something anyone can build, not just engineers.
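To make that concrete, here is a minimal sketch of what a Skill can look like: a single markdown file (Anthropic’s documentation calls it SKILL.md) with a short metadata header followed by instructions written the way you’d brief a colleague. The field names reflect my reading of Anthropic’s docs, and the meeting-minutes workflow below is invented for illustration, not an official template.

```markdown
---
name: meeting-minutes
description: Turns raw meeting notes into structured minutes with decisions and action items.
---

# Meeting Minutes

When the user pastes raw meeting notes:

1. Summarize the discussion in three to five bullet points.
2. List every decision that was made, each on its own line.
3. Pull out action items as a table with columns for owner, task, and due date.
4. Flag anything that reads like an unresolved disagreement so it can be revisited.
```

Because it is just a text file, the same instructions can be dropped into another model’s custom-instruction or project feature with little editing - which is what makes the “portable” claim more than marketing.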
Anthropic’s quiet launch of Skills landed in an AI news cycle still reeling from OpenAI’s powerful new video generator, Sora 2, which lost control of its own narrative within days of release a few weeks earlier. With insufficient guardrails in place, users quickly flooded the internet with deepfakes of dead celebrities, cartoon parodies, and every kind of ethically questionable content you’d expect from an unmoderated generative model.
The scarier takeaway is that OpenAI almost certainly knew this would happen and shipped it anyway. Viral memes mattered more to the company than bad headlines or legal liability.
Then, on October 14th, Sam Altman announced OpenAI’s plans to release an adult chatbot - a decision as tone-deaf and misaligned with current concerns about teen mental health and tech addiction as one could imagine.
Meanwhile, Perplexity - a chatbot many educators have seen as a smarter, more trustworthy alternative to ChatGPT - is actively encouraging cheating in the marketing for its new agentic browser. (Credit to Marc Watkins for documenting this so clearly.)
For Perplexity to tolerate - or actually promote - videos encouraging academic dishonesty forces us to wonder what the end game is for AI companies when it comes to education. They know a huge number of their users are students, many of whom are already using AI to bypass coursework. Stories like this, and even the introduction of features like Study Mode, suggest these companies may aim to undermine the current model of higher education entirely.
These AI headlines may have sailed past a country fixated on other news stories, but together they capture the contradictions and confusion of the current AI moment. Claude’s Skills, OpenAI’s Sora 2, and Perplexity’s Comet each represent a further step forward in AI capability - and a fresh set of nightmares for teachers - yet only one of the three companies escaped its rollout without controversy.
Why This Won’t Slow Down
Anyone who works with children (or is just interested in the AI conversation) is rightfully outraged by the behavior of OpenAI and Perplexity. But what’s even more revealing is how little public resistance there is from people in power. Where are the elected officials, regulators, or prominent public intellectuals calling this out? Other than a handful of Substack writers inside a few education circles, the silence is deafening.
The reality is that fewer than half a dozen companies with deep pockets are burning through billions while racing for the existential jackpot of AGI and the trillion-dollar market share that comes with it. They will not police themselves. The incentives don’t allow for caution, reflection, or responsibility. Everything we’ve been warned about - from copyright violations to the risks of sycophancy to educational disruption and worse - is being brushed aside in favor of speed and scale.
These recent headlines evoke images of a demolition derby to see who can do the most damage the fastest.
This demolition derby isn’t through. And it’s creating an impossible situation for anyone trying to make sense of where AI is headed.
Making Sense of the Chaos
The news over the past few weeks underscores how hard it has become not just to stay on top of new product releases, but even to follow the narrative. As the tools get more powerful, the companies building them become harder to trust by the day.
How are we supposed to have an intelligent conversation about AI when every moment of progress is overshadowed by the horrific judgment of its creators?
We’re in Two Universes
Depending on which corner of the internet you inhabit, any given AI news week sounds completely different.
If you read tech blogs and business Substacks:
“Claude Skills is a game changer! Agentic AI is finally here. Time to rethink everything.”
If you read education Substacks:
“OpenAI is building adult bots while kids cheat their way through school using Perplexity’s new AI browser. Burn it all down.”
Both of these takes are real. Both happened over the past few weeks.
Like the rest of the country, we are no longer debating two competing ideologies, but living entirely separate realities, each unfolding at the same time.
Meanwhile, In the Classroom
I’m just a classroom teacher.
I’ve seen students build entire websites over a weekend using AI models. I’ve spent a lot of time working with these tools - ChatGPT, Claude, NotebookLM - and much of what they can do is legitimately impressive. These platforms have helped me research, plan, create, and teach in ways that would’ve been impossible three years ago.
Students are using Deep Research tools to unearth sources in minutes that in past years would have taken days. As long as we guide and help them understand how to use, compare, and vet those sources, AI can be a net benefit.
But as student AI use has skyrocketed, I’m also seeing educators overwhelmed and increasingly disillusioned. I’m watching some teachers either throw up their hands or, more often, retreat into business as usual, because no one seems to be steering this ship.
Even AI Insiders Can’t Agree
Regardless of where we’re headed, we’ve still got a bigger problem: even among the AI elite, there’s no shared language for what’s actually happening or what might happen next.
Can’t We All Just Get Along?
Some within the AI industry recognize the public needs to be heard. Jack Clark, a co-founder of Anthropic, recently gave a talk where he argued:
Right now, I feel that our best shot at getting this right is to go and tell far more people beyond these venues what we’re worried about. And then ask them how they feel, listen, and compose some policy solution out of it.
Clark’s instinct is right. We do need broader input.
But educators have been speaking out for two years about concerns ranging from academic integrity to mental health. Despite clear public unease with AI’s pace, the feedback loop seems broken. Companies aren’t listening, or if they are, they’re not changing course.
If the industry won’t listen to reasonable concerns, some voices are calling for far stronger action.
An AI Apocalypse?
At one extreme of the spectrum is Eliezer Yudkowsky, who recently went on Ezra Klein’s podcast to discuss his new book, If Anyone Builds It, Everyone Dies. The title is not hyperbole but a literal description of what he believes will happen if AI continues to advance without extreme intervention.
He’s not some anonymous crank; he’s one of the earliest voices to warn about AI risk. He’s also nearly impossible to follow. His logic is dense, his worldview alien, and his conclusions difficult to engage with in any productive way - especially for a broader public less attuned to the finer points of the AI debate.
If You Build It, They Will Come …
Then there’s Yann LeCun, chief AI scientist at Meta and perhaps the most influential AI accelerationist on the planet. He vehemently disagrees with Yudkowsky and frequently positions himself as the voice of rationality in a sea of hysteria, calling the idea that AI poses an existential threat to the human race ‘preposterous.’ For teachers watching companies market cheating tools to students, LeCun’s faith that ‘benefits will outweigh risks’ feels dangerously abstract.
LeCun is a true believer - not just in the tech, but in the human judgment behind it. He insists that open research is essential and that too much caution will stifle innovation.
What’s the Average Teacher Supposed to Think?
All three figures claim to care about AI safety, but their definitions don’t align. Moreover, none of their frameworks help educators make decisions about whether to allow ChatGPT in their classrooms.
What they mean by “safety” - and how urgently they think it’s needed - varies wildly, and none of it seems to match the actual news coming out of the companies with each new pronouncement. Judged by their actions, profit, market share, and power are the priorities.
For educators and citizens trying to make sense of what’s coming, these conflicting signals, refracted through legacy media outlets with varying degrees of accuracy, are mystifying. You have one camp urging calm deliberation, another sounding the extinction alarm, and a third insisting the “alarm” is actually the real danger.
Beyond the Binary: What A Third Conversation Requires
The test of a first-rate intelligence is the ability to hold two opposed ideas in the mind at the same time, and still retain the ability to function.
F. Scott Fitzgerald
One of the core problems is that very few people are willing to acknowledge, let alone grapple with, both AI realities at once.
Enthusiasts like LeCun, who are generally the most experienced, hands-on users, tend to keep their heads down and focus on what’s cool, new, and productive. Most of these voices come from the tech world. They’re deep into prompting, workflows, and model comparisons. They’d rather not talk about hallucinations, copyright violations, environmental damage, mental health consequences, or the overall corporate agenda. It’s inconvenient. It kills the vibe.
But avoiding those issues doesn’t make the problems go away. It just means that glowing descriptions of each new AI tool have to pretend these improvements aren’t happening in the shadow of major corporate dysfunction.
Meanwhile, AI’s critics rarely acknowledge that the tech is doing things it absolutely couldn’t do three years ago. Their pieces often read like confident takedowns of AI as a con, full of righteous certainty and billionaire-bashing, but with little real engagement with what these tools can do, where they are headed, and how deeply they are already embedded in society at large.
It’s becoming almost impossible for most people to engage in one AI conversation without rejecting the other.
Is a Third Conversation Possible?
Maybe. But only if both sides are willing to understand where the other is coming from.
The skeptics need to stop treating anyone who finds value in AI as naïve or complicit. Curiosity is not blanket acceptance. Most people using these tools are trying to do their jobs better, understand the stakes, and learn about the world the next generation is heading into.
At the same time, the AI enthusiasts need to be clear-eyed about who’s building this future. A small number of billionaires are shaping the technological infrastructure of daily life with little transparency, oversight, or accountability. To ignore that, or to brush off public concern as “fear-mongering,” is dishonest. Enthusiasts need to recognize and understand that choosing to “opt out” of AI is more than just an ethical stance. It’s a principled and understandable reaction to the current moment.
Both things are true: AI is powerful and useful, and the people building it have done so with breathtaking irresponsibility. Reasonable people can make reasonably different choices around AI.
What we need now is a third conversation grounded in intellectual humility. The truth is that we are only at the beginning of the messy realities of a transformative technology going through rapid changes.
Schools Can’t Afford to Stay Stuck
To date, the companies building these tools have shown very little regard for the concerns of teachers. Since 2022, they’ve destabilized schools with no plan for how to rebuild. Too many administrators have turned a blind eye or passively adopted the latest edtech pitch, skipping over the much harder first-principles thinking that any serious pedagogical shift demands.
Turning curricular decision making over to companies is a bad bet. Only schools can decide what kind of learning they value and what kind of citizens they hope to shape, and only then ask whether AI has any business being part of that.
What Schools Should Be Asking
Schools need to make more informed choices.
The “is AI inevitable?” debate in education is the wrong one. AI is already here. It’s in the ether - through student use - and in the infrastructure, through tools many schools have already adopted.
Instead of asking:
“Which AI platform should we pay for?”
We should be asking:
“Will AI help or hinder what we want students to learn and be able to do?”
Instead of debating whether to block or ban tools like ChatGPT, we need to address:
“What kind of thinking do we want to cultivate, and how do we support it in a world where AI will be everywhere?”
Instead of trying to AI-proof the same assignments, we should be thinking about what new forms of thinking and creation we might ask from students.
AI is not one thing. It’s a fast-evolving cluster of capabilities, each with different implications for different disciplines, grade levels, and skills. Some can help with learning. Some undermine it. Some offer new creative possibilities. Others reinforce bias or cut corners.
Treating all of that as a single debate is lazy. And making decisions before we’ve clarified what we actually value is dangerous. We need to ask hard questions about what we want before deciding whether AI supports those goals. And that takes time we are slowly running out of.
Lean Into the Discomfort of Being Wrong
Entering a third conversation means being willing to be wrong. Being open to what others are seeing that you might be missing. Asking whether your opinions are built on evidence or just the version of the future you hope is true.
A year ago, I was more optimistic that AI could help teachers do their jobs better and improve learning outcomes for students. I still believe that it’s possible long term. But not with the same conviction I had in 2024.
And it’s not just because of the tech companies. Too many institutions are still bouncing between panic and denial. The paralysis, while understandable in 2023, is becoming inexcusable as we approach 2026.
AI in classrooms may not be inevitable but it isn’t going away and it can’t be wished into oblivion. Students are already marching by us.
Maybe I’ll be wrong, but it won’t be because I wasn’t willing to examine the questions with eyes wide open and a mind willing to be changed.