Three AI "Truths" I Can’t Ignore
How Silicon Valley's Oligarchy, Advanced AI Models, and Vulnerabilities in American High Schools Are Reshaping Our Future
As someone who consumes a lot of AI news, I've found three fundamental truths becoming increasingly clear: first, a handful of tech companies are dictating our AI future with minimal oversight; second, despite the skepticism, current AI models are already powerful enough to cause major disruption whether or not we achieve AGI anytime soon; and third, American high schools are unprepared for challenges that are already here, challenges that go far beyond concerns about cheating. I want to examine each of these claims in turn, if only to help me clarify how to navigate AI reporting going forward. Perhaps they are obvious, but I welcome objections to these assumptions. I consider them first principles of our present moment.
One of the most confounding issues I face when I read about AI is how to sort the news into different storylines. There is the constant barrage of hyped clickbait headlines - the latest models, the coolest new features, and boundless optimism about AI productivity, workflows, and innovation. Within the Substack community there is far more nuanced skepticism, not just about the tech itself but about its implications for humanity and learning, often tinged with a healthy dose of corporate disdain and distrust. And then there is the mainstream media, which typically focuses on the lowest-hanging fruit, as we've seen with recent stories about AI cheating.
I found myself thinking about each of these threads while listening to the absolutely fascinating interview between Ross Douthat and Daniel Kokotajlo1 on the podcast Interesting Times.
An Interview with the Herald of the Apocalypse
As I listened to and reflected on this interview, the three themes mentioned above began to take shape. To state them more clearly:
We are at the mercy of a handful of tech companies and CEOs whose vision and goals for the future are vastly different from those of the majority of the world;
AGI and SuperAGI are shiny and distracting concepts. Powerful AI systems are already here, they are only likely to get better, and they are going to have a massive impact on society.
At the high school level, current conversations around large language models, cheating, and writing are, while important, largely missing the point. Significant pedagogical assumptions about learning, assessment, grading, and knowledge production will need to be reconfigured.
Unelected AI Dictators
The Douthat-Kokotajlo interview reveals something rarely discussed so openly: the power dynamics at the highest levels of AI development. Consider this exchange:
Douthat: Just to go back to the idea of the person who’s at the top of one of these companies being in this unique world-historical position to basically be the person who controls superintelligence — or thinks they control it, at least: You used to work at OpenAI, which is a company on the cutting edge, obviously, of artificial intelligence … And you quit because you lost confidence that the company would behave responsibly in a scenario, I assume, like the one in “AI 2027.”
Kokotajlo: That’s right.
Douthat: So from your perspective, what do the people who are pushing us fastest into this race expect at the end of it? Are they hoping for a best-case scenario? Are they imagining themselves engaged in a once-in-a-millennium power game that ends with them as world dictator? What do you think is the psychology of the leadership of A.I. research right now?
Kokotajlo: Well, um. [Breathes deeply.]
Douthat: Be honest.
Kokotajlo: It’s — [laughs] it’s — you know, caveat, caveat. I can’t ——
Douthat: We’re not talking about any single individual here. You’re making a generalization.
Kokotajlo: Yeah, yeah. Caveat, caveat. It’s hard to tell what they really think because you shouldn’t take their words at face value.
Douthat: Much, much like a superintelligent A.I.
Kokotajlo: Sure. But in terms of — I can at least say that the sorts of things that we’ve just been talking about have been discussed internally at the highest level of these companies for years.
For example, according to some of the emails that surfaced in the recent court cases with OpenAI, Ilya, Sam, Greg and Elon were all arguing about who gets to control the company. And at least the claim was that they founded the company because they didn’t want there to be an A.G.I. dictatorship under Demis Hassabis, who was the leader of DeepMind. So they’ve been discussing this whole dictatorship possibility for a decade or so at least.
(NY Times, An Interview with the Herald of the Apocalypse) (emphasis added).
This revelation is remarkably telling: years before ChatGPT became a household name, OpenAI's founders were already engaged in power struggles over who would control what they believed would become the most powerful technology in human history. They weren't just building a company; they were positioning themselves to control what they saw as inevitable AGI development - primarily to prevent rivals like DeepMind from getting there first.
The implications are profound and infuriating: a handful of tech executives, not elected officials or public institutions, are making decisions that will shape humanity's technological future.
Karen Hao's recent Atlantic piece, adapted from her forthcoming book Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI, further exposes these power dynamics. Her reporting on the November 2023 boardroom coup, when Altman was briefly ousted from OpenAI, reveals how internal tensions over AI safety protocols and the pace of development had been building for years:
But by the middle of 2023 - around the time he began speaking more regularly about the idea of a bunker—Sutskever [Ilya who is referred to in the above quote from the Times interview] was no longer just preoccupied by the possible cataclysmic shifts of AGI and superintelligence, according to sources familiar with his thinking. He was consumed by another anxiety: the erosion of his faith that OpenAI could even keep up its technical advancements to reach AGI, or bear that responsibility with Altman as its leader. Sutskever felt Altman’s pattern of behavior was undermining the two pillars of OpenAI’s mission, the sources said: It was slowing down research progress and eroding any chance at making sound AI-safety decisions.
OpenAI's evolution from a non-profit to a more typical corporation has coincided with what appears to be diminishing concern for safety protocols. Just a few weeks ago, its botched release of an overly sycophantic ChatGPT model demonstrated a prioritization of speed over testing. This "move fast and break things" approach, which has come under increasing criticism industry-wide, especially in the wake of social media's negative effects, seems especially reckless for what OpenAI itself considers potentially world-altering technology.
Simultaneously, effective government oversight appears increasingly unlikely. House Republicans recently attached a rider to their signature tax bill that would ban states and local governments from regulating AI for a decade. This is a clear signal that even as advances in powerful AI accelerate, meaningful regulation will not be prioritized by the current or future administrations.
Powerful AI Is Already Here: Beyond the AGI Distraction
While media coverage obsesses over the "Race to AGI," with endless debates about whether and when we might achieve human-level artificial intelligence (as in yesterday's Times article "Why We're Unlikely to Get Artificial General Intelligence Anytime Soon"), this framing neglects the fact that existing AI systems are already transforming our world.
Notably, while Cade Metz makes sound arguments about AGI's difficulties, he doesn't address Kokotajlo’s central claim that breakthroughs in AI's coding abilities could create an accelerating feedback loop that dramatically speeds development.
This brings me to the concept of "competency deniers" - a term I first encountered in Watkins and Monroe's (2025) work describing those who dismiss AI capabilities based on cherry-picked examples of poor performance.2
Substack is filled with pieces attacking AI at the competency level, focusing on hallucinations or awkward writing, with some commenters still declaring the technology fundamentally useless, dismissing it as a mere "word prediction generator," or comparing it to the calculator.
This position is increasingly difficult to maintain. ChatGPT alone now serves 500 million weekly users - a scale that suggests substantial practical utility, even accounting for students using it to complete assignments or marketers generating "AI slop." And that doesn't include the dozens of other AI platforms used by millions on a daily basis.
More importantly, critics who focus solely on written output miss AI's most powerful emerging capabilities beyond text generation. Consider tools like NotebookLM, which can synthesize hundreds of academic papers to extract insights and connections. Or Deep Research, which scours the web to produce comprehensive reports on specialized topics. As a non-programmer, I've used Claude to generate code that reorganized my entire Google Drive filing system through simple natural language prompts. The newest agentic AI systems like Manus can handle complex sequences of tasks across multiple domains.
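To give a sense of what that kind of generated code can look like, here is a minimal sketch in Python of the sort of file-reorganization script an assistant might produce from a plain-English prompt like "sort everything in my synced Drive folder into subfolders by file type." The folder path, category names, and dry-run behavior are illustrative assumptions, not the actual script Claude wrote for me:

import shutil
from pathlib import Path

# Assumed location of a locally synced Drive folder (e.g., via Google Drive for Desktop).
ROOT = Path.home() / "Google Drive" / "My Drive"

# Hypothetical mapping of file extensions to destination subfolders.
CATEGORIES = {
    ".pdf": "Readings",
    ".docx": "Documents",
    ".xlsx": "Spreadsheets",
    ".csv": "Spreadsheets",
    ".pptx": "Slides",
    ".jpg": "Images",
    ".png": "Images",
}

def reorganize(root: Path, dry_run: bool = True) -> None:
    """Move files in `root` into category subfolders; print the plan when dry_run is True."""
    for item in list(root.iterdir()):  # materialize the listing before moving anything
        if not item.is_file():
            continue  # leave existing folders alone
        folder = CATEGORIES.get(item.suffix.lower(), "Misc")
        destination = root / folder / item.name
        if dry_run:
            print(f"Would move {item.name} -> {folder}/")
        else:
            (root / folder).mkdir(exist_ok=True)
            shutil.move(str(item), str(destination))

if __name__ == "__main__":
    reorganize(ROOT, dry_run=True)  # review the printed plan before setting dry_run=False

The point is less the script itself than the workflow: describing a goal in ordinary language and getting back something that previously would have required hiring (or becoming) a programmer.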
These abilities represent the real trajectory of AI development - systems that augment human cognitive strengths across multiple modalities: text, image, video, code, sound, and web hosting. Those fixated on critiquing today's models will find themselves continually moving the goalposts as those limitations disappear and more and more AIs engage in reasoning, planning, and the autonomous execution of tasks.
Indeed, Eric Schmidt makes this point emphatically in his recent TED Talk.
I want to be clear. Acknowledging AI's capabilities doesn't mean dismissing legitimate concerns about its impact on jobs, skills, and human purpose. Criticizing the billionaires driving AI development and their largely unchecked power remains essential.
But we must separate critiques of corporate power from assessments of technological capability. The reality is that current AI systems are already incredibly powerful, and their capabilities are improving at a pace that few see slowing down anytime soon.
The issue currently before us is not whether AI is any good. The question we have to confront is how we are going to respond as a society to what increasingly looks like rapid AI acceleration and disruption across virtually every industry. This truth leads me to my third point about education.
Education’s Closing Window of Opportunity
A recent investigation by 404 Media, based on extensive public records requests across multiple school districts, revealed the extent to which American high schools were blindsided by ChatGPT. Their reporting showed administrators scrambling to understand and respond to a technology that had already become commonplace among their students. Schools were bombarded with notifications about ChatGPT and reacted with varying degrees of confusion, curiosity, and concern. Districts had no coherent policies until well after the technology was in widespread use, creating a patchwork of reactive measures. Most still don't have thoughtful strategies for handling AI.
The 2024-2025 academic year may ultimately be remembered as the one when high schools finally realized they could no longer avoid the AI conversation. Whether driven by an uptick in student adoption, faculty demands, or simply the mounting pressure of AI news coverage, secondary schools that previously remained on the sidelines now find themselves scrambling to adapt. The consequences of this delayed reaction are already apparent: student facility with AI is rapidly outpacing faculty understanding, creating an asymmetry that threatens the ability of schools to catch up.
Megan McArdle addressed this perception gap a few weeks ago:
… What are we going to do about AI?
When I ask people this question, the most common response is a blank stare or a shrug. Oh, tech people understand what’s coming — in fact, they understand it’s already here, displacing early-career programmers. CEOs are studying how artificial intelligence might help reduce expensive head counts. Professors are aware that students are using it to cheat. Journalists have a natural wariness of any entity that produces faster, cleaner copy than they can.
Outside of those professions, I keep being surprised by the number of successful people who tell me they think AI won’t matter for their industry. Usually, they played around with it a while ago and weren’t particularly impressed, so they stopped using it and don’t know how much it has improved. This leads them to assume it won’t be a threat — at least not on any time frame that will affect them personally.
"If you haven't been worrying about AI, it's time to start preparing," The Washington Post, May 6, 2025.
Even with this wake-up call, there is little appetite on the part of an exhausted, overwhelmed, and stressed-out teaching profession to learn yet another skill. Some reject the notion that they even need to. Others want to ban it from their classrooms. While this mindset - that AI might eventually matter but isn't urgent - has dominated my conversations with educators, it is slowly starting to thaw.
McArdle goes on to observe:
What that group fails to appreciate is that the AI development cycle is faster than any technology we’ve ever seen. Saying you played around with it a year ago and weren’t impressed is like judging this year’s Tesla models based on having studied a Ford Model T. As Ethan Mollick, a professor at the University of Pennsylvania’s Wharton School, noted on X the other day, even if AI development plateaued at the level of the current models, “we would have a decade of major changes across entire professions & industries (medicine, law, education, coding …) as we figure out how to actually use it. AI disruption is baked in.”
No one I’ve spoken to in the industry seems to think AI will plateau where it is now.
Skeptics will find contradictory opinions, of course. But for anyone closely following AI advancements since late 2022, the trajectory is unmistakable: capabilities have advanced dramatically and, barring some unforeseen technical obstacle (which Kokotajlo openly acknowledges could happen), will continue to do so at what is likely to be an unprecedented pace over the next three to five years.
Five years - a typical school district or corporate planning cycle - is an eternity in AI development. The combination of messianic tech pronouncements, minimal regulation, astronomical investment levels, and geopolitical competition with China has created the conditions for an AI "Manhattan Project." Whether or not this results in true AGI (which is somewhat beside the point), extremely powerful AI will reshape society regardless of whether schools and other institutions are ready.
With respect to schools, the track record on nimble and flexible change isn’t great.
The RAIL (Responsible AI in Learning) framework is an accreditation-like system endorsed by bodies like the Middle States Association that provides K-12 institutions with guidance for ethically implementing AI. During my experience with their training, I was surprised by how RAIL immediately challenged the common "integration" narrative that dominates most AI education discussions.
Instead of asking how to integrate AI into existing models, RAIL proposes that educators fundamentally reconsider what constitutes powerful learning. They maintain that the proper stance in the age of AI is to “reimagine” the purpose of education itself and reject the language of “integration.”
“AI will accelerate, automate, and scale traditional, broken, methods of instruction.”
Dr. Philippa Hardman, University of Cambridge
This perspective - that if AI is simply integrated into current systems it might amplify rather than fix fundamental educational flaws - is mostly ignored by the few teachers who are even thinking about AI use in schools. Almost all the educational AI tools marketed to secondary schools emphasize worksheet and test creation prompts, feedback on student writing, and traditional lesson planning guides. RAIL’s recommendation for a true overhaul in pedagogical thinking is simply too disruptive to existing professional identities and institutions.
Schools' historical track record of reconsidering their structure, organization, and purpose is dismal, to say the least. I'm sure others can point to examples where schools have evolved in some areas or experimented with new trends and methods over time, but as someone who has been teaching for 30 years, I can attest that the traditional model of a teacher in a classroom leading a group of students is still the norm.
With respect to technology, while the internet, laptops, Learning Management Systems, and other advanced delivery methods have altered the landscape, the typical process - teachers assign work, students complete it, and the instructor evaluates it - has been around for more than a century.
Unlike colleges and universities where academic departments might have some autonomy to experiment with new approaches, high schools face additional constraints from standardized testing requirements, strict curricular mandates and budgetary considerations, as well as heightened parental scrutiny. These factors make the AI adaptation challenge particularly acute at the secondary level.3
Banning the technology from classrooms outright, as some advocate, and hoping it somehow goes away are both forms of wishful thinking bordering on denial.
Given the institutional inertia against pedagogical “reimagining,” high school teachers face an existential question: Is the traditional educational model fundamentally incompatible with generative AI? Or can we somehow find an equilibrium where AI use is effectively absorbed into existing systems? Answering this question is likely to be the central problem faced by high schools over the next 10 years.
Based on what I’ve seen since early 2023, I am skeptical whether schools can answer it effectively.4
The Three Truths
As I reflect on these three "truths" - corporate-driven AI development at odds with societal values, the power of current AI systems, and secondary education's woeful unpreparedness - I'm struck by how each reinforces the others. The utopian visions of billionaire tech leaders like Sam Altman and Elon Musk drive rapid AI progress, while educational institutions struggle to keep pace with increasingly advanced tools they didn't ask for, didn't see coming, and that are already undermining how students think and work.
When Kokotajlo speaks of potential superintelligence by 2027, it's easy to dismiss his claims as alarmist hype. Yet his track record of accurate predictions gives me pause. It's also important to realize that he hopes he is wrong. He left OpenAI precisely because he was concerned about what it was doing. His project is a warning, not a blueprint. He has no incentive to exaggerate.
But whether or not his timeline proves accurate is really not the point. Generative AI tools are improving monthly, guided primarily by commercial interests rather than public good, while our schools remain mostly unequipped for rapid change and flexibility.
My experience in the classroom this year confirms that students are already living in a different technological reality than many of their teachers recognize. They intuitively understand AI's potential in ways that educational policy hasn't begun to address. The gap between institutional understanding and technological reality widens with each passing week. Perhaps the summer will give some schools a chance to pause and breathe and play catch up, but, again, the task ahead is daunting.
What's most concerning isn't whether AGI arrives in two years or twenty – it's that we're making so little progress in addressing how AI has already reshaped education and society right now that it’s hard to imagine what happens if something even close to AGI actually comes to fruition. The three truths I've outlined aren't predictions about a distant future but observations about our present moment.
As I wind down the 2025 school year, my hope isn't for definitive answers. It's for more pointed questions and measurable, direct action. Rather than debating whether AI is "good" or "bad" for education, we need to figure out how to preserve meaningful learning in a world where cognitive tasks will be increasingly automated and students will continue to take shortcuts unless we offer them an alternative. Rather than fixating on AGI timelines, we should ask who benefits most from the current AI development schedule and how to ensure broader representation in those decisions. Right now, the relentless pursuit of AGI by the oligarchs is driving the entire AI agenda, timeline, and discussion. That needs to change.
The window for thoughtful adaptation in schools may be closing, but it hasn't shut yet. That opening is where my focus remains as both an educator and observer of this unique technological moment.
I wrote about AI 2027 last month - Kokotajlo is one of its main authors. In the interview he is serious, deliberate, articulate, and thoughtful, which gives the written responses a little more heft. It's worth a listen even if you think it's overwrought hype.
Watkins, M., & Monroe. (2025). Introduction: Pedagogical Crossroads: Higher Education in the Age of Generative AI. Generative AI’s Impact on Education: Writing, Collaboration, & Critical Assessment.
The relevant passage: “As scholarly debates rage about GenAI, we are unconvinced by one particular camp: the AI competency deniers. Meta’s Yann LeCun argues, “we’re never going to reach anything close to human-level intelligence by just training on text. It’s just not going to happen” (2024). Other naysayers focus on hallucinations and biases as evidence that GenAI is fundamentally flawed. Gary Marcus writes, “We have no concrete reason, other than sheer technoptimism, for thinking…any given [GenAI] system will be honest, harmless, or helpful, rather than sycophantic, dishonest, toxic or biased” (2023). Indeed, while some of the most creative and invigorating writing about AI comes from AI skeptics, their predictions about impending limitations have thus far been proven wrong. Even if AI progress stops today, the current capabilities of these systems are quite extraordinary, consequential, and difficult to deny.”
Some districts are leaning in: How Miami Schools Are Leading 100,000 Students Into the A.I. Future. It will be interesting to follow what happens, especially without the corresponding RAIL framework.
I hope I'm wrong. I've seen one local school that has thoughtfully embraced AI in a unique way. While it's too early to determine the success of their experiment, their approach has been deliberately methodical rather than reactive. Meanwhile, most high schools remain in denial, either demonizing the technology or wasting resources on ineffective AI policies. Though I don't claim to have definitive answers, I've become increasingly convinced that outright bans are neither viable solutions nor practically enforceable.