If you want an intriguing read going into next week, take a crack at Daniel Kokotajlo, Scott Alexander, Thomas Larsen, Eli Lifland, and Romeo Dean’s AI 2027 which I just read after discovering it through Kevin Roose’s new column in the NY Times.
Kokotajlo (a former researcher at OpenAI) and his colleagues at the AI Futures Project created a dystopian scenario culminating in the arrival of artificial superintelligence, which, in their view, could come as early as 2027.
In about 7,500 words, the fictional narrative is part sci-fi and part tech-babble, with the AI hype turned up to 11. The website sure is cool.
Their major argument is predicated on AI achieving the status of a "supercoder" sometime later this year; that supercoder then sets about building still better coders, tightening the feedback loop and producing ever more accelerated and powerful breakthroughs, until AI reaches superintelligence by 2027.
Or at least that’s the gist. I confess I could not follow all of it (it’s written in a story format with a lot of jargon and technical terminology throughout, including a fictional company and an AI arms race with China), but the key claim is that incredibly powerful AI is allegedly just two years away. The mere fact that such a document exists is a testament to where we are at this moment.
I don’t know what to do with these kinds of predictions. On the one hand, they are fun to read. But Daniel Kokotajlo et al. are dead serious and clearly want others to take them seriously as well. Roose appears to be a victim of his own hype, given that he published a column a few weeks ago pushing the narrative that AGI (Artificial General Intelligence) is much, much closer than we think.
I hesitate to make analogies and I’m sure there are many reasons why this one may not be applicable, but I am constantly reminded of the Y2K media frenzy leading up to 12:00am on January 1st, 2000. The New York Times, Wall Street Journal, and many major news magazines ran stories predicting that Y2K would lead to major disasters.
And, of course, we all know how that went.
But what if they’re right?
Or even close to right.
In 2021, a year before the debut of ChatGPT to the general public, Kokotajlo predicted this in a blog post:
2022
… The chatbots are fun to talk to but erratic and ultimately considered shallow by intellectuals. They aren’t particularly useful for anything super important, though there are a few applications. At any rate people are willing to pay for them since it’s fun.
…
The bureaucracies/apps available in 2022 aren’t really that useful yet, but lots of stuff seems to be on the horizon. Thanks to the multimodal pre-training and the fine-tuning, the models of 2022 make GPT-3 look like GPT-1. The hype is building.
2023
The multimodal transformers are now even bigger; the biggest are about half a trillion parameters, costing hundreds of millions of dollars to train, and a whole year, and sucking up a significant fraction of the chip output of NVIDIA etc.[4] It’s looking hard to scale up bigger than this, though of course many smart people are working on the problem.
The hype is insane now. Everyone is talking about how these things have common sense understanding (Or do they? Lots of bitter thinkpieces arguing the opposite) and how AI assistants and companions are just around the corner. It’s like self-driving cars and drone delivery all over again.
The AI risk community has shorter timelines now, with almost half thinking some sort of point-of-no-return will probably happen by 2030….
2024
We don’t see anything substantially bigger. Corps spend their money fine-tuning and distilling and playing around with their models, rather than training new or bigger ones. (So, the most compute spent on a single training run is something like 5x10^25 FLOPs.)
Some of the apps that didn’t work last year start working this year. But the hype begins to fade as the unrealistic expectations from 2022-2023 fail to materialize. We have chatbots that are fun to talk to, at least for a certain userbase, but that userbase is mostly captured already and so the growth rate has slowed.
Sound familiar? It’s remarkably prescient - Kokotajlo essentially outlined, in fairly accurate terms, the developments that occurred in 2022 and 2023 and the plateau that seemed to follow last summer.
AI Hype Returns in Force
For many educators, the 2024 plateau offered a brief moment to catch our breath and tentatively begin to adapt to the first wave of AI tools in the classroom. Over the past six months, though, the hype has returned with a vengeance, partly due to some legitimate new tools and abilities that have broken through to the mainstream (e.g., NotebookLM, Deep Research and reasoning models, the debut of DeepSeek, and OpenAI’s 4o image generation, to name a few) and partly due to the foregrounding of the conversation around AGI, no doubt fueled in part by the tech companies themselves to justify the enormous expenditures being made.
For those wanting a deeper dive, lesser-known (to the general public) but nevertheless influential publications like Dario Amodei's Machines of Loving Grace and Leopold Aschenbrenner's Situational Awareness offer valuable insights into what AI insiders are thinking. Both are must-reads for anyone following AI developments closely.2 But, like the AI 2027 project itself, both documents leave the lay reader unsure how seriously to take them.
Unlike Y2K, there is no set date on which the hype over AGI will be seen as overwrought. Clearly, if we reach 2030 and not much progress has been made in that direction, the AGI cheerleaders will be declared “wrong.” More likely, we will continue to make incremental progress, with AI getting better each year and experts arguing over exactly what superintelligence means. We will keep having conversations about what comes next while most people fail to keep up with what’s happening in the present.
Advanced Coding: The Dark Horse When It Comes to AGI?
But if Kokotajlo and his team at the AI Futures Project are right, then we had better prepare for a very different future in which human intelligence takes a back seat to advanced AI. As educators, we will continue to face fundamental questions about what we should be teaching and how.
One major insight I took away from AI 2027 was to not get bogged down in the quality of AI writing - which understandably dominates conversations among educators - and instead scan for articles covering developments in AI coding. My understanding of the piece is that once AI coding abilities surpass those of human programmers, that is the potential breakthrough needed to ramp up AI capabilities exponentially.3
The AI 2027 report clearly underscores this point. Throughout the narrative, the authors highlight how AI systems progressively become better at coding—from enhanced versions of tools like GitHub's Copilot to fully autonomous systems that can design and deploy complex software without human intervention. This creates a powerful feedback loop where AI systems build better versions of themselves at an accelerating pace. This progression in coding capabilities serves as the critical inflection point in their timeline, suggesting we should be watching coding advancement milestones, not consumer AI applications, for early warnings of superintelligence.
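To make the shape of that argument concrete, here is a deliberately crude, back-of-the-envelope sketch of my own (not the authors’ math): it simply compounds a hypothetical AI research rate by a fixed multiplier each year once AI matches human coders, and compares it to steady human-driven progress. Every number in it is invented purely for illustration.

```python
# Toy model of the feedback-loop argument (my illustration, not the authors' figures).
# Assumption: once AI matches human coders, each year's AI improves the next year's AI
# by a fixed multiplier. All values below are made up to show the compounding effect.

human_rate = 1.0          # hypothetical units of research progress per year (constant)
ai_rate = 1.0             # AI starts at human parity in the "supercoder" year
improvement_factor = 1.5  # assumed yearly gain from AI improving its own tools

human_total = ai_total = 0.0
for year in range(2027, 2033):
    human_total += human_rate
    ai_total += ai_rate
    print(f"{year}: human cumulative = {human_total:5.1f}, AI cumulative = {ai_total:5.1f}")
    ai_rate *= improvement_factor  # the feedback loop: better coders build better coders
```

Run for even a handful of years, the AI line quickly dwarfs the human one. Whether the real multiplier is anything like 1.5 - or whether such a loop exists at all - is exactly what the skeptics dispute.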
For schools and teachers, this suggests we should be thinking beyond just “How do we handle AI-written essays?” to “How do we prepare students for a world where coding itself might be primarily an AI domain?” - and what that might mean for everything else connected to AI. Continuing to emphasize irreplaceable traits like human judgment, critical thinking, ethical reasoning, and creative imagination may be the best foundation around which to design our assessments and curriculum.
One thing is for sure - we can’t pretend we weren’t warned.
His predictions continue through 2025 and 2026, but I included only these excerpts given how well they seemed to capture the trends of the past few years. Read the whole post if you’re interested. I don’t think I cherry-picked the most accurate passages: he did predict things that did not come to pass, and his 2025 and 2026 predictions in that post are eclipsed or fine-tuned by the AI 2027 report. My point is simply that he and his group have a track record of accurate predictions.
Of course, you can find enough AI experts who are equally skeptical that anything close to AI superintelligence or AGI is coming anytime soon, not to mention nitpicking about what those terms even mean. Just look at Gary Marcus’s Substack for a deeply informed AI expert not on the AGI train.
I have no idea if this is true or accurate, or even whether it’s possible. But I suspect that the mainstream media will not focus on this aspect of the story.