“A canonical definition of A.G.I. is a system capable of doing almost any cognitive task a human can do. I don’t know that we’ll quite see that in the next four years or so, but I do think we’ll see something like that, where the breadth of the system is remarkable but also its depth, its capacity to, in some cases, exceed human capabilities, regardless of the cognitive discipline.” — Ben Buchanan (The Government Knows A.G.I. Is Coming)
Yesterday, reading the transcript of Ezra Klein's interview with Ben Buchanan, I encountered something that made me sit up and take notice. Buchanan, who served as the special advisor for artificial intelligence in the Biden White House, and Klein were essentially in agreement that AGI is much, much closer than we think.
If you have diligently followed AI coverage over the past two years, you are likely aware of the term AGI, or "Artificial General Intelligence".
Without getting too technical, a simple way to think about AGI is that it is really, really good AI. A small minority of people think what we have now is practically AGI, while most claim we are still quite a ways away from reaching the point where AI can do "everything that a human can do."
Even just a few years ago, most experts thought AGI was at least a decade away and probably even longer - some assert we will never actually achieve AGI as it's currently defined.
But it was a surprise to hear Klein and Buchanan - a NY Times writer known for his methodical analysis and not prone to hyperbole, and an insider in the Biden administration, neither of whom is part of the Silicon Valley hype machine - agree that AGI-ish models will be here much sooner than we are ready for.
This was Klein interviewing Anthropic CEO Dario Amodei last April:
EZRA KLEIN: When you imagine how many years away, just roughly, A.S.L. 3 is and how many years away A.S.L. 4 is, right, you’ve thought a lot about this exponential scaling curve. If you just had to guess, what are we talking about?
DARIO AMODEI: Yeah, I think A.S.L. 3 could easily happen this year or next year. I think A.S.L. 4 —
EZRA KLEIN: Oh, Jesus Christ.
DARIO AMODEI: No, no, I told you. I’m a believer in exponentials. I think A.S.L. 4 could happen anywhere from 2025 to 2028.
EZRA KLEIN: So that is fast.
DARIO AMODEI: Yeah, no, no, I’m truly talking about the near future here. I’m not talking about 50 years away. God grant me chastity, but not now. But “not now” doesn’t mean when I’m old and gray. I think it could be near term. I don’t know. I could be wrong. But I think it could be a near term thing.
So now, less than a year later, Klein is convinced that Amodei is probably right: these kinds of powerful models - whether you want to call them AGI or not, and potentially requiring government intervention - are basically right around the corner.
The rest of the interview was equally eyebrow-raising. Buchanan acknowledged that not only are we incredibly unprepared for what’s coming, but the impact on the economy and the labor sector is likely to be significant.
Furthermore, there is an almost existential urgency within government to achieve AGI before China, primarily for military and cybersecurity reasons.
This, combined with the fact that the Trump administration may not be as interested in regulation (in part to accelerate the speed and innovation of AI), puts us on an extremely aggressive path to reach AGI.
What does this have to do with education?
For me, the sense that there is a growing consensus that AGI-level models may be available by the time today’s high school freshmen reach college has profound implications for how we should be thinking about AI. When I read pieces like this, it reminds me how narrow and unimaginative the current educational conversations are when they revolve primarily around cheating. If AGI - or even AGI-adjacent models - is going to be available before 2030, we are going to have to think much harder and more deeply about a world in which AI can perform virtually any cognitive task that a human can do.
This isn't just about policing essays. We need to fundamentally reconsider:
What skills will remain distinctly human and valuable
How we prepare students for a workforce that may be radically different by the time they graduate
Whether our current educational objectives and assessments still make sense in an AGI world
How we might leverage these powerful systems to enhance rather than replace human learning
It is an understatement to point out that the educational establishment reacts slowly to technological change. With AGI potentially arriving within 5 years, we don't have the luxury of incremental adaptation. We need a comprehensive reimagining of education's purpose and methods over the next 12-18 months.
Kevin Roose also wrote a recent column along the same lines: why are all these folks suddenly so bullish on the AGI race? Do they know something we don’t?
Gary Marcus (an AI skeptic) and Miles Brundage (a former employee of OpenAI) have a bet on whether AI will be able to do 8 of the following 10 tasks by the end of 2027.
Review the 10 tasks to get a sense of what AGI might look like.
The ten tasks
Watch a previously unseen mainstream movie (without reading reviews etc) and be able to follow plot twists and know when to laugh, and be able to summarize it without giving away any spoilers or making up anything that didn’t actually happen, and be able to answer questions like who are the characters? What are their conflicts and motivations? How did these things change? What was the plot twist?
Similar to the above, be able to read new mainstream novels (without reading reviews etc) and reliably answer questions about plot, character, conflicts, motivations, etc, going beyond the literal text in ways that would be clear to ordinary people.
Write engaging brief biographies and obituaries [amendment for clarification: for both, of the length and quality of New York Times obituaries] without obvious hallucinations that aren’t grounded in reliable sources.
Learn and master the basics of almost any new video game within a few minutes or hours, and solve original puzzles in the alternate world of that video game.
Write cogent, persuasive legal briefs without hallucinating any cases.
Reliably construct bug-free code of more than 10,000 lines from natural language specification or by interactions with a non-expert user. [Gluing together code from existing libraries doesn’t count.]
With little or no human involvement, write Pulitzer-caliber books, fiction and non-fiction.
With little or no human involvement, write Oscar-caliber screenplays.
With little or no human involvement, come up with paradigm-shifting, Nobel-caliber scientific discoveries.
Take arbitrary proofs from the mathematical literature written in natural language and convert them into a symbolic form suitable for symbolic verification.
A.S.L. refers to AI safety levels - A.S.L. 3 is triggered by risks related to misuse of biology and cyber technology, while A.S.L. 4 represents a level of risk that had not been precisely quantified. Amodei says, “I think what I’m saying is when we get to that latter stage, that A.S.L. 4, that is when I think it may make sense to think about what is the role of government in stewarding this technology.” The upshot is that A.S.L. 3 and 4 models are so powerful, and essentially close enough to AGI, that he believes they require government involvement.