I can't help but note that all of your examples of "typical" assertions about AI are from the skeptic or critic side of the equation. Here are some from the other side.
"AI will revolutionize education."
"This is the worst LLM you will ever use."
"AI won't take your job, but someone else using AI will."
"In 10 years AI will replace doctors and teachers."
That first one is Sal Khan (and Bill Gates). The last one is Bill Gates.
I assume these are worthless for the same reasons the skeptic predictions are, and yet I can't help but notice that these sorts of declarations are much more likely to be uncritically accepted, not just by AI enthusiasts but by the general public as well.
Notice that, in effect, the last Bill Gates quote makes the same claim as "AI will destroy white-collar jobs," and yet when Bill Gates says it this way, he's treated as a tech visionary whose view becomes the standard by which truth is judged.
One of the ways to help break through prediction fog is to evaluate the track records of the predictors. In 2011 Sal Khan declared that video (like his Khan Academy offerings) was going to "reinvent" education. He now says that tutor chatbots will "revolutionize" education. By your framework we should discount Khan's prediction of revolution because of its lack of precision, but I would add a dose of skepticism based on his track record.
Bill Gates also has proven to be the most wrong man in education over and over again. If we're looking for an authority on the effect of technology on education, I recommend Audrey Watters, who clearly and cogently pointed out why Sal Khan was wrong in 2011 and why he's extremely likely to be wrong today. https://2ndbreakfast.audreywatters.com/12-years-and-60-minutes-later/
I'm also going to take issue with the probability folks like Mary Meeker. Forecasting probabilities and announcing that you're updating them makes you look flexible and thoughtful, but it's really a shell game, and ultimately it isn't all that helpful when we consider what to do about this technology from a public policy perspective. We also see significant biases in how we treat predictive probabilities, depending on our cognitive frameworks.
The p(doom) score is an estimate of the probability that a superintelligent AI will kill us all. The CEO of Anthropic says his p(doom) is between 10 and 25 percent. Geoffrey Hinton, one of the godfathers of AI, puts it at 10 percent.
We can't deny the domain expertise of these folks. They're among the leading AI researchers and developers in the world. But we have to ask ourselves: if someone truly believed that what they were doing had a 10% chance of destroying humanity, why wouldn't they spend every minute of their life trying to stop that thing, rather than developing it?
A probability framework allows you to never be wrong, because as you get closer to the outcome you simply revise your probability. But saying something has a 35% chance of happening today, and then a 65% chance three years from now once we have more evidence, tells us nothing particularly useful at the moment of the 35% prediction. A weather forecast made a week out that says it isn't going to rain is "correct" at the time of the prediction, but then when it rains (something given an 85% probability on the day itself), we get to say both predictions were correct.
Or another example: the "likelihood of winning" meter that ESPN publishes online during a game. Duke had a 99% likelihood of winning their national semifinal game at one point. They lost. The 99% prediction is "correct" because it left that 1% room for error.
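(If it helps to see the bookkeeping, here's a minimal sketch in Python of how a standard proper scoring rule, the Brier score that forecasting tournaments like Tetlock's typically use, would grade those forecasts. The probabilities are just the ones from my examples; the 15% figure for the week-out rain forecast is my own stand-in for "it isn't going to rain.")

```python
# A minimal sketch: grading the probability forecasts above with the
# Brier score, (stated probability - outcome)^2, where the outcome is
# 1 if the event happened and 0 if it didn't. Lower is better; 0 is perfect.

def brier(prob: float, happened: bool) -> float:
    """Squared error between the stated probability and what actually occurred."""
    return (prob - (1.0 if happened else 0.0)) ** 2

forecasts = [
    # (label, stated probability of the event, did it happen?)
    ("week-out forecast of rain (assumed ~15%)", 0.15, True),   # it rained
    ("day-of forecast of rain (85%)",            0.85, True),   # it rained
    ("ESPN win meter, Duke at 99% to win",       0.99, False),  # Duke lost
]

for label, prob, happened in forecasts:
    print(f"{label}: Brier = {brier(prob, happened):.4f}")

# Prints roughly 0.72, 0.02, and 0.98 respectively (1.0 is the worst
# possible score). This is how a scoring rule does the bookkeeping,
# rather than calling each forecast simply "correct" or "incorrect".
```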
For AI and education, I think we should spend much less time thinking about the future and much more about the present. We know what kinds of experiences are meaningful to student learning today, right now. We should strive to provide students access to those experiences. Will some of them involve interacting with the latest technology? Of course!
But the idea that we have to jump in with both feet to secure student well-being in some as-yet-to-be realized AI future is not clear thinking.
Sorry for my logorrhea. You've given us much to consider.
All good points, especially about focusing on the present. Many of the predictions are worse than worthless because, as you point out with folks like Bill Gates, they're accepted uncritically. I think I was clear that all of these kinds of statements - from whichever side - should be treated skeptically. There are lots of blithe pronouncements on both sides. But if you read the Superforecasting book, the percentage-of-reliability exercise is a useful one. As a classroom teacher, I am very wary of the ways in which the debate is skewed by the loudest voices, which are not necessarily the most informed. That conversation will need constant updating. We're all trying to make sense of this moment. The doom predictions are a great example - why isn't the media taking those seriously? Regulation right now is a non-starter. All serious problems. Always appreciate your point of view. Lots of food for thought.
“What does "destroy" mean exactly? Devastate by how much, measured how, and compared to what baseline?”
Why must we always use hyperbole when trying to make a point? We would garner so much more credibility if we communicated facts instead of attempting to trigger emotions.
Thanks. I've read the Superforecasting book, and I have to say I came away decidedly meh about its ultimate applicability. As a thought experiment it's interesting. As to what I'm supposed to do with those probabilities, I don't know, and I think it's particularly unhelpful when it comes to education. My grade school teachers had no idea the internet was coming, and yet they left me well-prepared for a world in which I have to work and communicate using the internet, because of the foundational experiences they gave me in writing, thinking, and communicating with intention. You shared a link to a piece by a guy at Brookings this week that essentially argued for abandoning those fundamentals in favor of some new framework that embraces AI as inevitably integral to writing. This is pretty nuts, totally speculative, and it sort of frightens me that people with system-level influence are thinking this way. Of course, this is not new. (E.g., Bill Gates.)
I think the past is much more helpful for considering the current moment than any attempt to predict an inevitably uncertain future. I think we're seeing, essentially, that the constructivists were correct and that behaviorism is a dead end. Unfortunately, we have schooling systems that are almost entirely organized around behaviorist principles. This is the present we have to be discussing. If we help students learn how to create knowledge for themselves, adapting to whatever the technology becomes will be a much easier thing. If we simply train them to be AI technicians, then, should the capacities of AI develop as some predict, we're preparing them for their own future obsolescence.
I also should add that I've participated in annual forecasting competitions that have a mix of the serious (likelihood of a major air disaster) and the trivial (name five celebrities who will die), and they're fun, but they helped reveal a couple of strong biases around forecasting. One is that you will inevitably land well above average among the competitors if you simply predict a maintenance of the status quo. The competitions had formulas to award more points for correct "unlikely" events, but even with that, the status quo reigned in the rankings. For example, for deceased celebrities you're much better off simply picking five very old people, rather than betting that some famous person with an apparent substance abuse issue may OD. (And yet a huge portion of competitors would attempt this.) The people who won inevitably hit on some unlikely event, but it was also pretty clear that these hits were just as likely the result of a wild-ass guess as of any kind of deep research and reasoning.
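(For concreteness, here's a toy version of the kind of bonus-for-unlikely-events formula I mean. I don't remember the competitions' actual scoring, so treat this as a purely hypothetical sketch of why a status-quo slate still tends to come out ahead.)

```python
# Hypothetical scoring rule (not the real competition formula): a correct
# pick pays out in inverse proportion to how likely the field thought it was,
# so a correct long shot is worth far more than a correct safe pick.

def points(consensus_prob: float, happened: bool) -> float:
    """Award 1 / consensus_prob points for a hit, nothing for a miss."""
    return (1.0 / consensus_prob) if happened else 0.0

# A status-quo slate: five "safe" picks the crowd puts at 80% each,
# of which, say, four come true.
safe_slate = sum(points(0.80, hit) for hit in [True, True, True, True, False])

# A long-shot slate: five picks the crowd puts at 10% each. A single hit
# would be worth 10 points, but in a typical year there are none.
longshot_slate = sum(points(0.10, hit) for hit in [False] * 5)

print(f"status-quo slate: {safe_slate:.2f} points")    # 5.00
print(f"long-shot slate:  {longshot_slate:.2f} points")  # 0.00
```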
I'm not saying there's no utility in it, and considering future outcomes can be very helpful, but Tetlock's frame of percentages tends to be more misleading than illuminating, IMO.
Daniel Kokotajlo’s analysis lacks the causal rigor necessary for credible epistemic claims. His framing is speculative, not structural, and does not meet the threshold for intellectual seriousness. Gary Marcus, while often positioned as a skeptic, appears to operate with limited technical grounding in the systems he critiques. What both offer are editorial positions—op-eds masquerading as scientific evaluation—unsupported by formal modeling or grounded methodological insight.
Eventually, people will stop asking if AI wrote it.
They’ll start asking if it was useful, clear, worth their time.
Because mediocre is mediocre, whether by machine or human.
The real question is: did it make you think?