Discussion about this post

John Warner:

I can't help but note that all of your examples of "typical" assertions about AI are from the skeptic or critic side. Here are some from the other side of the equation.

"AI will revolutionize education."

"This is the worst LLM you will ever use."

"AI won't take your job, but someone else using AI will."

"In 10 years AI will replace doctors and teachers."

That first one is Sal Khan (and Bill Gates). The last one is Bill Gates.

I assume that, for the same reasons the skeptic predictions are worthless, these are as well, and yet I can't help but notice that these sorts of declarations are much more likely to be uncritically accepted, not just by AI enthusiasts but by the general public too.

Notice that, in effect, the last Bill Gates quote is the same claim as "AI will destroy white-collar jobs," and yet when Bill Gates says it this way, he's treated as a tech visionary whose view becomes the standard by which truth is judged.

One of the ways to break through the prediction fog is to evaluate the track records of the predictors. In 2011 Sal Khan declared that video (like his Khan Academy offerings) was going to "reinvent" education. He now says that tutor chatbots will "revolutionize" education. By your framework we should discount Khan's prediction of revolution because of its lack of precision, but I would add a dose of skepticism based on his track record.

Bill Gates has also proven to be the most wrong man in education, over and over again. If we're looking for an authority on the effect of technology on education, I recommend Audrey Watters, who clearly and cogently pointed out why Sal Khan was wrong in 2011 and why he's extremely likely to be wrong today. https://2ndbreakfast.audreywatters.com/12-years-and-60-minutes-later/

I'm also going to take issue with the probability folks like Mary Meeker. Forecasting probabilities and saying you're updating them makes you look flexible and thoughtful, but it's really a shell game, and ultimately it isn't all that helpful when we consider what to do about this technology from a public policy perspective. We also see significant biases in how we treat predictive probabilities, depending on different cognitive frameworks.

The p(doom) score is a probability estimate of the likelihood that a superintelligent AI will kill us all. The CEO of Anthropic says his p(doom) is between 10 and 25 percent. Geoffrey Hinton, one of the godfathers of AI, puts it at 10 percent.

We cannot deny the domain expertise of these folks; they're among the leading AI researchers and developers in the world. But we have to ask ourselves: if someone truly believed that what they are doing had a 10% chance of destroying humanity, why wouldn't they spend every minute of their life trying to stop that thing, rather than developing it?

A probability framework allows you to never be wrong, because as you get closer to the outcome you simply raise your probability. But saying today that something has a 35% chance of happening, and then three years from now, once we have more evidence, saying it has a 65% chance, tells us nothing particularly useful at the moment of the 35% prediction. A weather forecast for a week from now that says it isn't going to rain is "correct" at the time of the prediction, but then when it rains (something given an 85% probability on the day itself), we get to say both predictions were correct.

Or take another example: the "likelihood of winning" meter that ESPN publishes online during a game. Duke had a 99% likelihood of winning their national semifinal at one point. They lost. The 99% prediction is still "correct" because it left that 1% room for error.
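
To make that concrete, here's a minimal sketch in Python (using simulated games rather than real ESPN data, so the numbers are illustrative assumptions, not actual win-probability outputs) of why a single outcome can never falsify a probability, while a long track record can:

```python
import random

random.seed(0)

def simulate(win_prob: float, n_games: int) -> float:
    """Observed win rate over n_games when each win has probability win_prob."""
    wins = sum(random.random() < win_prob for _ in range(n_games))
    return wins / n_games

# One game at 99%: either outcome is "consistent" with the forecast,
# so a single loss can't falsify it -- the 1% simply happened.
print("one game:", "won" if random.random() < 0.99 else "lost (the 1% case)")

# Many games: the record converges toward the stated probability, so a
# forecaster who habitually says 99% but loses 10% of the time is exposed.
print("win rate over 10,000 games at a stated 99%:", simulate(0.99, 10_000))
```

On any single game the loss tells you nothing, but over thousands of forecasts a habitually overconfident forecaster is exposed, which is exactly why the track records I mentioned above matter more than any one prediction.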

For AI and education, I think we should spend much less time thinking about the future and much more about the present. We know what kinds of experiences are meaningful to student learning today, right now. We should strive to provide students access to those experiences. Will some of them involve interacting with the latest technology? Of course!

But the idea that we have to jump in with both feet to secure student well-being in some as-yet-to-be realized AI future is not clear thinking.

Sorry for my logorrhea. You've given us much to consider.

Bette A. Ludwig, PhD 🌱:

Eventually, people will stop asking if AI wrote it.

They’ll start asking if it was useful, clear, worth their time.

Because mediocre is mediocre, whether by machine or human.

The real question is: did it make you think?
