12 Comments
Peyton Cabanillas:

Hi Mr. Fitz, this is Peyton. I really enjoyed reading this article because it brought up both sides of the argument when it comes to AI. While AI can be very helpful for finding sources or giving advice on grammar, I think the line between cheating and not cheating is crossed when AI is used for idea generation. I think of that as the hardest part of writing, but also the part that helps us students grow and challenge ourselves the most.

Aryan Bansal:

Hi Mr. Fitz! This is Aryan. I was really interested in the "gray areas" of AI that you mentioned in the article. I found it interesting because it made me think about how many students use AI today. Rarely do they use it to completely generate a piece of writing; instead, they use it to edit, clarify, or proofread. This makes the boundary between cheating and not cheating very blurry, so I am curious to see how teachers will address this in the future.

Sarah Sichel-Outcalt:

Hi Mr. Fitz! This is Sarah. I found your point about the percentage of AI use to be very interesting! Sadly, this is a difficult dilemma that will only become more confusing as technology improves.

Stephen Fitzpatrick:

Nice, Sarah! 2 points for you! Make sure you remind me!

Dr. Jeanne Beatrix Law:

I always appreciate your reasoned consensus-based approach to discussion. It resonates with me :)

Stephen Fitzpatrick:

Thanks. As I've said before, it's important to note other points of view, and Gary Marcus is certainly a strong voice who tries to hold those in the industry who are pushing the hype accountable. For those of us without a strong tech background, it's hard to know what to make of all this, especially when there are equally distinguished voices on the other side. Predictions here are really tricky, but the self-driving car example is a good one. I don't know if that ever gets solved to the point where people will trust it, even if it can be shown to be, on average, better than a human driver. It's clear we won't tolerate mistakes from machines that we would otherwise tolerate from humans.

Richard Bush:

Steve, I am grateful for the time and thought given to this essay. Thank you from a retired educator and teacher trainer.

Stephen Fitzpatrick:

Thanks, Richard. It's certainly an interesting time to be teaching!

Robert Litan:

Very well done. Everyone is groping for the right way to do this. But it is undeniable that AI is the future; it can't be ignored. And by the time your kids get to the job market, or well before, knowing how to get the best out of AI -- and most importantly, checking its output to catch hallucinations -- will be rewarded. Maybe that's part of the answer: getting students to find sources for AI statements that are not already sourced, as part of an effort to distinguish crap from real sources on the Net.

Stephen Fitzpatrick:

Thanks, Bob. The Deep Research models are already quite good at cutting down on hallucinations. The issue is less inaccuracy and more getting access to higher quality materials. Once these models get access to paywalled databases, the quality will shoot through the roof. And I don't think this is 10 or even 5 years out, but at most 2 or 3 years away. I've followed the space closely, and the speed of improvement is exponential. Of course, something could happen to slow everything down, but the current administration is totally uninterested in regulation. It can be hard to separate the hype from the skepticism, but even the more sober-minded folks are coming around to the scale of the improvements. Unfortunately (though for understandable reasons), the conversation in schools right now is mired at the level of cheating.