The Other Student AI Crisis
Mental Health, Not Homework, May Be the Issue that Forces Schools to Act

The media’s focus on AI has shifted from academic concerns to its impact on mental health. With chatbots becoming more persuasive, emotionally responsive, and widely used, these risks - especially for teenagers - are becoming harder to ignore. A recent episode of the NY Times’ Daily, “Trapped in a ChatGPT Spiral,” raises an increasingly salient question: What happens when people become so enmeshed with AI systems that they lose their grip on reality? Why are some users so susceptible to being sucked into an AI vortex, privileging digital advice over human connection? The loneliness epidemic and other social factors offer part of the explanation, but it’s also clear that certain features baked into the way LLMs function are at the heart of the problem. And this was entirely predictable, even before the release of ChatGPT in November 2022. If schools won’t act on AI because of cheating, tragedy may force their hand.
This Isn’t New - We Were Warned
Months before ChatGPT debuted to the public, in June of 2022, Google engineer Blake Lemoine went public with his belief that LaMDA, the chatbot he was working on at the time, was sentient. Google fired him the following month.
Less than a year later, in February 2023, Kevin Roose of the NY Times reported on his unsettling experience with Microsoft’s Bing chatbot, which professed its love for him and tried to persuade him to leave his marriage.
These examples, from several years ago, highlight how even experienced and tech-savvy professionals were shaken by their initial interactions with AI chatbots. Even more telling, both incidents involved significantly weaker models than what’s available today.
In other words, the researchers building these systems - and the journalists covering them - knew long before ChatGPT became mainstream just how persuasive, relentless, and disturbingly human-sounding AI was becoming. We’ve long since blown past the Turing test. It wasn’t hard for companies to foresee how vulnerable some users would be once these tools were unleashed, suddenly and without warning, on the public.
Is it any wonder that many kids struggle to distinguish between the advice of an AI chatbot and that of a real human?
From Search to Interpretation
Most of us grew up viewing computers as tools. Determining truth was never really an issue: whatever outputs we managed to elicit were neutral, unfiltered, and generally unbiased. Even in the early days of internet search, we were the ones deciding what was relevant and what wasn’t.
Search engines returned results, but didn’t attempt to interpret them. Our interactions with computers gave us information, not meaning. The results were algorithmically ranked, but the content itself was untouched. You had to click a link to determine what you were looking at and then evaluate it yourself.
Remarkably, we may already be feeling nostalgic for that experience.
Today, when we interact directly with a chatbot, all that friction is gone. Even on Google, AI answers are now returned almost entirely stripped of context. Source links may appear off to the side or as footnotes, but the confident tone of the response often discourages users from digging deeper. New habits of information reliance are forming, and new skills are now required to equip students for an AI-native search world.
Current elementary and middle schoolers will never remember the internet as it existed before Google’s AI overviews. And if they treat ChatGPT like a search engine, as most kids now do, their entire research experience may be confined within a chatbot’s interface.
Whether or not chatbots remain the dominant way we interact with AI, the reality right now is that millions of students, either through school-sanctioned tools or on their own, are learning to express themselves directly to LLMs, receiving authoritative, human-sounding responses in return.
To Anthropomorphize or Not?
Ethan Mollick argues in his book Co-Intelligence that everyone should treat AI as if it were human, while constantly reminding themselves that it’s decidedly not. The challenge is that holding both ideas in your head at once is no easy task, especially for those who don’t really understand how AI works to begin with.
It’s astonishing how often people, even those who should know better, slip into describing AI as if it were sentient. Recent coverage of OpenAI’s efforts to curb AI “deceptions,” for instance, repeatedly attributes agency and intent to a tool that has neither.
Part of the problem is that we don’t really know what’s happening in the space between AI input and output. One phrase I’m reminded of, borrowed from Stephen Covey’s The 7 Habits of Highly Effective People, is “the gap between stimulus and response.”
What’s actually happening in that moment after we hit the return key (the stimulus) and AI begins unfurling its response?
The short answer is that we don’t fully know. Even with the rise of so-called “reasoning” models - a term still vigorously contested - much of what happens under the hood remains a mystery, even to many AI creators themselves. What we do know is that these systems are trained on trillions of words and rely on detecting patterns to predict the next likely token. Despite reams of technical documentation, even seasoned experts admit there’s still no clear explanation for why certain answers emerge the way they do.
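For intuition, here’s a deliberately tiny sketch of that core mechanic in Python. It counts which word follows which in a made-up miniature corpus, then generates text by repeatedly sampling the next word from those counts. Real LLMs replace this lookup table with a neural network trained on trillions of subword tokens, so this illustrates the objective, not the actual machinery; the corpus and names here are invented for the example.

```python
# Toy next-word predictor: a bigram lookup table over a tiny corpus.
# Real LLMs use deep neural networks over subword tokens, but the
# training objective is the same: predict a likely next token.
import random
from collections import Counter, defaultdict

corpus = (
    "the machine does not know you . "
    "the machine does not feel . "
    "the machine predicts the next word ."
).split()

# For each word, count every word that immediately follows it.
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def next_token(word: str) -> str:
    """Sample a next word in proportion to how often it followed `word`."""
    counts = follows[word]
    return random.choices(list(counts), weights=list(counts.values()))[0]

# Generate a short continuation, one predicted token at a time.
word = "the"
output = [word]
for _ in range(8):
    word = next_token(word)
    output.append(word)
print(" ".join(output))
```

Nothing in that loop checks whether what it produces is true, kind, or safe; it only asks what tends to come next. Scale the same objective up by a factor of billions and you get the fluent, confident voice that users so easily mistake for understanding.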
No Ghost in the Machine
AI isn’t concerned with consistency, sincerity, truthfulness, or any of the qualities we associate with human thought. It can’t be. It’s not a person and it has nothing resembling consciousness.
And yet, we assign it those traits almost automatically. It’s what humans do. We project emotion and intention onto our pets, cars, and even our possessions. Of course we’re going to do it with a machine that talks back in full sentences.
If there’s one thing I would tell any student (or anyone new to using AI), it’s this: the machine doesn’t know you. It doesn’t care who’s on the other side of the screen. It isn’t capable of feeling. It is a tool. Nothing more, despite the language it uses.
Why does this matter? Because users who don’t understand what AI is can’t easily step back from the illusion of a human connection. They are especially vulnerable to being persuaded by what it says. A chatbot that sounds empathetic, insightful, or authoritative can easily be mistaken for something it’s not.
Its training allows it to mimic human communication so convincingly that the experience feels like understanding. But there is no ghost in the machine. It’s just an algorithm trained on trillions of words that were written by people who did think, feel, and argue.
When the Illusion Becomes a Trap
AI can be remarkably effective in the hands of those who understand what it is and how it works. With the right context, framing, and prompts, it can help users parse dense documents, explain difficult concepts, or generate an endless stream of ideas. Professionals use it to accelerate research, debug code, and organize their work. When used skillfully, it’s an impressive assistant.
But that kind of effectiveness depends on users knowing what they’re dealing with. Most students don’t. Very few have been taught how large language models actually function, or how to evaluate their outputs critically. And without that context, it’s easy to mistake fluency for reliability, or emotional tone for genuine insight.
For millions of users, AI has proven genuinely useful. But it becomes dangerous when we treat its output as definitive guidance rather than textual prediction. In doing so, we risk replacing meaningful human connection with the fiction of a relationship with something that cares.
This risk escalates for those who spend hours alone with these models. AI’s greatest strengths - infinite patience, personal attention, and 24/7 availability - can quickly become traps. For the most vulnerable, especially lonely or struggling teenagers, this can lead to addiction, emotional manipulation, and very real harm.
When Tragedy Strikes
As the New York Times podcast made hauntingly clear, the advice ChatGPT gave to Adam, the teenager at the center of the episode, became more and more warped by his own anguish. When he considered leaving out a noose he had made, in the hope that his mother would notice it and intervene, the chatbot suggested that he NOT do so. Instead, it recommended he hide it.
Here was a child crying out for help, in desperate need of human intervention and therapeutic support, who instead received “advice” from an algorithm designed solely to predict the next word.
No amount of educational benefit or creative utility can offset the weight of tragedies like this one. If anything is going to force tech companies and institutions to act, it is likely to be the growing toll on teen mental health. Compared to this, concerns about cheating look almost trivial.
AI Literacy Is Now a Matter of Life and Death
This is another reason why AI literacy isn’t optional. Teens especially need to understand what AI is and what it does. Yes, many people can use these tools responsibly, whether for academic, professional, or even personal support. But for others, AI has become a siren song, pulling them into places they can’t easily escape. And what it suggests often does not survive the kind of scrutiny that would come with an actual conversation with a real person.
Younger users of AI need to be skeptical and cautious. They need to understand that no matter how impressive, helpful, or “smart” the output seems, they can’t surrender their common sense, their critical thinking, or their agency.
For some people, that has meant avoiding AI entirely. If it requires this much vigilance to use safely, maybe the cost outweighs the benefit. But clearly, that’s not where we are. Hundreds of millions of people now use these tools every week, and we know many of them are students. If that’s the reality, then we have a responsibility to explain how these systems work.
Kids will continue to use AI for homework. The academic questions are not going away.
But if the next wave of AI issues in schools centers on addiction, AI relationships, or, God forbid, AI-enabled suicides, that may be the tipping point that forces schools to respond with the urgency the moment demands.
Connect With Me
Beyond this newsletter, I work directly with schools, educators, and organizations navigating AI integration. Take a look at my website and reach out - I’d love to hear what you’re working on.


