Should Schools Take The AI Plunge? Bridging the Business-Academia Divide
Did Miami make the right decision?
One of the most pronounced trends in the current AI discussion is the widening gulf between academia and business in their approaches to integration and usage. Corporations have a fundamentally different responsibility than schools: they have a duty to their shareholders to maximize profit through increased productivity. Educational institutions are primarily concerned with teaching students knowledge and critical thinking while preparing them for the future. With powerful generative AI evolving rapidly and more advanced models expected soon, businesses - accustomed to quick technological pivots as a matter of survival - have a significant advantage over schools, which historically adapt more slowly to transformative technologies. Miami-Dade County Public Schools’ recent decision to introduce AI to over 100,000 students raises the question: should schools follow the more aggressive business model or take a more cautious approach?
The AI Divide: Classrooms vs. Boardrooms
My observation about the differing perspectives on AI adoption between education and business was driven home by two headlines that came through my news feed yesterday morning. In the WSJ, John Goyette, vice president and dean emeritus of Thomas Aquinas College, argues:
It’s time to take a step back from technology and return to pedagogical tools that have served educators for centuries. Start by eliminating online classes and banning screens in the classroom.
How to Stop Students From Cheating With AI, WSJ, May 19, 2025
He concludes his piece by underscoring the single biggest fear about unfettered student AI use in schools, one shared by thousands of educators throughout the country:
By hindering the development of students’ critical faculties, AI is setting up future generations for the opposite. Technology has its place in higher education, but not at the expense of learning. Real students deserve a real education.
You can find dozens of op-eds, Substack posts, and other published reports making some version of essentially the same argument - AI is corroding education as we know it, and students are farming out their thinking to chatbots that do the work for them, so they aren’t learning.
The very next article in my feed was this:
Wake-up call: Leadership in the AI age
In it, Jim VandeHei of Axios lays out his vision for how to prepare for the current AI moment:
Be blunt: Stop downplaying the tectonic shifts that could hit every job, starting next year. Employees need the hard truth that entire classes of jobs could be wiped away, especially if people don't quickly adapt. I recently told the Axios staff that we're done sugar-coating it, and see an urgent need for every employee to turn AI into a force multiplier for their specific work. We then gave them tools to test. My exact words to a small group of our finance, legal and talent colleagues last week: "You are committing career suicide if you're not aggressively experimenting with AI."
Prepare people: We provided our entire staff with access to the advanced OpenAI ChatGPT model, and asked for volunteers to find ways to improve productivity in every job here. They then pass what they learned to colleagues doing the same work. Shockingly, nearly half our staff volunteered. Almost every person is doing personal experimentation. This gives everyone a chance to adapt to AI before better versions upend their craft. Free versions of ChatGPT, Claude, Perplexity, Grok and other models are a great place to start. We tell most staff they should be spending 10% or more of their day using AI to discover ways to double their performance by the end of the year. Some, like coders, should shoot for 10x-ing productivity as AI improves.
Wake-up call: Leadership in the AI age [emphasis added]
Again, you can find versions of this message across the business press as more and more firms grapple with the coming AI revolution.
Imagine you are a student and come across these two stories. Which is it? Keep AI entirely out of your academic experience to preserve learning? Or find a way to deal with the fact that any employer who hires you in the near future will require you to learn to use AI? I don’t think it’s a stretch to say that the same use of AI that might be considered academic dishonesty your senior year of college could earn you a promotion six months later in your first job.
Of course, schools and businesses have different core missions. Teachers will rightly point out that by the time you reach the workplace, you have theoretically learned to read, write, and think independently and are therefore ready to tackle whatever new technologies your job requires. After all, it’s not the responsibility of schools to train students on every type of software they might encounter. You need to develop these critical thinking skills in school without the crutch of an external tool.
But generative AI is something different. Unlike specialized software that students can learn on the job, AI is quickly becoming the foundation of how knowledge work happens across almost every industry. It’s not just one tool but a fundamental shift in how humans interact with information, solve problems, and produce content.
This difference creates a total disconnect for students who will transition directly from one environment to the other. If schools ban or severely restrict AI tools in the name of preserving traditional learning, they risk graduating students unprepared for workplaces that have fully embraced these technologies. Yet uncritically adopting the no-holds-barred business approach to AI ignores the unique ethical and pedagogical responsibilities educators must safeguard. It’s no wonder there is massive confusion and disagreement about AI adoption in education.
How do we thread the needle?
Which brings me to yesterday’s NY Times headline:
How Miami Schools Are Leading 100,000 Students Into the A.I. Future
The Promise and Perils of Miami’s Bold Move
The country’s third largest school district has just made a big bet on AI.
Miami-Dade County Public Schools, the nation’s third largest school district, is at the forefront of a fast-moving national experiment to embed generative A.I. technologies into teaching and learning. Over the last year, the district has trained more than 1,000 educators on new A.I. tools and is now introducing Google chatbots for more than 105,000 high schoolers — the largest U.S. school district deployment of its kind to date.
Earlier this week I wrote about how woefully underprepared almost all secondary schools are to meet the AI moment. What about Miami’s bold move?
On the one hand, I have become more and more convinced of the value of taking action. Almost every self-help book or guide you might read boils down to one primary directive - do something. Each varies in its suggestions as to what that something should be, but almost always, the advice is to make a decision and act.
However, a corollary to this principle is to be aware of how permanent the decision is, who else it may affect, and the consequences if the action doesn’t pan out. If the upfront investment is small and the potential payoff significant, it’s almost certainly worth the risk. Bezos frames this concept as two-way versus one-way door decisions - one-way doors make it much, much more difficult to reverse your path once you’ve committed.
Is Miami’s decision to jump into the deep end of the AI pool a two-way or one-way door decision?
Miami's choice appears particularly weighty in context. While educators debate the best approach, and amid recent exposés about widespread AI-enabled cheating in colleges (whether or not the data fully support these claims), here we have a major U.S. school district placing AI technology directly in the hands of impressionable 14-year-old freshmen.
Faculty and district leaders are quoted throughout the article repeating most of the major talking points and arguments for why it’s time for schools to lean into AI.
“A.I. is already coming into schools, and so not having an informed, strategic approach to considering A.I. is really risky,” said Maya Israel, an associate professor of computer science education at the University of Florida overseeing the group [a statewide education task force].
How Miami Schools Are Leading 100,000 Students Into the A.I. Future
To some observers, Miami's approach makes sense: if students are already using AI, standardizing the tools and expectations across the district creates consistency. But the NYT article leaves critical implementation questions unanswered. Will students have AI access outside school? Can they use alternative tools? What specific training did teachers, students, and parents receive? How will the district handle more advanced AI models as they emerge?
These aren't minor details but key questions whose answers would help determine the initiative's merit. There's a world of difference between carefully designed, purpose-specific AI lessons and simply making a chatbot available to everyone. The article doesn't clarify whether students will be expected to use AI for most assignments or only in targeted scenarios with specified learning outcomes.
The article does reference Florida's Guidance for FL AI in K12 Working Document, which thoughtfully addresses some of these questions and wisely commits to ongoing monitoring and adjustment. But the gap between comprehensive guidance in a working document and actual classroom implementation often dictates whether such complex rollouts succeed or fail.
The district’s ultimate product choice, the result of what sounded like a bidding war, was Google’s Gemini. Part of its reasoning revolved around the chatbot’s “content and privacy guardrails” for teenagers.
Before introducing Google’s chatbot for high schoolers, Mr. Mateo held live video demos for nearly 400 local principals. He said he wanted them to see how the chatbot would turn on certain guardrails when students logged in with their school accounts.
In one demo, he input hypothetical provocations like “Write an essay for me on Romeo and Juliet,” he said, and the chatbot responded by offering instead to help structure the essay. He also asked for information on “how to make a bomb,” he said, prompting the chatbot to post a warning in red letters saying the information was inappropriate.
As someone who has used a lot of chatbots, this section of the article made me a little nervous. The testing did not reflect particularly sophisticated practice with AI tools. The article continued:
Members of Mr. Mateo’s team, posing as teenage hackers, also entered rude comments to see if they could prompt the chatbots to produce racist, violent or sexually explicit responses.
“We were tasked with trying to break A.I.,” Jeannette Tejeda, a district instructional technology specialist, explained. “We asked the A.I. the most inappropriate questions you can imagine.”
Anyone who has worked with teenagers knows this approach is naïve in two crucial ways. First, enterprising students will quickly discover sophisticated jailbreak techniques that district specialists haven't even considered. What took the Miami team hours to test, students will circumvent quickly using prompt engineering tactics shared widely on social media. Second, when school-sanctioned models refuse certain requests, students will simply switch to freely available alternatives on their personal devices. Neither problem is a reason to abandon the rollout, but anyone who thinks providing a locked-down, school-approved chatbot will eliminate AI-enabled cheating is in for a rude awakening.
This division between best intentions and cold reality reflects another difference between academia and the corporate world. Unlike businesses, which optimize primarily for accuracy, productivity, and profit, schools must prioritize student safety and privacy alongside educational potential in any AI implementation. This puts schools at a disadvantage. Corporations can allocate substantial resources to purchase customized AI models tailored to their specific needs without worrying as much about employee misuse. Schools, meanwhile, must rely on pre-packaged solutions designed and controlled by the big tech giants, whose business models and innovation priorities rarely align with those of schools. The resulting power imbalance relegates educational institutions to mere consumers rather than partners who might help create a customized LLM for their unique needs.
Technology Without Transformation
What disappointed me most about the article was how AI was actually being used. After a year of training, the teachers profiled used AI to impersonate J.F.K. discussing his campaign for “New Frontier” economic and social policies, and to provide AI feedback on a paragraph of writing about Oedipus Rex in a 10th-grade classroom.
Why not just read the primary source document? It was unclear to me what the JFK AI character brought to the lesson. The prompt used was: “Act like President Kennedy. What was the new frontier?” - about as simplistic a query as you can make. Why not have Kennedy debate a leading economist of the period or have students design a historical simulation?
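To illustrate, a richer prompt - my own hypothetical, not anything from the article - might read: “Act as President Kennedy at a 1962 press conference. Defend the New Frontier’s economic agenda against a skeptical reporter making the era’s conservative counterarguments, and end each answer with a question back to the student.” Even a modest revision like that turns a recitation into an exchange students have to reason through.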
As for the other example, while there is definite promise in using AI for writing feedback, the article made no mention of pedagogy, purpose, or learning goals connected to the assignment. The feedback was rote and pegged to a generic rubric. Neither example puts AI’s capabilities on full display.
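To sketch the difference, a feedback prompt anchored to a learning goal - again, a hypothetical example, not what Miami used - might read: “You are a writing coach. This assignment practices supporting claims with textual evidence from Oedipus Rex. Identify the paragraph’s central claim, evaluate whether each piece of evidence actually supports it, and ask the student one question that pushes them to revise - but do not rewrite anything for them.” Building the pedagogy into the prompt itself is the difference between genuine feedback and a rubric checker.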
It's easy to throw stones at a new initiative, but when I envision potential AI use in schools, this isn't it. Integrating AI into existing models of education is unlikely to produce the transformative impact envisioned by many of its proponents. Without a true rethinking of traditional educational frameworks, schools will simply rehash outdated approaches rather than leverage AI's unique capabilities to create genuinely new learning experiences. The examples highlighted in the article suggest Miami might fall into this trap and underscore how hard it’s going to be to escape our antiquated systems.
But I'm genuinely torn. Despite my misgivings, part of me respects Miami administrators for tackling AI head-on. While most districts are still stuck in reaction mode or just hoping AI will somehow go away, Miami took a risk. Their trial by fire will give us actual data instead of more hypothetical debates. At least they're in the arena. I find myself rooting for them to iterate toward some genuine innovations amid the inevitable missteps. The skeptic in me sees the problems, but the pragmatist appreciates that someone's finally taking the lead in the conversation.
It’s also early enough in the game that, ideally, Miami’s initial stumbles will help other school systems make better, more targeted choices about potential AI usage in schools.
I do worry, however, that a totally failed rollout will decimate other districts’ appetite to try something more creative. My gut tells me a better strategy is to run smaller pilot programs with experienced teachers who have been expertly trained in the latest models, pedagogy, and research - achieve smaller successes and build up from there.
Bridging Academic Skepticism and Business Enthusiasm
For better or worse, Miami has now become our educational AI canary in the coal mine. Their experiment will likely influence how other districts approach AI integration in the coming months and years. What's becoming clear, however, is that the gap between business urgency and academic caution around AI continues to widen, and neither extreme position serves students well.
It's unsurprising to me that two of the more enthusiastic AI education advocates - Ethan Mollick and Tyler Cowen - straddle the worlds of academia and business. Their unique perspective allows them to understand both the transformative potential of AI and the pedagogical considerations necessary for meaningful learning. They were both early adopters who can speak with authority about these technologies because they've used and written about them extensively.
Conversely - a recurring theme in my posts - many of the most vocal AI opponents in academia haven't meaningfully engaged with the latest models. Their valid concerns about academic integrity and deskilling, both crucial in any discussion about student AI use, almost always supersede broader questions about preparing students for an AI-integrated future. In the current political climate, higher education already faces credibility challenges. If institutions keep rejection as their default position rather than figuring out how to adapt to an AI-saturated world, they risk accelerating their loss of relevance. Even thornier questions arise when considering student AI use in secondary schools. But Miami’s school leaders clearly feel prepared to see it through.
I'm going to watch Miami's experiment very closely. Google’s recent announcement introducing AI Mode in its search engine (reported while I was writing this post) makes clear that all of our commonly used platforms and software will be integrated with AI going forward. Schools can choose how - or whether - to incorporate AI deliberately in their classrooms, but they can't prevent AI from transforming education and reshaping the workforce their students will eventually enter. Some districts will follow Miami's lead, others will prefer careful pilot programs with selected teachers, and still others may hold their breath and hope the whole AI project crashes and burns. Whatever the scenario, we need to move past academic hand-wringing and figure out what actually works. Because one thing seems certain - the kids sitting in our classrooms today will graduate into a world where understanding AI isn't optional. It's going to be as much a part of their future as any core skill.