Google's AI Mode, now rolling out to all users in the United States, will make AI-powered search the default in the largest and most dominant search engine in the world. By fall 2025, every student will be using AI to conduct “research” - many without realizing it because of how seamlessly it will be embedded into the platform. How should we rethink research when AI can do the heavy lifting for you?
The introduction of Google's AI Mode represents a fundamental transformation in search - and, in turn, in research - that most educators likely missed as the school year came to a close. When I tested it with one of our standard high school history research questions, it scanned over 100 sites in seconds, produced a solid overview of the issue, and instantly provided 17 targeted citations. This wasn't a specialized Deep Research model requiring sophisticated prompting. All I had to do was click AI Mode in the search bar. Subsequent follow-ups created a detailed outline, an annotated bibliography, and a rough draft of the essay.
As someone who has been teaching research skills for 30 years, I can think of no technology - other than the internet itself - that is upending the research process quite like AI-infused search tools.1 A research task that once might have taken hours or days can now be completed in seconds.
The Skill Gap We're Not Preparing For
Traditional research instruction assumes students will struggle to find useful information. We spend multiple classes teaching them how to locate materials, navigate databases, and distinguish between primary, secondary, and tertiary sources. That work requires patiently sifting through dead ends and irrelevant information - a process that ideally builds knowledge and invites critical judgment and evaluative thinking throughout. Students must choose among multiple avenues of inquiry, either by opening new links or by carefully reading brief descriptions and summaries before moving on.
AI Mode fundamentally changes how we interact with the internet and with any database or repository of source material.
Rather than presenting haphazard results for students to evaluate, AI Mode instantly delivers a curated overview with pre-vetted citations and a synthesized summary. Google has displayed AI Overview blurbs for months, but AI-powered search will now become the default mode.
The new layout produces a much longer AI summary (with the option to “Dive Deeper”) and includes a sidebar of 10-20 citations most relevant to your question. Diving deeper lets you expand your inquiry and build out the response into any format, just as you would with any other chatbot.2
The change is subtle but significant, especially given the prevalence of AI summaries in almost every content platform we use.
The scattershot research approach we've used since the dawn of the internet may simply dissolve into the AI landscape. We may look back on pre-AI research the way we now view library card catalogs.
I’ve seen students spend hours in databases with little to show for it. Boolean searches are unwieldy, unpredictable, and often return results that are either too broad or too narrow. AI-powered search is more efficient, capturing more relevant initial results and delivering them instantly, organized in any format requested. But that efficiency comes at a cost.
Two years ago I worried that ChatGPT might be able to write student research papers. Now Google's search engine can complete entire assignments with virtually no effort on the student's part.
The fundamental research skill is shifting from source identification to content analysis - from finding information to reading, organizing, synthesizing, and evaluating it. Almost any research assignment - from a middle school book report to a more advanced research paper - will be possible to complete inside the world’s most popular search engine. How will that fact impact teaching and learning going forward? What’s gained and what’s lost?
Reading Skills Are More Important Than Ever
Headlines regularly highlight writers publishing newspaper articles, academic reports, and legal briefs with fabricated citations. The ease with which AI can create lengthy, detailed, and sophisticated documents lulls users into a false sense of security and overwhelms the proofreading and fact-checking process.
AI critics jump on these examples as evidence of AI’s limitations, but I think they are missing the point. These are predominantly human errors. As long as AI makes mistakes, there will always need to be a human in the loop.
Humans need to actually read what AI produces. For students, this means teachers must re-emphasize deep reading as a skill - not merely skimming for answers, but evaluating arguments, checking citations, and understanding how sources connect.
While the importance of teaching and emphasizing close reading skills seems so obvious as to be beyond dispute, the evidence that even professionals are skipping it underscores the point. In a world drowning in AI-generated text, those who thrive will be the readers who can cut through the verbiage and locate the value in unprecedented information access.
If Google’s AI Mode (as well as the more robust Deep Research models) can synthesize unlimited content from hundreds of sources instantly, understanding what we're reading - how it was created and what we intend to do with it - becomes critical.
AI Research as an Accelerant, Not a Replacement
Google’s own examples of AI Mode center on personal and commercial use cases, such as researching travel options or comparing company stock trajectories. These are often standalone inquiries that don't require significant follow-up.
Right now, the most valuable academic sources and books are paywalled. This limits AI research effectiveness for professional researchers, though that will likely change as AI models gain access to digitized libraries and databases.3
For students, AI can accelerate the research process, not replace it. A well-crafted prompt can give students a primer on their topic with dozens of citations to pursue. The question is whether students will actually pursue those leads or stop at the AI-generated overview.
Teachers will need new assessment approaches that account for AI research assistance. This might involve:
• Requiring students to document their research process
• Emphasizing synthesis and comparison of sources rather than source discovery
• Teaching students how to interrogate and challenge AI-generated insights
• Focusing on developing original arguments from AI-assembled information
We must teach students that AI overviews - whether from Google's AI Mode or Deep Research tools - are like sophisticated Wikipedia entries. They're excellent starting points, but they're not the destination. The critical follow-up research begins after reading what AI provides.
The Practical Challenge for Fall 2025
Within seconds of testing Google's AI Mode, I had an outline, a bibliography, and a detailed overview of the topic that would have taken hours to compile manually. The results weren't perfect, but they were remarkably sophisticated starting points.
Most educators are unprepared for this change. Many aren't even aware that Google's AI Mode exists or what Deep Research models can do. While academic debates about AI continue, students will be incorporating these tools into their default research behavior.
Rather than lamenting lost skills, educators should re-emphasize the core academic practices that make research valuable.
Human judgment, comparative and evaluative skills, meta-analysis, and context are all required to research and learn. While AI will change the technical mechanics of research, the way researchers must engage with information endures.
Educators who thrive in an AI-saturated world will continue teaching reading, evaluation, and synthesis skills even as students gain unprecedented information access. By September, traditional research assignments that ignore how easily students can produce volumes of information - and that neglect these essential skills - will be more vulnerable than ever.
Fantastic post, Steve! This is precisely the information that everybody teaching at the undergraduate university level needs to know. Students are using search tools without thinking through their responsibility to vet what the program shows them. You are reminding us that this skill is incredibly valuable and has to be taught if we are to get any value from our assignments.
Interesting points. In the long run, students will use AI in their careers without restriction, so I wonder if we might design assignments that are more similar to real-world work, which might discourage generic AI-generated "slop" and instead value unique writing and insights that you can't easily generate from AI. I can imagine it's difficult to grade this fairly and at scale, but maybe we could develop some ways.