Fantastic post, Steve! This is precisely the information that everybody teaching at the undergraduate level needs to know. Students are using search tools without thinking through the responsibility to vet what the program shows them. You are reminding us that this skill is incredibly valuable and has to be taught if we are to get any value from our assignments.
Thanks, Howard. The scary thing to me is that, based on the trajectory we're seeing with AI integration, it's going to be so ingrained in everything (it is to some extent already) that future generations of students (I have a 5th grader) will not even realize they are using AI. I wrote a post 4 months ago about the question of whether students should "choose" to use AI in their academic work. In less than half a year, that post seems superfluous - EVERYONE who uses Google will likely be using AI whether they want to or not. The question now, I think, is what we do about that. I know there are still resisters out there, but I just don't know how they exist in a world where we will be surrounded by it on all fronts.
Interesting points. In the long run, students will use AI in their careers without restriction, so I wonder if we might design assignments that are more similar to real-world work, which might discourage generic AI-generated "slop" and instead value unique writing and insights that you can't easily generate from AI. I can imagine it's difficult to grade this fairly and at scale, but maybe we could develop some ways.
It would be terrible if Google, an advertising company, succeeds with AI and makes it as awful an experience as Google search is. Google search is anti-knowledge and anti-productivity.
I worked in higher ed for nearly 20 years, advising students and teaching at a large public university, before leaving a few years ago.
I can promise you: educators are not ready for what’s coming.
AI is going to force a complete shift in how we teach. Even writing a dissertation could soon take little effort or energy. That alone raises big questions.
But what concerns me most isn’t the speed or ease, it’s the accuracy.
When students rely on AI tools to generate citations, they’re often pulling from questionable sources. Paywalled, peer-reviewed research is frequently bypassed in favor of blogs or secondary references with no original data. I’ve seen citations to personal websites or unsourced claims that look legitimate unless you double check.
The problem? Most people won’t double check. Many never have.
If that becomes the norm, we're looking at a future filled with research papers and articles based on weak or, worse, false information.
Agreed. Volume is being favored over precision. But here's another thought - what happens if the accuracy problem is more or less solved? What if AI papers get as good as those of human researchers? I feel like that's almost more frightening in some ways.
Well, that’s a thought. At some point it probably will be, and not too far into the future, either.
Once you’re done, I suggest verifying the results using a non-enhanced search.
Fascinating post. I appreciate you offering solution ideas instead of settling for just doom and gloom. I think that teaching the skills behind a proper research process should be preserved at all costs. Emphasizing deep reading, evaluating sources, comparing arguments, documenting the process, etc. in academia will lessen the degree to which students' research practices are affected by the expedient and opaque features AI tools offer. After all, solid research relies on conducting a rigorous process rather than solely aiming for an impressive product.
Process over product is going to be key for instruction. As researchers mature, they can take on larger and larger AI "products" and work with them effectively, but for the novice, Deep Research reports can be overwhelming unless you know how to prompt carefully. But I told my students today that close-reading is going to become an even more valuable skill, especially given how much student reading scores are dropping in the U.S. You want to be the one to catch the "hallucination" in the report, not the one submitting something with a "Do you need anything else?" at the bottom of it.
"AI Mode instantly delivers a curated overview with pre-vetted citations and a synthesized summary." This is amazing! Although there are a couple of minor drawbacks. The overview is often fundamentally inaccurate, the citations often don't exist and the fundamental details are almost always wrong. Other than that, it's basically perfect and we absolutely should be promoting it as an easy shortcut ("accelerant") rather than encouraging students to actually do research.
Hmmm. So what will be the future of research? I'm not sure what models you're using but to say that the overview is inaccurate, incorrect, or wrong is just not true. There are still issues, but if you prompt it carefully you will get good results. I'm not sure who is promoting it so much as it's simply where we are headed - AI will be the predominant way most future grad students will do digital research whether we like it or not.
The future of research will be a continuation of the model that's produced solid results for centuries. Inserting a lazy shortcut that is little more than a fancier magic 8-ball is not the way. LLMs do not understand what they are interacting with, they simply guess the next word in the sentence with the added spin of applying preference based on your user profile. There is no solution to hallucinations. There is no solution to the inaccuracies inherent to the LLM's total lack of subject knowledge. The companies producing them prioritise fast, shallow, sycophantic interactions because that's what the overwhelming majority of people want. And that is what research will become if we continue to lionise the current AI toolset. If this is how future grad students are going to do research, then we should cut science funding and spend it on something that will actually be useful. Seances and crystal reading, perhaps.
Saying that good prompts can get around all of these misses the wood for the trees and blames the user for the tool's failures. Saying that AI can draw connections that humans might miss obscures the fact that drawing a hundred random conclusions, a couple of which might be correct, is not a useful approach. If you follow up on the stories about amazing AI breakthroughs you will quickly find that they are debunked shortly afterwards during peer review, but there's no money in advertising the failures. But there is in falsely pushing their success. The tech companies are spending huge advertising budgets and pushing AI out, unasked (try getting a Meta or Google or Microsoft product without an AI attached), because they need a return on their massive investment.
AlphaFold? Sorry, I think it's way too premature to make those kinds of declarative judgments. I think it's hard to deny that AI will become part of the research process.
Thank you for your thoughtful and polite responses in the face of my cynicism. I agree that it will likely become a part of the research process (humans are way too vulnerable to lazy shortcuts) but I mourn the progress that has and will be lost as a result. Hopefully time will prove me wrong, but I think we will need a revolutionary rather than evolutionary leap from the existing toolsets and it seems pretty clear that isn't on the cards.
The impression I have is that what remains unchanged, and is only increasing in importance, is the ability to ask good questions as the basis of conducting research.
Thanks, Matt. That's a huge point I forgot to cover in the post - the ability to ask unique, insightful, and powerful questions which might force the models to do what they do best - draw connections among disparate sources that humans might miss - will absolutely be one of the most critical skills in the new age.
I love this post. You’re clearly an educator: you’re seeing a deeper argument than most are writing about today. You’re connecting the dots to answers that you don’t want to answer directly; but ones that you’re equipping us to answer on our own. It’s beautiful in that way. And in that, you’re highlighting one of the open questions I still have about AI: what will be missed…
“It requires patiently sifting through dead ends and irrelevant information - a process that ideally builds knowledge and invites critical judgment and evaluative thinking throughout.”
If the answer is just given to us, how much creativity will we lose? How much critical thinking will we be capable of? Will we be able to make creative leaps that bridge to new ideas?
Thanks, Jason. It's definitely top of mind for me as I watch the entire edifice of "standard" research practices crumble under the tsunami of AI. There are definitely benefits to be had, but I'm just not certain how much people realize what's going to change until it's already gone. I don't necessarily think the primary way most kids "research" today is anything sacred - many just write a few words in Google, click the first couple of links, and then call it a day - but there is value in not knowing something and having to work to get it. I'm fascinated by the whole thing but also really unsure about where it's all headed.
Thanks for raising this, Steve! I certainly would like to be able to choose when to use genAI and not.
So far, I’ve been adding -AI at the end of my Google search and it REMOVES the AI answers.
Thought maybe others may want to try that too.
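For anyone who would rather script that trick than retype it every time, here's a minimal sketch in Python. It simply builds a standard Google search URL with " -AI" appended to the query; that this suppresses the AI answers is an assumption based on the tip above, not anything Google documents or guarantees.

```python
# Minimal sketch of the "-AI" tip above: build a Google search URL with
# " -AI" appended to the query. That this hides the AI answers is an
# assumption based on the comment above, not a documented Google feature.
from urllib.parse import urlencode

def search_url_without_ai(query: str) -> str:
    """Return a Google search URL for `query` with " -AI" appended."""
    return "https://www.google.com/search?" + urlencode({"q": f"{query} -AI"})

if __name__ == "__main__":
    # Example: prints a URL you can paste into a browser.
    print(search_url_without_ai("peer-reviewed studies on reading comprehension"))
```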
That's a great tip. I'm wondering if at some point they will just make AI the default. But now that it's essentially going to be the norm, I don't know how it goes backwards. Already kids are using ChatGPT as if it's Google so I think this is how it's fighting back. If this is 2025, where are we going to be in 2026? 2027? 2030?
Great point! Exactly why I started my newsletter on parenting & AI because these kinds of questions shouldn’t only be in the heads of educators but in conversations at home.
Super helpful Steve, especially as I gear up for my "Academic Reading" course for PhD students next Fall. Thanks
Great post. It sounds like Google AI search is better than Perplexity? And have you asked it yet how Google is going to replace all the search advertising revenue in an AI search world?
Don’t know if it’s better than Perplexity yet, but since it’s the world’s largest search engine, it means everyone will be using it. And, yes, lots of people are writing about whether this will destroy their advertising model, but they seemed optimistic (of course) at the Google I/O 2025 conference that it wouldn’t hurt their bottom line. We’ll see.
Very helpful post, Steve. Many thanks. As I was reading, I was trying to think of how student research might become more local, in person, and face to face. Maybe they could then try to bring that research into dialogue with what the AI has come up with.