Scientists who use AI tools publish three times as many papers and get nearly five times more citations. They advance to leadership positions a year or two faster. By every individual metric, AI is a career accelerator.
So why are some researchers worried that AI is quietly narrowing the scope of science itself?
A sweeping analysis of over 40 million academic papers, led by James Evans at the University of Chicago, reveals an uncomfortable paradox: while AI dramatically boosts individual productivity, it simultaneously pushes research toward a smaller set of well-trodden problems. AI-assisted science, the study finds, occupies a narrower intellectual footprint and generates weaker networks of follow-on engagement between studies.
As Evans puts it: "You have this conflict between individual incentives and science as a whole."
Here's the mechanism. AI systems excel at well-defined, data-rich problems - protein structure prediction, image classification, pattern recognition in massive datasets. These are exactly the kinds of problems where progress comes fastest, publications come easiest, and citations accumulate. They're the tractable questions.
But science doesn't just need answers to tractable questions. It needs people willing to wander into poorly mapped territories, to ask questions where the data doesn't exist yet, where the methodology has to be invented from scratch. Those are the questions that lead to genuine conceptual breakthroughs - and they're exactly the ones AI doesn't naturally gravitate toward.
Without deliberate design choices, AI and the scientists using it cluster around the same popular, safe questions. As Luís Nunes Amaral, a physicist at Northwestern University, warns: "We are digging the same hole deeper and deeper."
This isn't AI's fault. The problem is how we've structured scientific incentives. Publish or perish. Impact factors. Citation counts. Grant success rates. Every part of the system rewards productivity and visible impact. AI amplifies that. If you can triple your publication rate by focusing on AI-friendly problems, and the tenure committee measures you by publication count, what are you going to do?
The study doesn't argue that AI is inherently narrowing. Rather, AI amplifies the field's existing biases toward incremental work on popular topics. The solution isn't to abandon AI - that ship has sailed - but to restructure how we reward scientific work.
