Scientists who use AI tools publish three times as many papers and accumulate nearly five times as many citations. They reach leadership positions a year or two sooner. By every individual metric, AI is a career accelerator.
So why are some researchers worried that AI is quietly narrowing the scope of science itself?
A sweeping analysis of over 40 million academic papers, led by James Evans at the University of Chicago, reveals an uncomfortable paradox: while AI dramatically boosts individual productivity, it simultaneously pushes research toward a smaller set of well-trodden problems. AI-assisted science, the study finds, occupies a narrower intellectual footprint and generates weaker networks of follow-on engagement between studies.
As Evans puts it: "You have this conflict between individual incentives and science as a whole."
Here's the mechanism. AI systems excel at well-defined, data-rich problems - protein structure prediction, image classification, pattern recognition in massive datasets. These are exactly the kinds of problems where progress is fastest, where publications come easiest, where citations accumulate. They're the tractable questions.
But science doesn't just need answers to tractable questions. It needs people willing to wander into poorly mapped territory, to ask questions where the data doesn't exist yet and the methodology has to be invented from scratch. Those are the questions that lead to genuine conceptual breakthroughs - and they're exactly the ones AI doesn't naturally point us toward.
Without deliberate design choices, AI and the scientists using it cluster around the same popular, safe questions. As Luís Nunes Amaral, a physicist at Northwestern University, warns: "We are digging the same hole deeper and deeper."
This isn't AI's fault. The problem is how we've structured scientific incentives. Publish or perish. Impact factors. Citation counts. Grant success rates. Every part of the system rewards productivity and visible impact. AI amplifies that. If you can triple your publication rate by focusing on AI-friendly problems, and the tenure committee measures you by publication count, what are you going to do?
The study doesn't argue that AI is inherently narrowing. The argument is that AI deployed within existing incentive structures amplifies science's existing bias toward incremental work on popular topics. The solution isn't to abandon AI - that ship has sailed - but to restructure how we reward scientific work.
We need to value exploration, not just exploitation. We need to fund risky, weird questions that might go nowhere. We need to recognize that some of the most important science takes a decade to bear fruit and might never accumulate 10,000 citations.
Here's what makes this particularly urgent: science is increasingly AI-mediated. The generation of researchers being trained right now is learning to think with AI, not just use it as a tool. If the questions they're taught to ask are the ones AI is good at answering, we're training a generation to dig that same hole deeper.
Evans is clear that this isn't inevitable. We can design AI systems that encourage exploration. We can build incentives that reward asking unconventional questions. We can structure grants to support high-risk, high-reward research even if it doesn't generate papers every six months.
But we have to do it deliberately. Left to its own dynamics, the system will optimize for what's measurable and immediate. That's not a conspiracy; it's just how incentives work.
The universe is vast and weird and full of phenomena we haven't even noticed yet. It would be a shame if our most powerful tools ended up pointing us all in the same direction, missing the strange signals at the edges because they don't fit the pattern.