Peer review is broken. Not bending—broken. And AI-generated garbage is what finally snapped it.
Scientific publishing, the system that's supposed to separate real research from nonsense, is drowning in AI-generated papers, reviews, and citations. The flood is so bad that some journals are now spending more time detecting fake submissions than evaluating real science.
This isn't a theoretical problem. Researchers are finding papers with tell-tale AI phrases like "as a large language model" still in the text. Reference lists full of citations to papers that don't exist. Methods sections that describe impossible experiments. And peer reviews—the supposedly rigorous evaluation process—written by AI that doesn't understand the science it's reviewing.
The open-source project cURL just killed its bug bounty program because people were flooding it with AI-generated bug reports that looked plausible but were technically nonsense. If a single open-source project can't keep up with the spam, imagine what's happening at journals that process thousands of submissions a year.
Here's why this matters beyond academia: scientific publishing is infrastructure. Drug trials, climate research, materials science—they all depend on being able to trust published results. When that system breaks down, the consequences ripple outward.
The problem has multiple layers. First, there's the spam: people using AI to generate fake papers for resume padding or to game academic metrics. Then there's the lazy: researchers using AI to draft sections of real papers without checking whether the output is accurate. And finally, there's the invisible: AI trained on published research now generating text that sounds scientific but subtly mangles the actual science.
Journals are fighting back, but they're in an arms race they're losing. AI detection tools produce false positives that punish legitimate researchers. Manual review is too slow and expensive to scale. Some publishers are trying to verify submissions by checking if referenced papers actually exist—that's the baseline now.
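To make that baseline concrete, here is a minimal sketch of what a reference-existence check might look like, using the public Crossref REST API to test whether a cited DOI resolves to a known record. This is an illustration, not any publisher's actual tooling; the DOIs and variable names are examples, and a production check would also need to handle references without DOIs, preprints, rate limits, and retries.

```python
import urllib.error
import urllib.parse
import urllib.request


def doi_exists(doi: str) -> bool:
    """Return True if the Crossref index has a record for this DOI.

    Illustrative sketch only: Crossref answers 404 for DOIs it does not
    know, which catches fully fabricated citations but not real papers
    cited for claims they never made.
    """
    url = "https://api.crossref.org/works/" + urllib.parse.quote(doi)
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError as err:
        if err.code == 404:  # unknown DOI: likely a fabricated reference
            return False
        raise  # other failures (rate limiting, outages): don't guess


# Hypothetical reference list extracted from a submission
references = [
    "10.1038/171737a0",                    # Watson & Crick (1953); should resolve
    "10.1234/this-paper-does-not-exist",   # fabricated example; should not
]
suspect = [doi for doi in references if not doi_exists(doi)]
print("Citations that don't resolve:", suspect)
```

A check like this only catches the crudest fabrications: a citation can exist and still be cited for something it never said, which is exactly why existence checks are the baseline and not a solution.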
The Atlantic put it bluntly: "Peer review has met its match." The system assumes reviewers and authors are operating in good faith with human-level understanding. AI breaks both assumptions. It can generate content without understanding, and it costs nothing to flood a system with submissions.
What's the solution? There isn't an obvious one. You can't ban AI entirely—plenty of researchers use it legitimately for things like editing or literature searches. Verification doesn't scale. And the financial incentives are all wrong: journals make money from processing papers, not from catching fakes.
Some researchers are calling for a wholesale redesign of scientific publishing. Maybe open peer review where everything is transparent. Maybe requiring all papers to include reproducible code and data. Maybe burning the whole system down and starting over.
But here's the thing: science can't wait for publishing to figure itself out. Right now, researchers have to assume a meaningful percentage of published papers are partially or wholly AI-generated garbage. That's not a bug in the system. That's the system failing at its core function.
The technology is impressive. The impact on our ability to do science is catastrophic. And we're only at the beginning of this problem, not the end.




