Peer review is broken. Not bending—broken. And AI-generated garbage is what finally snapped it.
Scientific publishing, the system that's supposed to separate real research from nonsense, is drowning in AI-generated papers, reviews, and citations. The flood is so bad that some journals are now spending more time detecting fake submissions than evaluating real science.
This isn't a theoretical problem. Researchers are finding papers with tell-tale AI phrases like "as a large language model" still in the text. Reference lists full of citations to papers that don't exist. Methods sections that describe impossible experiments. And peer reviews—the supposedly rigorous evaluation process—written by AI that doesn't understand the science it's reviewing.
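The crudest of these artifacts are cheap to flag mechanically. Here is a minimal sketch of that kind of screen; the phrase list and function name are illustrative, not any journal's actual tooling:

```python
# Illustrative boilerplate phrases that LLMs sometimes leave behind in text.
# Real screening tools use far larger, regularly updated lists.
TELLTALE_PHRASES = [
    "as a large language model",
    "as an ai language model",
    "i cannot browse the internet",
    "regenerate response",
]

def flag_telltale_phrases(text: str) -> list[str]:
    """Return the telltale phrases found in a submission, case-insensitively."""
    lowered = text.lower()
    return [phrase for phrase in TELLTALE_PHRASES if phrase in lowered]

abstract = "Certainly! As a large language model, I cannot verify these results."
print(flag_telltale_phrases(abstract))  # ['as a large language model']
```

A scan like this only catches the laziest submissions, of course; anyone who proofreads their AI output sails straight past it.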
The open-source curl project just killed its bug bounty program: people were flooding it with AI-generated vulnerability reports that looked plausible but were technically nonsense. If one open-source tool can't handle the spam, imagine what's happening to journals processing thousands of submissions annually.
Here's why this matters beyond academia: scientific publishing is infrastructure. Drug trials, climate research, materials science—they all depend on being able to trust published results. When that system breaks down, the consequences ripple outward.
The problem has multiple layers. First, there's the spam: people using AI to generate fake papers for resume padding or to game academic metrics. Then there's the lazy: researchers using AI to write sections of real papers without checking if it's accurate. And finally, there's the invisible: AI trained on published research now generating text that sounds scientific but subtly mangles the actual science.
Journals are fighting back, but they're in an arms race they're losing. AI detection tools produce false positives that punish legitimate researchers. Manual review is too slow and expensive to scale. Some publishers are now trying to verify submissions by checking whether the referenced papers actually exist. That's the baseline now.
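That existence check can be partly automated. The sketch below shows only the cheapest first pass, pulling DOI-like strings out of a plain-text reference list and flagging lists that cite none at all; the regex and function names are illustrative, and a real pipeline would go on to resolve each DOI against a registry such as Crossref:

```python
import re

# Loose pattern for modern DOIs (10.<registrant>/<suffix>); illustrative only.
DOI_PATTERN = re.compile(r"\b10\.\d{4,9}/\S+\b")

def extract_dois(reference_list: str) -> list[str]:
    """Pull DOI-like strings out of a plain-text reference list."""
    return DOI_PATTERN.findall(reference_list)

def suspicious_references(reference_list: str) -> bool:
    """Flag a reference list that cites no resolvable identifiers at all.

    Zero DOIs isn't proof of fabrication, but it's a cheap signal to
    compute before a human ever looks at the submission."""
    return len(extract_dois(reference_list)) == 0

refs = "Smith et al. (2021). doi:10.1000/xyz123. Jones (2019), no DOI given."
print(extract_dois(refs))  # ['10.1000/xyz123']
```

Even this toy version illustrates the asymmetry: the check costs milliseconds, but a fabricated citation that passes it still takes a human to catch.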
The Atlantic put it bluntly: "Peer review has met its match." The system rests on two assumptions: that authors and reviewers operate in good faith with genuine understanding, and that producing a submission takes real human effort. AI breaks both. It can generate fluent content without understanding anything, and it makes flooding a system with submissions essentially free.
