Tech journalism outlet Ars Technica just fired a reporter who used AI to generate fake quotes in an article. The irony is almost too perfect: a publication that covers technology for a living, undone by the very tools it reports on.
Benj Edwards, a senior AI reporter at the Condé Nast-owned outlet, was fired after using an "experimental Claude Code-based AI tool" to generate paraphrased quotes that he then attributed to a source who never said them. The story was retracted two days after publication, and Edwards is now out.
According to TheWrap's reporting, the story in question was about an AI agent that generated a negative article about an engineer who rejected its code. Edwards co-wrote it with senior gaming reporter Kyle Orland, who had "no role in this error."
Here's what actually happened, based on Edwards' own explanation: he was recovering from COVID with a high fever and used AI tools to help process content. When one tool hit content policy violations, he pasted text into ChatGPT to understand why. The AI paraphrased rather than quoted verbatim, and Edwards failed to verify the quotes against the original source before publishing.
Editor-in-Chief Ken Fisher's retraction was blunt: the story contained "fabricated quotations generated by an AI tool and attributed to a source who did not say them," violating the outlet's policy requiring that AI-generated material be clearly labeled.
Let's be real about what this represents. This is exactly the kind of AI misuse that tech journalists have been warning about for years. The tools are powerful enough to sound authoritative while being factually wrong. They'll confidently generate plausible-sounding quotes that were never said. And if you don't verify, you'll publish fabrications.
