Here's an uncomfortable truth about AI coding assistants: they might be making experienced developers slower, not faster. A study from METR found that experienced developers using AI tools on real tasks in codebases they knew well were 19% slower, and a follow-up study reached essentially the same conclusion. Meanwhile, 93% of developers are now using these tools, and most report feeling about 20% faster.
That perception gap is fascinating. Everyone feels more productive while the clock says otherwise. The Reddit post breaking down the study nails why: "Writing code was never the bottleneck for experienced devs. Copilot bangs out a function in 2 seconds but then you spend 10 minutes reading it, verifying edge cases, checking if it fits the architecture you actually have."
This matches what I've seen talking to engineers. AI tools are incredible at generating boilerplate, completing patterns, and suggesting implementations. The generation is basically free. But the review cost went up because you're reading code you didn't write and don't fully understand. You have to verify everything because the AI is confidently wrong just often enough to be dangerous.
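Here's a minimal illustration of that review cost. This is a hypothetical helper of the kind an assistant might plausibly generate (not taken from the study or any real tool's output): it reads fine at a glance, and the real time goes into the edge-case checks underneath it.

```python
# Hypothetical assistant-generated helper: split a list into
# fixed-size chunks. Looks correct in the two seconds it takes
# to generate; the review is where the minutes go.
def chunks(items, size):
    return [items[i:i + size] for i in range(0, len(items), size)]

# The "10 minutes of reading": checks the generator never wrote.
assert chunks([1, 2, 3, 4, 5], 2) == [[1, 2], [3, 4], [5]]  # trailing partial chunk kept
assert chunks([], 3) == []                                   # empty input is fine
# chunks([1], 0) raises ValueError - size=0 was never handled,
# and nothing in the generated code tells you that.
```

The point isn't that this particular helper is hard to verify; it's that *every* accepted suggestion carries some version of this checklist, and the checklist is the part that was never cheap.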
Survey data shows that 46% of developers don't fully trust AI output - yet only about a third systematically verify it. So we're in a strange middle zone: we know the tools make mistakes, we don't trust them completely, and we're still not checking carefully enough. That's how bugs ship.
The core issue is that AI coding assistants optimized for the wrong bottleneck. GitHub Copilot, Cursor, and the rest focused on making code generation faster. But for experienced developers, the constraint was never typing speed - it was understanding the problem, architecting the solution, and making sure the code actually does what you need.
Junior developers might see different results. If you're still learning patterns and don't have strong architectural opinions, AI suggestions can be genuinely helpful. But for senior engineers, the tools accelerated the cheap part (writing code) and added cost to the expensive part (thinking about code).
Nobody wants to admit this publicly because everyone has staked their productivity story on AI tooling. But if you've noticed your PR velocity went up while your actual feature delivery slowed down, you're not imagining it. You're shipping more code that does less, faster.
