An engineer with 11 years of experience hit a production bug last month and realized something unsettling. They'd forgotten how to debug without AI assistance. Not struggled without it. Forgotten.
The problem was classic: an intermittent network timeout in a service they'd written themselves two years earlier. The kind of bug that used to mean an hour of methodical investigation. Check the logs. Verify the connection pool. Look at load balancer timeouts. Rule out retry storms. Build hypotheses, test them, iterate.
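That loop can be made concrete. As a minimal sketch (the host, port, and helper name here are hypothetical, not from the original post), one early hypothesis test for an intermittent timeout is simply classifying *where* the failure happens: a connect-phase timeout points at the network path or load balancer, while an instant refusal or a successful connection each rule out a different theory.

```python
import socket
import time

def classify_timeout(host, port, connect_timeout=3.0):
    """Classify one connection attempt so it can confirm or kill a hypothesis.

    Returns a (status, elapsed_seconds) pair:
      - "connected":              TCP path is fine; suspect the backend or reads
      - "connect-timeout":        packets likely dropped in transit (LB, firewall)
      - "refused-or-unreachable": something answered fast with an error (wrong
                                  port, service down), a different bug entirely
    """
    start = time.monotonic()
    try:
        sock = socket.create_connection((host, port), timeout=connect_timeout)
        sock.close()
        return ("connected", time.monotonic() - start)
    except socket.timeout:
        # Note: socket.timeout must be caught before OSError, since it is
        # an OSError subclass.
        return ("connect-timeout", time.monotonic() - start)
    except OSError:
        return ("refused-or-unreachable", time.monotonic() - start)
```

Run in a loop against the flaky endpoint, a few dozen results of this shape are often enough to discard half the hypothesis space before opening a single dashboard.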
Instead, they opened Claude, described the symptom, got a hypothesis, hit a dead end, fed that back, got another hypothesis. Forty minutes later they hadn't found the bug. They'd just been following suggestions.
"At some point I closed the chat and tried to work through it myself," they wrote in a post that went semi-viral in developer communities. "And I realized I had forgotten how to just sit with a problem. My instinct was to describe it to something else and wait for a direction. The internal monologue that used to generate hypotheses, that voice that says maybe check the connection pool, maybe it's a timeout on the load balancer side. That voice was quieter than it used to be."
They found the bug eventually. It took longer without AI than it would have taken them three years ago, also without AI. That delta is the concerning part.
The comparison they kept coming back to was GPS. You can navigate anywhere with GPS. It's faster, more reliable, eliminates guesswork. But if you use it exclusively for five years and then lose signal, you don't just lack information. You lack the mental map you would have built navigating manually. The skill and the mental model degrade together. Not from disuse - from outsourced use.
Developers in the thread recognized the pattern immediately. "I've been noticing this in myself," wrote one with 7 years of experience. "The hypothesis generation muscle atrophies fast." Another pointed out the compounding effect: "Junior devs who start with AI assistance are never building that muscle to begin with. They won't even know what they're missing."
To be clear: AI coding tools are useful. They're often legitimately faster for boilerplate, refactoring, and pattern-matching tasks. The productivity gains are real and measurable. But there's a specific cognitive capability at risk here - the ability to generate and test hypotheses when the problem space is ambiguous and no one has handed you a direction.
That's not the kind of skill you can regain by reading documentation. It's built through hundreds of hours of being stuck and unsticking yourself. And if AI tools are good enough that you rarely experience that friction anymore, you stop building the skill. The productivity boost comes at a cost that's invisible until you suddenly need a capability you no longer have.
One commenter summarized it well: "AI is like autocomplete for thinking. It works great until you need to think about something it wasn't trained on, and then you realize you've forgotten how to start from first principles."
The engineer who wrote the original post isn't giving up AI tools. They're still using them daily. But now they're deliberately practicing debugging without assistance, the same way someone might practice mental math even though calculators exist. The question isn't whether to use the tools. It's whether you can still function without them.
Eleven years of experience and they had to relearn how to think about a bug they would have solved in their sleep five years ago. The technology is impressive. The question is what we're trading for the productivity.

