Here's the uncomfortable question nobody wants to ask about AI coding assistants: what happens when AI solving 95% of problems makes engineers worse at handling the 5% it can't?
A new analysis drawing on 1983 research by cognitive psychologist Lisanne Bainbridge argues that we're seeing exactly this pattern with Site Reliability Engineers. Bainbridge's work on industrial process automation demonstrated a paradox: "the more you automate a process, the more critical the human operator becomes during rare moments automation fails." Automation was supposed to eliminate human involvement, but it actually made human intervention more essential while progressively degrading the skills needed to do it well.
We're watching this play out in real time. When AI handles routine incident response, engineers lose practice with the fundamentals. Then when a novel problem emerges that the AI hasn't seen before—which is precisely when deep expertise matters most—the humans are rusty at the skills they need.
The evidence is showing up in adjacent fields. Medical endoscopists who relied on AI for polyp detection saw their detection rate fall from 28% to 22% when working without AI assistance afterward. Pilots depending heavily on autopilot showed measurable declines in situational awareness and manual flying skills, prompting FAA guidance encouraging manual flight practice.
The author of the analysis admits their own instinct now defaults to offloading cognitive work to AI, leaving them with diminished independent problem-solving ability. That's not a hypothetical concern—it's happening to people who are actively aware of the problem and still experiencing it.
Here's what makes this genuinely difficult: the productivity gains from AI assistance are real. Code review goes faster, documentation writes itself, routine bugs get caught automatically. Those efficiency improvements compound. The question is whether we're trading short-term productivity for long-term skill degradation in ways we won't fully understand until the next crisis hits and the AI doesn't know how to handle it.
The technology is impressive. The question is whether we're building a generation of engineers who are great at prompting AI but increasingly unable to solve hard problems independently.
