Boris Cherny, the creator of Claude Code, recently declared that "coding is solved." The internet had questions, starting with the 5,000+ open issues sitting in his own project's GitHub repository.
I've shipped code. I've also shipped AI products. And I can tell you with absolute certainty that if coding were actually solved, the Claude Code issues page wouldn't read like the backlog of a product still very much being debugged by humans.
To be fair to Cherny, context matters. He's the head of Claude Code at Anthropic, and the product has grown dramatically—it now represents 4% of public GitHub commits, and daily active users doubled in the last month. Those are genuinely impressive numbers for a tool that was a terminal-based prototype just a year ago. The technology works, and it's clearly useful.
But "useful" and "solved" are different things. AI coding tools are excellent at generating boilerplate, suggesting completions, and handling routine tasks that used to require Stack Overflow searches and copy-pasting. What they can't do—not yet, anyway—is understand why your company's messy legacy codebase works the way it does, or debug why production went down at 3 AM because of an interaction between three different services that nobody documented.
The phrase "coding is solved" is marketing language pretending to be technical analysis. It's the kind of claim that sounds visionary until you actually try to use AI to fix a real bug in a real codebase with real dependencies and real technical debt. Then you remember that software engineering is about a lot more than generating syntactically correct code.
Interestingly, Cherny briefly left Anthropic for Cursor, a competing AI coding tool, only to return after two weeks. That round trip suggests he believes Claude Code is the right bet. And maybe it is. The tool is clearly powerful, and the growth numbers show it's solving real problems for developers.
