Anthropic spent $20,000 in API costs building a 100,000-line C compiler using autonomous AI agents. It can compile Linux 6.9, yet some users report it can't compile a simple 'hello world' program without manually specifying include paths. It's a perfect illustration of AI hype versus reality.
Here's what they built: 16 parallel Claude Opus 4.6 instances working together to create a Rust-based C compiler from scratch over two weeks. The system consumed 2 billion input tokens and 140 million output tokens - approximately $20,000 in API costs. That's a fraction of what human developers would cost, and the technical achievement is genuinely impressive.
The compiler successfully builds bootable Linux 6.9 on x86, ARM, and RISC-V architectures. It can compile QEMU, FFmpeg, SQLite, PostgreSQL, and Redis. It achieves a 99% pass rate on GCC torture test suites. You can even run Doom on it, because of course you can - every programming project eventually becomes "can it run Doom?"
But then you look at the GitHub issues, and reality intrudes. Users trying to compile a simple hello world program hit errors. One user notes: "Works if you supply the correct include path(s)." Another points out: "Which you arguably shouldn't even have to do lmao."
That last comment captures something important. A C compiler that needs manual include paths for hello world isn't really a drop-in replacement for GCC. It's an impressive tech demo that highlights both how far AI has come and how far it still has to go.
Anthropic's own documentation is honest about the limitations. The compiler can't generate efficient 16-bit x86 code - it calls out to GCC for that. It doesn't have its own assembler and linker. The generated code is less efficient than GCC with all optimizations disabled. The Rust code quality is
