The Linux kernel project announced it will accept AI-generated code submissions, with the requirement that human contributors take full responsibility for any bugs. The policy shift acknowledges AI's role in development while maintaining human accountability.
Open source's most critical project just blessed AI code generation. But that "full responsibility" clause is doing a lot of heavy lifting.
Can you really be responsible for code you didn't fully write? Suppose an AI tool generates 500 lines of kernel code, you review it, test it, and submit it, and months later someone discovers a security vulnerability in that code. Who's liable? The person who submitted it? The company that made the AI tool? The authors of the training data that taught the AI to write buggy code?
Linux kernel development has always operated on trust and reputation. Contributors build credibility over time, and their code gets merged because maintainers know their work is solid. AI-generated code strains that model: you're no longer evaluating the contributor's own skill, but their ability to prompt an AI and review its output.
The practical reality is that developers are already using AI tools. GitHub Copilot, Cursor, and similar tools are standard in many workflows. Pretending they don't exist won't make them go away. The Linux kernel's policy is pragmatic: acknowledge reality, but make sure humans are accountable.
But here's what makes this legally interesting: if you're "fully responsible" for AI-generated code, you need to understand it at the same level you'd understand code you wrote yourself. That means line-by-line review, comprehensive testing, and a grasp of edge cases and failure modes. For complex kernel code, that's often harder than writing it from scratch.
So what you get is a permission structure that sounds permissive but is actually quite restrictive. Yes, you can submit AI-generated code, as long as you take the same responsibility you'd take for your own code. Which means you probably shouldn't be submitting code you don't fully understand.
The technology is impressive. The question is whether human review can actually keep up. Based on how code review already works in most projects, the answer is probably no.