EVA DAILY

WEDNESDAY, FEBRUARY 25, 2026

TECHNOLOGY | Wednesday, February 25, 2026 at 6:31 PM

'Vibe Coding' Is Flooding Open Source Projects With AI-Generated Spam

Open source maintainers report being overwhelmed by low-quality AI-generated pull requests from developers who don't understand the code they're submitting. Some projects are closing to outside contributions entirely, highlighting the tragedy of the commons when AI tools make it easy to appear competent without being competent.

Aisha Patel


4 hours ago · 4 min read


Photo: Unsplash / Clément Hélardot

Open source maintainers report being overwhelmed by low-quality AI-generated pull requests from developers who don't understand the code they're submitting. The phenomenon, dubbed "vibe coding," has gotten so bad that some projects are closing to outside contributions entirely rather than deal with the flood.

Let me be clear up front: I love AI coding assistants. GitHub Copilot, Cursor, and similar tools are genuinely useful for experienced developers. They speed up boilerplate generation, help with syntax in unfamiliar languages, and can suggest approaches you might not have considered. When used well, they're productivity enhancers.

But there's a dark side that's emerged over the past year. When people use these tools to contribute to projects they don't understand, it creates a tragedy of the commons. The cost of generating a contribution drops to zero - just prompt an AI and submit the pull request. But the cost of reviewing that contribution stays high, because maintainers need to verify the code actually solves the stated problem, doesn't introduce bugs, and fits the project's architecture.

The pattern is consistent across projects: a contributor opens a pull request claiming to fix an issue or add a feature. The code looks superficially reasonable - proper syntax, commented, following basic style conventions. But when maintainers review it closely, problems emerge. The code doesn't actually address the root cause of the issue. It introduces edge cases the contributor clearly didn't think about. Or it works for the specific example but breaks other functionality.

What's particularly frustrating for maintainers is the contributor behavior. When asked questions about their approach or why they made certain decisions, responses are vague or nonsensical. It becomes clear the person submitting the code doesn't understand what it does - they just prompted an AI and copied the result.

This isn't about gatekeeping open source. Maintainers generally welcome contributions from developers at all skill levels. The problem is contributors who don't even attempt to understand what they're submitting. If you can't explain your changes or engage meaningfully with code review feedback, you're adding work, not value.

Some projects have started implementing defenses. Require issues to be discussed before pull requests. Add contribution guidelines that explicitly ban AI-generated code submitted without understanding. Implement coding challenges to verify contributors understand the codebase. But all of these add friction that also discourages legitimate contributors.

The broader issue is that we've created tools that lower the barrier to appearing competent without actually being competent. You can generate code that looks professional without understanding programming fundamentals. You can contribute to projects without understanding their architecture. The gap between "can generate code" and "can engineer solutions" is massive, and AI tools have made it easy to fake the latter.

From an economic perspective, this is a classic negative externality. Contributors bear almost no cost to generate and submit AI code - maybe five minutes of their time. But they impose significant costs on maintainers, who must review, test, and often reject that code. When costs and benefits are misaligned like this, you get market failure.

What's the solution? Better AI tool design that encourages understanding rather than just code generation. Platform features that flag potentially AI-generated contributions for additional scrutiny. Cultural norms that make it clear submitting code you don't understand is not acceptable. All of these are partial solutions to a problem that's fundamentally about human behavior, not just technology.

AI coding tools are powerful and here to stay. But we need to use them as assistants that enhance our understanding, not as black boxes that generate code we ship without comprehension. Open source can survive many challenges, but it can't survive contributors who don't care whether their contributions work - they just want the commit count.
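The "discuss the issue first" defense mentioned above can be partially automated before any human review time is spent. A minimal sketch, assuming a hypothetical helper that scans a pull request's description for a linked issue - the keyword pattern here is illustrative, not any specific platform's API:

```python
import re

# Illustrative check: reject pull requests whose description does not
# reference a previously discussed issue via a "Fixes #N" style keyword.
# The keywords and the helper name are assumptions for this sketch.
ISSUE_REF = re.compile(r"\b(?:fixes|closes|resolves)\s+#\d+\b", re.IGNORECASE)

def references_discussed_issue(pr_body: str) -> bool:
    """Return True if the PR description links an issue with a closing keyword."""
    return bool(ISSUE_REF.search(pr_body or ""))

# A drive-by PR with no linked issue fails the check; a PR tied to a
# discussed issue passes.
print(references_discussed_issue("Refactored utils for clarity."))         # False
print(references_discussed_issue("Fixes #482: handle empty config file"))  # True
```

A project could run a check like this in CI and close non-conforming pull requests automatically, pushing the cost of a low-effort submission back onto the submitter - though, as noted above, any such friction also affects legitimate contributors.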
