GitHub is exploring tools to combat a surge of low-quality AI-generated pull requests flooding open source projects—a development that puts the platform in the awkward position of building defenses against problems created by its own AI products.
The issue isn't theoretical. According to one open source maintainer quoted in GitHub's community discussion, only "1 out of 10 PRs created with AI is legitimate." That's not a typo: a ninety percent waste rate.
Camilla Moraes, a GitHub product manager, opened a community forum thread acknowledging "the increasing volume of low-quality contributions" that burden maintainers. The proposed solutions range from straightforward to desperate: disabling pull requests entirely for non-collaborators, adding granular permissions for PR creation, requiring AI disclosure labels, or deploying—wait for it—AI-based triage tools to filter AI-generated submissions.
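To make the "granular permissions plus disclosure labels" idea concrete, here is a minimal sketch of how a maintainer might triage incoming PRs today using metadata shaped like GitHub's REST API pull-request objects. The `ai-disclosure` label name and the collaborator allowlist are assumptions for illustration, not real GitHub features or part of any announced proposal.

```python
# Hypothetical triage sketch. The dict shape loosely follows GitHub's
# REST API pull-request objects ("user", "labels"); the label name and
# the allowlist policy are assumptions, not platform features.

AI_DISCLOSURE_LABEL = "ai-disclosure"  # assumed label name


def needs_triage(pr: dict, collaborators: set[str]) -> bool:
    """Flag a PR for manual review: non-collaborators must carry
    the AI-disclosure label to pass straight through."""
    author = pr["user"]["login"]
    labels = {label["name"] for label in pr.get("labels", [])}
    return author not in collaborators and AI_DISCLOSURE_LABEL not in labels


# Example: a drive-by PR with no disclosure label gets flagged.
incoming = [
    {"user": {"login": "alice"}, "labels": []},                          # collaborator
    {"user": {"login": "driveby"}, "labels": []},                        # flagged
    {"user": {"login": "honest"}, "labels": [{"name": "ai-disclosure"}]} # disclosed
]
flagged = [pr["user"]["login"] for pr in incoming if needs_triage(pr, {"alice"})]
```

A real implementation would fetch the PR list via the API rather than hard-coding it, but the filtering logic would be this simple, which is exactly why label-based gating only works if submitters disclose honestly.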
Yes, you read that correctly. GitHub wants to use AI to detect bad AI code. It's AI all the way down.
The technology is impressive—GitHub Copilot genuinely helps developers write code faster. The question is whether anyone needs a world where it's trivially easy to spam maintainers with machine-generated pull requests that nobody, including the submitter, actually understands.
A Microsoft engineer noted the core trust breakdown: "reviewers can no longer assume authors understand or wrote the code they submit." Code review has always rested on the assumption that the person submitting understood their changes. That social contract is now broken.
Matthew Isabel, a GitHub product manager, emphasized that quality—not AI authorship—is the real metric: "A bad or off-topic PR is a bad PR, regardless of where it came from." Fair enough. But the platform's own tools made it economically viable to create bad PRs at industrial scale.
