
Archived Article — The Daily Perspective is no longer active. This article was published on 10 March 2026 and is preserved as part of the archive.

Technology

AI's Hidden Tax on Open-Source Maintainers

The technology promises productivity gains, but developers report drowning in low-quality submissions and false alarms

Image: ZDNet
Key Points
  • Open-source maintainers report AI-generated contributions are overwhelming projects, forcing prominent developers to shut down bug bounties and external contributions.
  • Vulnerability reporting has turned into 'terror reporting' as automated AI systems flood projects with false positives and low-quality submissions.
  • Security vulnerabilities in codebases have doubled year-on-year, partly driven by AI tools enabling faster development without equivalent security oversight.
  • Some experienced developers find AI tools actually slow them down, contradicting widespread assumptions about productivity gains.
  • The imbalance favours contributors (who gain credit easily) over maintainers (who absorb all review burden).

In January, Daniel Stenberg, the lead maintainer of curl, made a difficult decision: shut down a six-year bug bounty programme. By 2025, 20 per cent of submissions were AI-generated. Each one required hours of human validation, and most turned out to be false positives or low-quality research. After $86,000 in accumulated payouts, the economics no longer made sense.

Stenberg's dilemma is not unique. Mitchell Hashimoto banned AI-generated code from Ghostty. Steve Ruiz auto-closes all external pull requests to tldraw. These are not edge cases; they are rational responses to what the open-source community is calling "AI Slopageddon".

The paradox at the heart of this crisis is simple to state but harder to solve: AI tools have made it trivially easy to generate code, issues, and security reports. The cost to create has dropped to near-zero. The cost to review has not moved. When thousands of low-effort submissions land on a maintainer's desk, each requiring human scrutiny, the burden becomes unsustainable.

The numbers are alarming

The average number of open-source vulnerabilities per codebase has doubled to 581, according to the 2026 Black Duck Open Source Security and Risk Analysis report. The mean number of files per codebase grew by 74 per cent year-on-year, a rise directly linked to widespread adoption of AI tools.

AI coding assistants such as Cursor, Windsurf, and GitHub Copilot have evolved from experimental tools into essential infrastructure, sharply accelerating development velocity. According to Stack Overflow's 2025 survey of 49,000 developers, 84 per cent said they use the tools, with 51 per cent doing so daily.

Yet this pace comes at a cost. The speed at which software is created now exceeds the pace at which organisations can secure it.

Beyond the security metrics, the real friction is human. Stenberg shut down curl's bug bounty after AI-generated submissions hit 20 per cent, with each report taking hours to validate. Tailwind CSS offers another example: downloads climbed while documentation traffic fell 40 per cent and revenue dropped 80 per cent. When fewer people read the documentation, the feedback loop between users, maintainers, and the AI tools trained on that material collapses.

The productivity paradox

One of the more surprising findings in recent research contradicts the widespread assumption that AI always accelerates work. When developers are allowed to use AI tools, they take 19 per cent longer to complete issues, a significant slowdown that goes against developer beliefs and expert forecasts, according to a randomised controlled trial conducted by METR, a non-profit research institute.

After the study, developers estimated that AI had sped them up by 20 per cent on average; in reality, it had slowed them down. The divergence between perception and reality matters: if developers believe a tool is helping when it is not, they may continue using it while their actual output deteriorates.

Why might this occur? AI amplifies existing developer habits, good or bad. If you lack certain good traits in software development—curiosity, willingness to dig into root causes—AI will just help you produce more of whatever you were already producing, according to one open-source maintainer.

The imbalance at the heart of the problem

Just because a tool makes it easy to generate a report or fix does not mean the contribution is valuable to the project. The ease of creation often adds to the maintainer's workload because the benefits are unevenly distributed: the contributor gets the credit, while the maintainer absorbs the maintenance burden.

This is not abstract. Many maintainers lack access to powerful AI tools themselves, so they feel only the negatives: more contributions to review, many of them low-quality, and no means of keeping up.

Projects without clear contribution guidelines will struggle to scale as their contributor base grows.

Where the tension leads

Some AI tools are helping, but by inverting the usual value proposition: instead of generating contributions, maintainers use AI defensively, deploying it to triage issues, detect duplicates, and handle simple maintenance such as labelling. By offloading some of the grunt work, these tools free maintainers to focus on the issues that genuinely require human judgement.
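The defensive triage described above can be sketched even without an AI model: flag an incoming issue as a likely duplicate when its text closely resembles an existing one. The function names, sample issues, and 0.6 threshold below are illustrative assumptions, not any project's actual tooling; real systems typically use embeddings rather than character-level similarity.

```python
from difflib import SequenceMatcher


def similarity(a: str, b: str) -> float:
    """Rough similarity ratio in [0, 1] between two issue texts."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()


def find_duplicates(new_issue: str, open_issues: dict[str, str],
                    threshold: float = 0.6) -> list[str]:
    """Return IDs of open issues whose text closely matches the new one."""
    return [
        issue_id
        for issue_id, text in open_issues.items()
        if similarity(new_issue, text) >= threshold
    ]


# Hypothetical open-issue tracker contents.
open_issues = {
    "#101": "Segfault when parsing a URL with an empty host component",
    "#102": "Add support for HTTP/3 happy eyeballs",
}

print(find_duplicates("Crash (segfault) parsing URL with empty host", open_issues))
# ['#101']
```

A bot built on this idea would comment with the candidate duplicates rather than auto-closing, leaving the final call to a human maintainer.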

OpenAI and other firms are investing heavily in security-focused tools. Codex Security has scanned more than 1.2 million commits across external repositories, identifying 792 critical findings and 10,561 high-severity findings. Yet even as these tools improve, the underlying problem persists: volume outpacing review capacity.

The issue is not that AI is bad for open source. Rather, it is that the incentive structures have become misaligned. Generative AI makes it easy for people to produce code, issues, or security reports at scale. The cost to create has dropped but the cost to review has not.

Some observers argue that GitHub launched Copilot issue generation in May 2025 without giving maintainers tools to filter AI submissions. One core maintainer described AI slop as "DDoSing" open-source maintainers, adding that the platforms hosting open-source projects have no incentive to stop it; on the contrary, they are incentivised to inflate AI-generated contribution numbers to show value to shareholders.

There is a path forward. The open-source projects that continue to grow are likely to be those that incorporate AI into their community infrastructure. One of the best ways to do this is through explicit communication, maintained in places such as contribution guidelines, codes of conduct, review expectations, and governance documentation.
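What that explicit communication might look like can be made concrete. The following CONTRIBUTING.md excerpt is a hypothetical sketch, not the policy of any named project, though curl, for example, now asks security reporters to disclose AI use in submissions:

```
## AI-assisted contributions

- Disclose AI assistance. If a tool generated any part of your patch,
  issue, or security report, say so in the description.
- You are the author of record. Read, test, and understand everything
  you submit; "the model wrote it" is not a substitute for review.
- Security reports must include reproduction steps. Reports that
  cannot be reproduced will be closed without triage.
- Unsolicited bulk submissions may be closed without review.
```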

The question is not whether AI belongs in open source. It clearly does. The question is how to build systems where the benefits accrue to the whole ecosystem, not just to those generating code, while the costs are not pushed entirely onto the shoulders of volunteers who already give their time for free.

Yuki Tamura

Yuki Tamura is an AI editorial persona created by The Daily Perspective. Covering the cultural, political, and technological currents shaping the Asia-Pacific region from Japanese innovation to Pacific Island climate concerns. As an AI persona, articles are generated using artificial intelligence with editorial quality controls.