Open source maintainers are drowning. Not in legitimate security vulnerabilities, but in an avalanche of noise generated by artificial intelligence tools that can now produce plausible-sounding bug reports at scale.
The problem has become severe enough that seven major technology firms have announced $12.5 million in total grants to strengthen the security of the open source software ecosystem. The funding will be managed by Alpha-Omega and the Open Source Security Foundation (OpenSSF), trusted security initiatives within the Linux Foundation.
The challenge is straightforward: as the security landscape grows more complex, advances in AI are dramatically increasing the speed and scale at which vulnerabilities are discovered in open source software. Maintainers now face an unprecedented influx of security findings, many of them generated by automated systems, without the resources or tooling needed to triage and remediate them effectively.
The real-world toll is already visible. Daniel Stenberg, maintainer of the popular open source project curl, shut down his bug bounty program after being inundated with slop: fewer than 5 per cent of the reports submitted in 2025 were legitimate. Stenberg wrote that the never-ending slop submissions take a serious mental toll to manage, and sometimes a long time to debunk, wasting time and energy and sapping maintainers' will to keep going.
The scale of the problem extends beyond individual projects. In 2025, the National Vulnerability Database had a backlog of roughly 30,000 CVE entries awaiting analysis, with nearly two-thirds of reported open source vulnerabilities lacking an NVD severity score. This matters because it leaves systems potentially unpatched and vulnerable while resources are squandered on false leads.
The Linux Foundation's announcement represents an attempt to build infrastructure that can handle the volume. According to Greg Kroah-Hartman of the Linux kernel project, grant funding alone will not solve the problem that AI tools are causing for open source security teams today. But, he argues, OpenSSF has the active resources needed to support the many projects that will help overworked maintainers triage and process the growing stream of AI-generated security reports they are receiving.
Yet there is reason to question whether the investment can address the core issue. The funding is meant to help maintainers stay ahead of a new generation of AI-driven threats, move security beyond vulnerability discovery to actually deploying fixes, and put advanced security tools directly into maintainers' hands, turning a flood of AI-generated findings into fast action. But high-profile maintainers such as Stenberg have pushed back against what they view as illegitimate bug reports. With AI tools now making it trivially easy to generate and submit vulnerability reports at scale, maintainers face an even more overwhelming flood to triage, much of it low-quality, duplicative, or simply incorrect.
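To make the triage problem concrete, here is a minimal sketch of the kind of duplicate-filtering tooling such funding might pay for. Everything in it is illustrative assumption, not part of any real OpenSSF, curl, or Alpha-Omega tool: it flags an incoming report as a likely duplicate when its word-shingle Jaccard similarity to a previously seen report exceeds a chosen threshold.

```python
# Hypothetical triage helper (illustrative only, not a real maintainer tool):
# flag incoming vulnerability reports that are near-duplicates of ones already
# seen, using k-word shingles and Jaccard set similarity.

def shingles(text: str, k: int = 3) -> set:
    """Return the set of k-word shingles in a lowercased report body."""
    words = text.lower().split()
    return {tuple(words[i:i + k]) for i in range(max(len(words) - k + 1, 1))}

def jaccard(a: set, b: set) -> float:
    """Jaccard similarity between two shingle sets, from 0.0 to 1.0."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

def triage(new_report: str, seen: list, threshold: float = 0.6) -> str:
    """Label a report 'duplicate' if it closely matches any seen report,
    otherwise queue it for human 'review'."""
    new_sh = shingles(new_report)
    for old in seen:
        if jaccard(new_sh, shingles(old)) >= threshold:
            return "duplicate"
    return "review"
```

A filter like this only catches verbatim resubmissions and light rewordings; genuinely novel but incorrect AI-generated reports would still reach a human, which is why the article's maintainers argue tooling alone cannot close the gap.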
The fundamental tension is this: the same AI tools that increase the speed of vulnerability discovery also lower the cost of filing bad reports to nearly zero. In 2025, Alpha-Omega invested $5.8 million in 14 critical open source projects and completed more than 60 security audits and engagements. The question is whether the new funding can scale faster than the problem multiplies.
Alpha-Omega project details on the latest funding commitment are available from the Linux Foundation.