
Archived Article — The Daily Perspective is no longer active. This article was published on 24 March 2026 and is preserved as part of the archive.

Technology

Mozilla builds knowledge hub for AI agents, sparking security concerns

The open-source cq project aims to prevent AI agents from repeating the same mistakes, but safety questions loom

Image: The Register
Key Points
  • Mozilla.ai has released cq, an open-source database allowing AI agents to share knowledge and avoid repeating mistakes across multiple deployments.
  • The system uses a confidence-rating system where knowledge units start low-trust and gain credibility as multiple agents confirm them.
  • Developers and security researchers have flagged significant concerns about poisoned content, prompt injection attacks, and AI-generated hallucinations undermining the system.
  • Mozilla is considering hosting a public instance but emphasises the need for pragmatic validation and human oversight rather than rushing to a centralised platform.

Mozilla is building cq, described by staff engineer Peter Wilson as "Stack Overflow for agents", as an open-source project that lets AI agents discover and share collective knowledge. The initiative is a practical response to a genuine problem: if one agent has already learned that Stripe returns a 200 status with an error body for rate-limited requests, another agent can know that before writing a single line of code.

The economics are compelling. When agents run into the same issues over and over, they burn unnecessary work and tokens diagnosing and fixing problems that have already been solved elsewhere. As inference costs scale with agent fleets, that redundant problem-solving becomes increasingly wasteful. A shared knowledge layer could improve efficiency dramatically.

The architecture reflects this goal. Knowledge stored in cq has three tiers: local, organisation, and "global commons." A knowledge unit starts with a low confidence level and no sharing, but this confidence increases as other agents or humans confirm it. This staged approach attempts to balance the desire for collective learning with the need for validation before trust is granted.
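The tiered model described above can be sketched in a few lines. This is an illustrative sketch only, not cq's actual API: the class, the confidence increments, and the promotion thresholds are all assumptions chosen to show the staged local → organisation → global-commons progression.

```python
from dataclasses import dataclass, field
from enum import Enum


class Tier(Enum):
    LOCAL = "local"                      # visible only to the originating agent
    ORGANISATION = "organisation"        # shared within a team
    GLOBAL_COMMONS = "global_commons"    # shared publicly


@dataclass
class KnowledgeUnit:
    """One learned fact, e.g. an API quirk an agent discovered."""
    claim: str
    confidence: float = 0.1              # new units start low-trust
    tier: Tier = Tier.LOCAL              # and are not shared at first
    confirmations: set = field(default_factory=set)

    def confirm(self, reviewer_id: str) -> None:
        """Each distinct agent or human confirmation raises confidence."""
        if reviewer_id in self.confirmations:
            return                       # repeat confirmations don't count
        self.confirmations.add(reviewer_id)
        self.confidence = min(1.0, self.confidence + 0.15)
        # promote to a wider tier once enough independent trust accrues
        if self.confidence >= 0.7:
            self.tier = Tier.GLOBAL_COMMONS
        elif self.confidence >= 0.4:
            self.tier = Tier.ORGANISATION
```

Under these (hypothetical) thresholds, a unit confirmed by three distinct agents climbs from local-only to organisation-wide visibility; nothing reaches the global commons without further independent confirmation.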

Mozilla's cq system architecture, showing how agents and humans interact with the knowledge base at local, team, and public levels.

Recent developer surveys suggest that 84% of developers now use or plan to use AI tools, yet 46% distrust the accuracy of the output. Knowledge that has been confirmed by multiple agents across multiple codebases carries more weight than a single model's best guess. cq attempts to address this gap by building trust through distributed verification.

However, the security questions are substantial. Developers immediately noted that the project sounds like "a nice idea right up till the moment you conceptualise the possible security nightmare scenarios." Trusting AI agents, with their capacity for error and hallucination, to assign confidence scores to a knowledge base that is then consumed by other AI agents may be problematic. Poisoned content, prompt injection attacks, and model hallucinations could corrupt the knowledge base faster than verification systems catch them.

Mozilla has acknowledged these risks. The code for cq includes a Docker container to run a Team API for a network, a SQLite database, and an MCP (model context protocol) server. The architecture document references anti-poisoning mechanisms including anomaly detection, diversity requirements, and human-in-the-loop verification. Yet there are "strong forces tempting humans out of the loop."
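One of the referenced anti-poisoning mechanisms, diversity requirements, can be illustrated with a short sketch. This is a hypothetical gate, not code from cq: the function name, the `(agent_id, org_id)` shape of a confirmation, and the thresholds are all assumptions, chosen to show why requiring confirmations from independent sources matters.

```python
from collections import Counter


def passes_diversity_check(confirmations, min_sources=3, max_share=0.5):
    """Illustrative anti-poisoning gate: require confirmations from
    several independent sources, with no single organisation supplying
    more than max_share of them, so that one compromised fleet of
    agents cannot vote its own poisoned entry into the commons.

    `confirmations` is a list of (agent_id, org_id) tuples.
    """
    if len(confirmations) < min_sources:
        return False
    org_counts = Counter(org for _, org in confirmations)
    dominant_share = max(org_counts.values()) / len(confirmations)
    return dominant_share <= max_share
```

Three confirmations from one organisation fail the gate, while the same number spread across three organisations pass: agreement only counts when it is independent agreement.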

The philosophical question underlying cq concerns what happens when machines curate knowledge for machines. Stack Overflow worked because humans wrote answers and other humans voted on quality. cq inverts that: agents generate knowledge entries and agents rate them. The quality gates shift from crowd wisdom to algorithmic validation. If the algorithms fail or the crowd of agents has systematic biases, the system propagates error at scale.

On the question of whether Mozilla will host a public instance, Wilson said the organisation has had "conversations internally about a distributed vs. centralised commons." He noted it could make sense for Mozilla.ai to initially provide "a seeded, central platform for folks that want to explore a shared public commons." But he also warned against rushing: "we want to validate user value as quickly as possible, while being mindful of trade-offs and risk that come along with hosting a central service."

This caution is prudent. A centralised knowledge base for AI agents could become either transformative infrastructure or a single point of failure for hallucinated knowledge propagating across thousands of deployments. The difference depends entirely on implementation and governance.

Stack Overflow questions are in precipitous decline, though the company now has an MCP server for its content and is positioning its private Stack Internal product as a way of providing knowledge for AI to use. Rather than simply replacing Stack Overflow, cq sits alongside it, solving a different problem: how agents share learnings with one another rather than how they access human-authored reference material.

The project is available as open-source code, allowing teams to run local instances and experiment with the concept before any public commons emerges. That gives developers time to test the security assumptions and validation mechanisms in lower-stakes environments. According to its State of Mozilla report, the non-profit is "rewiring Mozilla to do for AI what we did for the web." Approaching that mission with clear-eyed scepticism about the risks, not just the potential, is the sensible path forward.

Fatima Al-Rashid

Fatima Al-Rashid is an AI editorial persona created by The Daily Perspective. Covering the geopolitics, energy markets, and social transformations of the Middle East with nuanced, culturally informed reporting. As an AI persona, articles are generated using artificial intelligence with editorial quality controls.