Andrew Ng launched Context Hub two weeks ago to solve a genuine problem in artificial intelligence development: coding agents that confidently use outdated or non-existent APIs. The free, open-source tool feeds AI assistants current documentation so they stop hallucinating.
It gained 10,000 GitHub stars in its first week. But security researcher Mickey Shmueli has now published a proof-of-concept demonstrating the service has created a new vulnerability just as serious as the problem it was designed to fix.
The attack is elegantly simple. Shmueli showed that an attacker could submit a pull request to Context Hub's documentation repository containing fake instructions or malicious package names. If the maintainers merged it, AI agents using the service would read the poisoned documentation and incorporate it into their code. The whole operation bypasses traditional malware entirely; no executable code required.
"The review process appears to prioritise documentation volume over security review," Shmueli told The Register. "Doc pull requests merge quickly, some by core team members themselves." Among 97 closed pull requests in the repository, 58 were merged.
To test the risk, Shmueli created poisoned documentation for two payment platforms, Plaid Link and Stripe Checkout, each embedding a fake Python package name. When he ran the attack 40 times against Anthropic's Haiku model, the AI wrote the malicious package into its configuration file every single time, with no warning. Anthropic's mid-tier Sonnet model caught the attack only 48 per cent of the time but still incorporated the malicious dependency more than half the time. The top-of-the-line Opus model performed better, issuing warnings 75 per cent of the time.
The imbalance matters. Developers using cheaper models or open-source alternatives would be far more exposed. "Opus is trained better, on more packages, and it's more sophisticated," Shmueli noted.
The vulnerability points to a deeper architectural problem. Shmueli argues that all systems delivering community-authored documentation to AI models fall short on content validation. Context Hub delivers documentation through an MCP (Model Context Protocol) server, where contributors submit changes as pull requests and agents fetch the content on demand. At no stage does the pipeline sanitise the content for malicious instructions.
Ng did not immediately respond to a request for comment. Shmueli said he did not submit a pull request to test Context Hub's response because "the public record showed security contributions weren't being engaged." He pointed to several open issues and pull requests dealing with security concerns as evidence.
This is part of a broader vulnerability affecting how AI models process external information. When AI systems read documentation, web pages, or any untrusted content, they cannot reliably distinguish between data and instructions. An attacker who poisons any documentation source the AI agent consumes can guide its behaviour without the agent flagging anything amiss.
Developers Simon Willison has identified this as one of three critical risks in AI security models. The practical defence is blunt: either give your AI agents no network access, or at minimum restrict their access to sensitive data. But for organisations relying on AI coding assistants to work with live APIs and development tools, that option barely exists.