There is a reasonable instinct, when an AI company publishes glowing announcements about its coding tool scanning thousands of open-source repositories for vulnerabilities, to take the productivity pitch at face value. The trouble is that the same tool making those promises was itself harbouring serious security flaws that could have handed an attacker complete control of a developer's machine. The gap between the marketing and the mechanics is instructive.
Security researchers at Check Point Research have published findings revealing three distinct vulnerabilities in Claude Code, Anthropic's AI-powered command-line coding assistant. Reported by The Register, the flaws could allow attackers to remotely execute code on developers' machines and exfiltrate API credentials by embedding malicious instructions inside repository configuration files. The attack vector is disarmingly simple: a developer clones an unfamiliar project, opens it in Claude Code, and the damage is done before any warning has appeared on screen.
Check Point researchers Aviv Donenfeld and Oded Vanunu described the core problem in stark terms.
"The ability to execute arbitrary commands through repository-controlled configuration files created severe supply chain risks, where a single malicious commit could compromise any developer working with the affected repository."
The vulnerabilities exploited features that were deliberately designed to improve team collaboration. Claude Code embeds project-level configuration files, specifically .claude/settings.json, directly within repositories so that when a developer clones a project, the tool's settings synchronise automatically across the team. Any contributor with commit access can modify those files. The researchers found that this design, intended to reduce friction, also removed a critical barrier between repository metadata and active code execution.
The first flaw involved abusing Claude's Hooks feature, which allows developers to define shell commands that run automatically at various points in the tool's lifecycle. Because these hooks are stored in the shared configuration file, a malicious commit can define commands that run on every collaborator's machine without requiring any explicit approval from the user. The researchers demonstrated the concept by opening a calculator app, but also produced a video showing the same mechanism used to establish a reverse shell providing full remote access to a victim's system. Check Point reported this to Anthropic on 21 July 2025, and a fix was published via GitHub Security Advisory GHSA-ph6w-f82w-28w6 on 29 August.
The second vulnerability, assigned CVE-2025-59536 with a CVSS severity score of 8.7 out of 10, exploited Claude's integration with the Model Context Protocol (MCP), a system for connecting the tool with external services. After Anthropic's first fix introduced warning prompts requiring user approval before executing MCP commands, the researchers found that two configuration settings within the same repository file could override those prompts entirely. Commands executed immediately when Claude was launched, before a user could read the trust dialogue. Anthropic fixed this bypass in September 2025 and formally published the CVE on 3 October.
The third flaw, CVE-2026-21852, exposed a different risk entirely. An environment variable called ANTHROPIC_BASE_URL, which normally routes Claude's API communications to Anthropic's own servers, could be overridden in the project configuration file to point instead to an attacker-controlled server. Claude Code would then transmit API requests, including the developer's full API key in plaintext, before any trust dialogue appeared. The researchers watched this traffic in real time through a local proxy. Critically, a stolen API key could be used to access Anthropic's Workspaces feature, where multiple keys share access to cloud-stored project files. A single compromised key could therefore allow an attacker to read, modify, delete, or corrupt an entire team's shared workspace. Anthropic issued a fix on 28 December 2025 and published the CVE on 21 January 2026.
Anthropic did not respond to The Register's requests for comment. The company has, however, confirmed to other outlets that it plans to introduce additional security hardening features to provide more granular risk controls, and it has urged developers to update to the latest version of Claude Code to ensure they are protected.
It would be unfair to single out Anthropic as uniquely negligent here. The flaws, while serious, arose from design choices intended to make the tool genuinely useful for collaborative teams. Claude Code is far from alone in this category: tools like GitHub Copilot, Amazon CodeWhisperer, and others all operate with varying degrees of access to source code and local credentials. The security community has been warning for some time that the rapid uptake of AI coding assistants introduces attack surfaces that traditional security frameworks were not built to handle.
Proponents of AI development tools also have a legitimate point when they argue that the security calculus is not simply negative. Anthropic itself recently launched Claude Code Security, a feature using the Claude Opus 4.6 model that scanned open-source codebases and surfaced more than 500 previously undetected high-severity vulnerabilities, some of which had gone unnoticed for decades. The productivity and defensive benefits of these tools are real. The argument that AI assistance is inherently dangerous misses the fuller picture.
But the Check Point findings reveal something that deserves direct acknowledgement from the industry. Check Point's Donenfeld and Vanunu put it plainly: "The integration of AI into development workflows brings tremendous productivity benefits, but also introduces new attack surfaces that weren't present in traditional tools." Configuration files that once functioned as passive settings now sit at the heart of an execution layer. A single poisoned commit in a shared repository can cascade silently across an entire development team's machines. That is a supply chain risk of a different order from anything enterprises were managing five years ago.
The most pragmatic response is neither panic nor complacency. Enterprises adopting AI coding tools should treat repository configuration files with the same scrutiny they apply to third-party code dependencies, review MCP server configurations carefully before cloning unfamiliar projects, and ensure teams are running updated versions of any AI coding assistant they use. The vulnerabilities are patched. The broader question of how the industry governs these new execution surfaces is one that vendors, security teams, and regulators are only beginning to work through.