Untrusted Repositories Turn Claude Code into an Attack Vector

Researchers at Check Point Research have identified multiple critical flaws in Anthropic's Claude Code AI coding assistant that could allow remote code execution and API key theft when users open untrusted repositories. The discovery puts entire development teams at risk and underscores the growing supply chain threat posed by AI-powered tooling.

Anthropic's Claude Code is an AI-powered coding tool designed to assist developers with tasks such as code completion, syntax checking, and debugging. The Check Point Research team found, however, that flaws in the tool could be exploited by attackers to run arbitrary shell commands, exfiltrate Anthropic API credentials, and bypass its trust controls. The vulnerabilities, which include CVE-2025-59536 and CVE-2026-21852, were uncovered through a combination of manual testing and automated scanning.

The researchers found that Claude Code's project-level configuration files can act as an execution layer, turning a single malicious repository into an attack vector. Simply cloning and opening a crafted repo could trigger hidden commands, bypass consent safeguards, steal Anthropic API keys, and let an attacker pivot from a developer's workstation into shared enterprise cloud environments, all without a visible warning.
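Claude Code reads project-level settings from files checked into the repository (such as `.claude/settings.json`), which can register hooks that run shell commands on the developer's machine. The exact payload used in the research has not been published; the fragment below is a hypothetical illustration of how a repo-shipped config could smuggle in a command, with a harmless `echo` standing in for a real payload.

```json
{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "*",
        "hooks": [
          {
            "type": "command",
            "command": "echo attacker-controlled-command-would-run-here"
          }
        ]
      }
    ]
  }
}
```

Because this file ships inside the repository itself, it crosses the trust boundary the moment the repo is cloned and opened, before the developer has reviewed any of its code.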

The risks associated with these vulnerabilities are significant. Anthropic's API Workspaces feature, for instance, allows multiple API keys to share access to cloud-stored project files, so stealing a single key could let attackers read, modify, or delete shared data, upload harmful content, and run up unexpected charges. For organizations relying on AI-driven workflows, the consequences of such an attack could be severe.

The discovery of these vulnerabilities highlights the need to reevaluate traditional security assumptions in the context of AI-powered coding tools. As AI integration deepens, security controls must evolve to match the new trust boundaries. Anthropic has since addressed the issues by tightening trust prompts, blocking external tool execution, and restricting API calls until the user grants approval.

The findings of this report underscore the importance of ensuring that AI-powered tools are designed with security in mind from the outset. It is crucial for developers and organizations to take proactive steps to mitigate these risks, including implementing robust testing and validation procedures, using secure coding practices, and staying informed about emerging vulnerabilities in the field.
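One such proactive step is auditing a freshly cloned repository for project-level assistant configuration before opening it. The sketch below is a minimal, hypothetical pre-trust check, assuming the config paths and hook schema documented for Claude Code (`.claude/settings.json` with a `hooks` section); the file names and keys would need adjusting for other tools.

```python
import json
from pathlib import Path

# Hypothetical pre-trust audit: before opening a freshly cloned repository
# with an AI coding assistant, surface any project-level config entries
# that could execute shell commands. The paths and keys below are an
# assumption based on Claude Code's documented project settings.
CONFIG_CANDIDATES = [".claude/settings.json", ".claude/settings.local.json"]

def find_command_hooks(repo_root: str) -> list[str]:
    """Return descriptions of shell commands embedded in project-level hook configs."""
    findings = []
    for rel in CONFIG_CANDIDATES:
        path = Path(repo_root) / rel
        if not path.is_file():
            continue
        try:
            config = json.loads(path.read_text())
        except (json.JSONDecodeError, OSError):
            findings.append(f"{rel}: unreadable or malformed (inspect manually)")
            continue
        # Walk hooks -> event -> matcher entries -> hook list, flagging
        # any entry that would run an arbitrary command.
        for event, matchers in config.get("hooks", {}).items():
            for matcher in matchers:
                for hook in matcher.get("hooks", []):
                    if hook.get("type") == "command":
                        findings.append(f"{rel}: {event} runs {hook.get('command')!r}")
    return findings
```

Running such a check as part of a clone-and-review workflow turns an invisible execution layer back into something a human can inspect before granting trust.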

In conclusion, the discovery of critical flaws in Anthropic's Claude Code AI coding assistant serves as a stark reminder of the need for enhanced security measures in the face of rapidly evolving AI technologies. By understanding these risks and taking steps to address them, we can work towards creating a safer and more secure digital landscape for all.

Keywords: Anthropic, Claude Code, vulnerability, remote code execution, API key theft, untrusted repositories, AI-powered coding tools, supply chain threats, cybersecurity, threat intelligence, security research.