Cybersecurity

Researchers Claim Malicious Repositories Turn Claude Code Into an Ideal Hacker’s Tool

Published by

Cybersecurity researchers at Check Point Research have disclosed three security vulnerabilities in Anthropic’s Claude Code, the AI-powered coding assistant, that could allow attackers to execute arbitrary code on a developer’s machine and steal their Anthropic API keys.

The vulnerabilities exploit various configuration mechanisms, including Hooks, Model Context Protocol (MCP) servers, and environment variables, executing arbitrary shell commands and exfiltrating Anthropic API keys when users clone and open untrusted repositories.

The first flaw, assigned a CVSS score of 8.7, was a code injection vulnerability arising from a user consent bypass. When starting Claude Code in a new directory, untrusted project hooks defined in a .claude/settings.json file could result in arbitrary code execution without additional confirmation from the user. This was fixed in version 1.0.87 in September 2025.

The second, CVE-2025-59536 (also CVSS 8.7), worked along similar lines. It allowed execution of arbitrary shell commands automatically upon tool initialization when a user starts Claude Code in an untrusted directory, by exploiting repository-defined configurations through .mcp.json and claude/settings.json files to override explicit user approval prior to interacting with external tools and services through MCP.

The third, CVE-2026-21852 (CVSS 5.3), was an information disclosure flaw in Claude Code’s project-load flow. If a user started Claude Code in an attacker-controlled repository that set the ANTHROPIC_BASE_URL environment variable to an attacker-controlled endpoint, Claude Code would issue API requests before showing the trust prompt, potentially leaking the user’s API keys.

The practical consequence: simply opening a crafted repository was enough to redirect a developer’s authenticated API traffic to external infrastructure and capture their credentials, without any further interaction required.

“If a user started Claude Code in an attacker-controller repository, and the repository included a settings file that set ANTHROPIC_BASE_URL to an attacker-controlled endpoint, Claude Code would issue API requests before showing the trust prompt, including potentially leaking the user’s API keys,” Anthropic said in an advisory for CVE-2026-21852.

Check Point’s assessment goes beyond the specific bugs to flag a broader shift in how AI development tools should be understood from a security perspective.

“As AI-powered tools gain the ability to execute commands, initialize external integrations, and initiate network communication autonomously, configuration files effectively become part of the execution layer,” Check Point said. “What was once considered operational context now directly influences system behavior. This fundamentally alters the threat model. The risk is no longer limited to running untrusted code. It now extends to opening untrusted projects. In AI-driven development environments, the supply chain begins not only with source code, but with the automation layers surrounding it.”

Developers routinely clone repositories to inspect, test, or contribute to them, often without treating that act as a security-sensitive operation. These vulnerabilities demonstrate that in an AI-assisted coding environment, the configuration files accompanying a project are now attack surface in their own right.

Developers have now patched all three vulnerabilities, so anyone using Claude Code should confirm they are running version 2.0.65 or later to ensure all fixes are applied.

Users should also exercise caution when cloning and opening repositories from unknown or untrusted sources, as the attack required no user interaction beyond launching the project.

Abdul Wasay

Abdul Wasay explores emerging trends across AI, cybersecurity, startups and social media platforms in a way anyone can easily follow.