World

Anthropic Claude Code’s security flaws expose devices to silent hacking, triggered from remote code execution; claims report


Anthropic Claude Code's security flaws expose devices to silent hacking, triggered from remote code execution; claims report

Security researchers claim to have uncovered three security vulnerabilities in Claude Code, Anthropic’s command-line AI tool. These flaws could have allowed attackers to execute code remotely on a developer’s machine or steal sensitive API keys. According to a Check Point report, company’s researchers found and reported all three flaws to Anthropic, which issued fixes for all and CVEs for two. While Anthropic fixed the security flaw, researchers say that the issues illustrate a worrisome supply chain threat as enterprises incorporate AI coding tools like Claude into their development processes and essentially turn configuration files into a new attack surface.The attack vector reportedly relied on a supply chain strategy wherein hackers could inject malicious configurations into public repositories, then simply wait for a developer to clone and open the compromised project. “The ability to execute arbitrary commands through repository-controlled configuration files created severe supply chain risks, where a single malicious commit could compromise any developer working with the affected repository,” Check Point researchers Aviv Donenfeld and Oded Vanunu said in the report.The three security vulnerabilities are said to stem from Anthropic Claude’s design, which is intended to make it easier for development teams to collaborate. The AI coding tool enables this by embedding project-level configuration files (.claude/settings.json file) directly within repositories, so that when a developer clones a project, they automatically apply the same settings used by their teammates.Report says that any contributor with commit access can modify these files. The researchers found that cloning and opening a malicious repository sometimes allowed them to bypass built-in safeguards and trigger hidden commands and execute malicious code.

Abusing Hooks for RCE

The first flaw involved the abuse of Claude’s Hooks feature. Designed to run user-defined shell commands at specific points in the tool’s lifecycle, Hooks were intended to automate routine tasks.However, because these hooks are defined in the .claude/settings.json file—which is part of the repository—an attacker with commit access could embed malicious shell commands into a project. When an unsuspecting developer opened the project, Claude would execute these commands automatically without requesting permission.“An attacker could configure the hook to execute any shell command—such as downloading and running a malicious payload,” the researchers warned, demonstrating the flaw by remotely launching a reverse shell on a victim’s machine. Check Point reported the malicious hooks flaw to Anthropic on July 21, 2025, and the AI maker implemented the final fix about a month later, publishing this GitHub Security Advisory GHSA-ph6w-f82w-28w6 on August 29.

MCP consent bypass bug

The second vulnerability allowed for Remote Code Execution (RCE) by circumventing the Model Context Protocol (MCP) safety prompts. While Anthropic had implemented warnings requiring user approval before running external MCP servers, researchers discovered a workaround.By manipulating two specific repository-controlled settings, the team was able to override these safeguards, causing malicious commands to execute the moment Claude was launched—before the user could even see a trust dialog. This bypass (CVE-2025-59536) essentially rendered the tool’s security prompts useless against a crafted repository.

Redirecting traffic to steal API Keys

The final vulnerability targeted the developer’s credentials. Researchers found they could manipulate the ANTHROPIC_BASE_URL variable within a project’s configuration. Attackers can exploit the third flaw for API key theft. By redirecting this endpoint to an attacker-controlled server, all of Claude’s API traffic—including the plaintext authorization header containing the user’s API key—was exposed.The researchers configured ANTHROPIC_BASE_URL to route through their local proxy, and watched all Claude Code’s API traffic in real time. Every one of Claude’s calls to Anthropic servers “included the authorization header – our full Anthropic API key, completely exposed in plaintext,” they wrote.An attacker could abuse this trick to redirect traffic and steal a developer’s active API key. It’s important because the API includes a feature called Workspaces to help developers manage multiple Claude deployments by allowing multiple API keys to share access to the same cloud-based project files. Files are connected to the workspace – not the single key – and any API key belonging to the workspace also has visibility into any of the workspace’s stored files.



Source link

Related posts

New rules for seeking asylum in the US: Work permits can be paused for ’14 to 173 years’

beyondmedia

Palace unlikely to push back against calls to remove Andrew from line of succession | UK News

beyondmedia

After Andrew’s arrest, King Charles says ‘law must take its course’ in personally signed statement

beyondmedia

Leave a Comment