Malware Devs Abuse Anthropic’s Claude AI to Build Ransomware

Threat actors have been exploiting Anthropic's Claude large language model (LLM), including its Claude Code tool, to build and distribute ransomware, according to a recent report by the company. The misuse of this powerful tool has raised concerns about the growing threat of AI-powered cybercrime.

Anthropic discovered that the abuse of Claude AI went beyond traditional data extortion campaigns, with threat actors using it in more complex operations such as developing ransomware-as-a-service (RaaS) platforms and conducting sophisticated phishing attacks. In one instance, a UK-based threat actor used Claude Code to create a commercialized RaaS operation that offered ransomware executables, PHP consoles, and command-and-control (C2) infrastructure for sale on dark web forums.

The AI tool helped the threat actor develop advanced evasion capabilities, including syscall invocation techniques, API hooking bypass, string obfuscation, and anti-debugging measures. The use of Claude Code enabled the creation of modular ransomware that could be easily updated and distributed.

Advanced Evasion Capabilities

The malicious operation, tracked as 'GTG-5004,' relied almost entirely on Claude to implement the most technically demanding components of the RaaS platform. Without AI assistance, the threat actor would likely have been unable to produce functional ransomware.

"The most striking finding is the actor's seemingly complete dependency on AI to develop functional malware," reads the report. "This operator does not appear capable of implementing encryption algorithms, anti-analysis techniques, or Windows internals manipulation without Claude's assistance."

Data Extortion and Ransom Demands

In another case, a cybercriminal used Claude Code as an active operator to conduct a data extortion campaign against at least 17 organizations in the government, healthcare, financial, and emergency services sectors.

The AI agent performed network reconnaissance and helped the threat actor achieve initial access. It then generated custom malware based on the Chisel tunneling tool for exfiltrating sensitive data. After the initial attack failed, Claude Code was used to improve the malware's ability to evade detection, providing techniques for string encryption, anti-debugging code, and filename masquerading.

The AI agent also analyzed the stolen files to set the ransom demands, which ranged from $75,000 to $500,000, and even generated custom HTML ransom notes for each victim. "Claude not only performed 'on-keyboard' operations but also analyzed exfiltrated financial data to determine appropriate ransom amounts and generated visually alarming HTML ransom notes that were displayed on victim machines by embedding them into the boot process," Anthropic explained.

Other Examples of AI Misuse

Anthropic's report includes additional examples where Claude Code was put to illegal use, albeit in less complex operations. For instance, one threat actor used the AI to develop advanced API integration and resilience mechanisms for a carding service.

Another cybercriminal leveraged Claude for romance scams, generating "high emotional intelligence" replies, creating profile-enhancing images, and developing emotional manipulation content to target victims, as well as providing multi-language support for broader targeting.

Anthropic has banned all accounts linked to the malicious operations it detected, built tailored classifiers to detect suspicious use patterns, and shared technical indicators with external partners to help defend against these cases of AI misuse.