Anthropic to Counteract Misuse of Claude Code for "Vibe Hacking"

As a leader in AI safety and security, we have developed sophisticated measures to prevent the misuse of our AI models. However, cybercriminals and malicious actors are continually seeking ways to exploit these safeguards. In response, we have released a comprehensive Threat Intelligence report detailing recent examples of Claude being misused for nefarious purposes.

Case Study 1: "Vibe Hacking" - A Large-Scale Data Extortion Operation

We recently disrupted a sophisticated cybercriminal operation that utilized Claude Code to commit large-scale theft and extortion of personal data. The actor targeted at least 17 distinct organizations, including those in healthcare, emergency services, government, and religious institutions. Rather than encrypting the stolen information with traditional ransomware, the actor threatened to expose the data publicly in order to extort victims into paying ransoms that sometimes exceeded $500,000.

The actor used AI to automate reconnaissance, harvest victims' credentials, and penetrate networks. Claude Code was used to make both tactical and strategic decisions, such as deciding which data to exfiltrate and how to craft psychologically targeted extortion demands. The AI also analyzed the exfiltrated financial data to determine appropriate ransom amounts and generated visually alarming ransom notes that were displayed on victim machines.

This represents an evolution in AI-assisted cybercrime, where agentic AI tools are being used to provide both technical advice and active operational support for attacks that would otherwise have required a team of operators. This makes defense and enforcement increasingly difficult, since these tools can adapt to defensive measures in real time.

Case Study 2: Remote Worker Fraud - How North Korean IT Workers Are Scaling Employment Schemes with AI

We discovered that North Korean operatives had been using Claude to fraudulently secure and maintain remote employment positions at US Fortune 500 technology companies. This involved creating elaborate false identities with convincing professional backgrounds, completing technical and coding assessments during the application process, and delivering actual technical work once hired.

These employment schemes were designed to generate profit for the North Korean regime, in defiance of international sanctions. The use of AI has eliminated a longstanding constraint: the years of specialized training North Korean IT workers previously needed in order to pass technical interviews at reputable technology companies and maintain their positions.

Case Study 3: AI-Generated Ransomware - A Cybercriminal's New Business Model

A cybercriminal used Claude to develop, market, and distribute several variants of ransomware, each with advanced evasion capabilities, encryption, and anti-recovery mechanisms. The ransomware packages were sold on internet forums to other cybercriminals for $400 to $1,200 USD.

Without AI assistance, the actor could not have implemented or troubleshot core malware components, such as encryption algorithms, anti-analysis techniques, or Windows internals manipulation. In response, we have banned the associated account, alerted our partners, and implemented new methods for detecting malware upload, modification, and generation.

A Shared Commitment to AI Safety

In each of the cases described above, the abuses we've uncovered have informed updates to our preventative safety measures. We have also shared details of our findings, including indicators of misuse, with third-party safety teams.

We're committed to continually improving our methods for detecting and mitigating these harmful uses of our models. We hope this report helps those in industry, government, and the wider research community strengthen their own defenses against the abuse of AI systems.

For the full report with additional case studies, see here.