It looks like you’re ransoming data. Would you like some help?

A recent surge in AI-powered ransomware and extortion chatbots has left defenders on high alert, as cybercriminals increasingly leverage these tools to steal sensitive data and extort victim organizations.

These AI-powered threats are redefining the landscape of financially motivated cybercrime, with malicious actors using large language models (LLMs) to craft more realistic phishing emails, create extortion videos, and even deploy malware with unprecedented effectiveness.

The First Known AI-Powered Ransomware: PromptLock

ESET malware researchers Anton Cherepanov and Peter Strýček recently discovered what they called the "first known AI-powered ransomware," dubbed PromptLock.

"It demonstrates that these systems are sophisticated enough to deceive security experts into thinking they're real malware from attack groups," said Md Raz, a doctoral candidate in engineering at New York University.

Anthropic Warns of AI-Powered Data Extortion

Around the same time as ESET's malware hunters spotted PromptLock, Anthropic warned that a cybercrime crew used its Claude Code AI tool in a data extortion operation targeting 17 organizations.

The crims demanded ransoms ranging from $75,000 to $500,000 for the stolen data. Anthropic responded by banning some of the offending accounts, adding a new classifier to its safety pipeline, and sharing details of the operation with partners.

The Threat of Agentic AI: Agents Replacing Affiliates

Cisco Talos' head of outreach Nick Biasini told The Register that malicious actors will soon leverage agentic AI to orchestrate and scale their criminal activities.

"If it is cheaper, easier, and more effective for them to spin up virtual agents that identify and contact prospective victims, they will likely do that," Biasini said.

Ari Redbord: AI-Powered Scams on the Rise

Ari Redbord, global head of policy at blockchain intelligence firm TRM Labs, testified before lawmakers that his company has documented a 456 percent jump in GenAI-enabled scams within the last year.

"What we see AI doing today is supercharging criminal activity that we've seen exist for some time," Redbord told the House Judiciary Subcommittee.

The Future of Ransomware: Criminals Keep Pace on AI

Redbord warned that criminals, like the rest of the world, are rapidly accelerating their AI development.

"We're already seeing ransomware crews experiment with AI across various parts of their operations — maybe not full autonomous agents just yet, but definitely elements of automation and synthetic content being deployed at scale," Redbord said.

Real-World Examples: Global Group and FunkSec Ransomware

Global Group, a new ransomware-as-a-service operation that emerged in June, sends its victims a ransom note directing them to a separate Tor-based negotiation portal where an AI chatbot handles communication.

"Once accessed, the victim is greeted by an AI-powered chatbot designed to automate communication and apply psychological pressure," Picus Security noted in a July report.

The AI integration in this case reduces the ransomware affiliates' workload and moves the negotiation process forward even without human operators.

The Rise of LLMs in Malware Development

Large language models can also help developers write and debug code faster, and that applies to malware development as well.

"FunkSec operators have reportedly provided source code to AI agents and published the generated output, enabling rapid development with minimal technical effort," LevelBlue Labs Director Fernando Martinez said.

Conclusion: Defenders Must Adapt

From PromptLock to Global Group's negotiation bots, AI-powered ransomware and extortion tooling has moved from proof of concept to active criminal use.

As the threat landscape continues to evolve, defenders will need to match that pace, adapting their detection and response capabilities to counter AI-assisted attacks before they become the norm.