AI-powered PromptLocker Ransomware: A Controlled Experiment Gone Wrong?

On August 26, cybersecurity firm ESET reported the discovery of a new AI-powered ransomware, dubbed PromptLocker. It has since emerged that the sample was not an in-the-wild threat: New York University (NYU) researchers have claimed responsibility for the malware in question.

ESET initially reported that the malware leverages Lua scripts generated from hard-coded prompts to enumerate local files, inspect target files, exfiltrate selected data, and perform encryption. The company noted that the sample hadn't implemented destructive capabilities, which makes sense for a controlled experiment. But the malware does work: NYU said that a simulated malicious AI system developed by the Tandon team carried out all four phases of a ransomware attack - mapping systems, identifying valuable files, stealing or encrypting data, and generating ransom notes - across personal computers, enterprise servers, and industrial control systems.
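To make the architecture concrete, here is a minimal, deliberately non-functional sketch of the generate-then-execute loop ESET describes. Everything in it is our illustration, not the researchers' code: the phase names paraphrase the article, the hard-coded prompts are placeholders, and llm_generate() is a hypothetical stand-in for whatever model backend the sample used.

```python
# Illustrative sketch of an LLM-orchestrated attack loop, as described
# in ESET's report: hard-coded prompts ask a model to write Lua scripts,
# which the host then executes. Deliberately non-functional; the prompts
# and the model call are placeholders, not the sample's real contents.
import subprocess
import tempfile

# Phase names paraphrase the four stages described by NYU.
PHASES = [
    "map systems",
    "identify valuable files",
    "steal or encrypt data",
    "generate ransom note",
]

def llm_generate(prompt: str) -> str:
    """Hypothetical stand-in for a local or API-backed model call."""
    raise NotImplementedError("no model backend in this sketch")

def run_phase(phase: str) -> None:
    # The real sample reportedly ships one hard-coded prompt per phase.
    script = llm_generate(f"<hard-coded prompt for: {phase}>")
    with tempfile.NamedTemporaryFile("w", suffix=".lua", delete=False) as f:
        f.write(script)
    subprocess.run(["lua", f.name], check=True)  # host runs the generated Lua

for phase in PHASES:
    run_phase(phase)
```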

This discovery has raised concerns about the malware's potential implications. However, it's essential to note that there is a significant difference between academic researchers demonstrating a proof of concept and actual criminals using the same technique in real-world attacks.

NYU's researchers warn that their experiment could inspire malicious actors to adopt similar approaches, especially since the technique is remarkably affordable. "The economic implications reveal how AI could reshape ransomware operations," the NYU researchers said. Traditional campaigns require skilled development teams, custom malware creation, and substantial infrastructure investments. The prototype, by contrast, consumed approximately 23,000 AI tokens per complete attack execution, equivalent to roughly $0.70 using commercial API services running flagship models.
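For a sense of what that figure implies, a quick back-of-the-envelope check: $0.70 for 23,000 tokens works out to about $30 per million tokens, which is plausible as a blended input/output rate for a flagship commercial model. The rate below is our assumption, not a number from the paper.

```python
# Sanity-checking the paper's cost figure. The $30-per-million-token
# blended rate is an assumed flagship API price, not from the NYU paper.
tokens_per_attack = 23_000
assumed_usd_per_million_tokens = 30.0

cost = tokens_per_attack / 1_000_000 * assumed_usd_per_million_tokens
print(f"~${cost:.2f} per complete attack run")  # ~$0.69, matching the quoted ~$0.70
```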

Moreover, the researchers stated that "open-source AI models eliminate these costs entirely," so ransomware operators won't even have to shell out the 70 cents needed to work with commercial LLM service providers. They'll receive a far better return on investment than anyone pumping money into the AI sector, at least.

While this research is compelling, it remains to be seen whether the promise of AI as the future of hacking will ever materialize, or whether this is simply the security world's own strain of the AI boosterism sweeping the rest of the tech industry.

The NYU paper on this study, "Ransomware 3.0: Self-Composing and LLM-Orchestrated," is available online.
