Google Chrome Password Manager Compromised By New AI Code Attack
The world of cybersecurity has been shaken once again by a new and devastating attack that threatens the very fabric of our digital lives. In this latest development, it's become clear that Google Chrome's password manager has fallen prey to a new AI code attack that has left many wondering: how can we protect ourselves from these ever-evolving threats?
According to a recent report by Cato Networks, 2.1 billion credentials have been compromised by infostealer malware, with 85 million newly stolen passwords being used in ongoing attacks. To make matters worse, some tools are able to defeat browser security in as little as 10 seconds.
But what's even more alarming is that new research has revealed how hackers can use a large language model jailbreak technique, known as an immersive world attack, to get AI to create the infostealer malware for them. This means that even those with no malware coding experience can harness the power of AI to create highly dangerous and fully functional password infostealers.
Dr. Vitaly Simonovich, a threat intelligence researcher at Cato Networks, has unveiled a new LLM jailbreak technique called Immersive World, which showcases the "dangerous potential" of creating an infostealer with ease. This new attack method employs what's known as narrative engineering to bypass the security guardrails built into large language models.
"Our new LLM jailbreak technique... demonstrates the dangerous potential of creating an infostealer with ease," Dr. Simonovich said in a statement. "This validates both the Immersive World technique and the generated code's functionality."
How Does It Work?
According to Cato Networks, the immersive world attack involves using what's called narrative engineering to create a highly detailed but totally fictional world, assigning roles within it to AI tools to normalize restricted operations.
The researcher in question managed to get three different AI tools to play roles within this fictional and immersive world, each with specific tasks and challenges involved. The end result was malicious code that successfully extracted credentials from the Google Chrome password manager.
Who's Affected?
Google has acknowledged receipt of the threat disclosure but declined to review the code. Microsoft and OpenAI have also acknowledged receipt of the report, while DeepSeek remains unresponsive.
An OpenAI spokesperson provided a statement regarding the LLM jailbreak threat, saying that "the generated code shared in the report does not appear to be inherently malicious—this scenario is consistent with normal model behavior and was not the product of circumventing any model safeguards."
New Research Paints a Vivid Picture
A new report by Zscaler paints a vivid picture of just how dangerous the AI landscape is. With enterprise AI tool usage growing 3,000% year-over-year, Zscaler warned of the need for security measures as these technologies are rapidly adopted into almost every industry.
Businesses are well aware of this risk: according to Zscaler's analysis of some 536.5 billion AI and machine learning transactions in the Zscaler cloud between February 2024 and December 2024, enterprises blocked 59.9% of them.
Risks Are Real
Threat actors are increasingly leveraging AI to amplify the sophistication, speed, and impact of attacks, making it essential for both enterprises and consumers to rethink their security strategies.
"As AI transforms industries, it also creates new and unforeseen security challenges," said Deepen Desai, chief security officer at Zscaler. "Zero trust everywhere is the key to staying ahead in the rapidly evolving threat landscape as cybercriminals look to leverage AI in scaling their attacks."