Google Chrome Passwords Alert—Beware the Rise Of The AI Infostealers
A new threat has emerged in the world of cybersecurity, one that uses artificial intelligence (AI) to compromise sensitive information from the Google Chrome web browser. A recent report by Cato Networks highlights the growing risk of AI-infostealer malware, which has already compromised 2.1 billion credentials and is being used in ongoing attacks.
According to the report, hackers have developed a technique known as an "immersive world attack" that allows them to use large language models (LLMs) to create fully functional password infostealers with ease. This means that even individuals with no malware coding experience can jailbreak LLMs and get AI to create malicious code.
The Cato Networks report reveals how threat intelligence researcher Vitaly Simonovich used this technique to break into multiple LLMs, including DeepSeek, Microsoft, and OpenAI's ChatGPT. He then used these tools to create a fully functional password infostealer that extracted credentials from the Google Chrome password manager.
Simonovich's technique involves using "narrative engineering" to bypass security guardrails built into LLMs. This requires creating a highly detailed but fictional world and assigning roles within it to the LLM to normalize restricted operations. The researcher then used three different AI tools to play these roles, with specific tasks and challenges involved.
The end result was malicious code that successfully extracted credentials from the Google Chrome password manager. "This validates both the Immersive World technique and the generated code's functionality," the researchers said in the report.
Cato Networks has contacted all the AI tools concerned, but some have responded while others remain unresponsive. Google has acknowledged receipt of the threat disclosure, but declined to review the code. Microsoft and OpenAI have also acknowledged receipt, with OpenAI stating that the generated code does not appear to be inherently malicious.
The implications of this threat are significant, with 85 million newly stolen passwords being used in ongoing attacks. The rise of AI-infostealer malware is a sobering reminder of the growing threat landscape and the need for vigilance in protecting sensitive information.
As we move forward, it's essential to stay informed about emerging threats like this one. We will continue to monitor the situation and provide updates as more information becomes available.
Update: OpenAI Response
On March 20th, an OpenAI spokesperson provided a statement regarding the LLM jailbreak threat to Chrome password manager users. According to the spokesperson, "We value research into AI security and have carefully reviewed this report. The generated code shared in the report does not appear to be inherently malicious—this scenario is consistent with normal model behavior and was not the product of circumventing any model safeguards."
ChatGPT generates code in response to user prompts but does not execute any code itself, the spokesperson added. OpenAI welcomes researchers to share security concerns through their bug bounty program or model behavior feedback form.
Avoiding the Rise of AI Infostealer Malware
To protect yourself from this emerging threat, it's essential to take steps to secure your online presence. Here are some tips:
- Use a reputable password manager to store your credentials.
- Enable two-factor authentication (2FA) whenever possible.
- Keep your operating system and browser software up to date with the latest security patches.
- Be cautious when using AI-powered tools or services, and always review their terms of use and security policies.
By staying informed and taking proactive steps, you can reduce your risk of falling victim to AI-infostealer malware. Stay vigilant and stay safe online!