# Google’s Gmail Warning: A New Wave of Threats Lurks in AI Upgrades
For over two billion Gmail users, a warning has been issued by Google to beware of a new wave of threats that exploit advancements in artificial intelligence (AI) upgrades. This alert comes as a result of a recent report from 0din, Mozilla's zero-day investigative network, which uncovered an attack on Google's AI-powered email client, Gemini for Workspace.
**The Threat: Indirect Prompt Injections**
The vulnerability in question involves "indirect prompt injections," where malicious instructions are hidden within external data sources and visible to the user's AI tools, but not to the user themselves. This type of attack takes advantage of the trust that users have in their AI-powered email clients, convincing them to perform actions that can compromise their security.
**A Confirmed Hack**
The latest proof of concept shows that a hacker was able to inject malicious prompts into an email, which when summarized using Gemini's recent AI upgrades, resulted in a phishing warning that looked as if it came from Google itself. The prompt was hidden using a white-on-white font, making it invisible to the user, but Gemini saw it just fine.
**The Impact**
This vulnerability highlights the risks associated with the increasing adoption of generative AI across various industries. As more governments, businesses, and individuals adopt these technologies, the potential for attacks like this becomes increasingly pertinent.
**How to Protect Yourself**
To avoid falling victim to these types of attacks, Gmail users need to be aware of a few key things:
* Ignore any Google warnings within AI summaries - it's not how Google issues user warnings. * Train users that Gemini summaries are informational, not authoritative security alerts. * Auto-isolate emails containing hidden elements with zero-width or white text.
**The Broader Threat**
0din warns that prompt injections are the new email macros and that until LLMs (Large Language Models) gain robust context-isolation, every piece of third-party text your model ingests is executable code. This means that much tighter controls need to be put in place to prevent such attacks.
**Conclusion**
The recent discovery of this vulnerability serves as a stark reminder of the risks associated with the increasing adoption of generative AI. As we continue to rely on these technologies, it's essential that we take steps to protect ourselves from potential threats like this. By being aware of the latest vulnerabilities and taking proactive measures to secure our data, we can minimize the risk of falling victim to these types of attacks.
Stay vigilant, and keep your devices and data safe!