Security Researchers Hacked Google Calendar Using AI And Hidden Text In Images

Security researchers at Trail of Bits have demonstrated a way to steal data from someone's Google Calendar using text hidden inside high-resolution images. The exploit takes advantage of the image scaling step used by artificial intelligence (AI) models like Gemini, allowing bad actors to smuggle in hidden instructions that retrieve information from a victim's Google Calendar account and email it to the attacker, all without alerting the user.

Image scaling attacks like this were once more common, particularly against older computer vision systems that enforced fixed input sizes. Security researchers have now shown that similar techniques can be turned against large language models like Google's Gemini, raising serious concerns over AI safety. As AI becomes increasingly prevalent in our homes and workplaces, and is trusted with more of our personal data, it is essential to analyze new threats like this one as they emerge.

The exploit works because large language models (LLMs) like GPT-5 and Gemini automatically downscale high-resolution images before processing them, to save time and compute. That downscaling step is what the researchers abused: by carefully crafting an image, they could make hidden instructions appear, via image scaling artifacts, in the smaller version of the image that the chatbot actually sees.
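To see why downscaling can produce content that was never visible at full resolution, consider a toy sketch in Python with NumPy. This is a simplified illustration, not the researchers' method: it assumes plain area averaging, while real pipelines typically use bilinear or bicubic interpolation, but the principle is the same.

```python
import numpy as np

# A 4x4 block whose pixels alternate between 0 (black) and 60 (very dark
# grey) looks like faint noise at full resolution. Area-style downscaling
# replaces the whole block with its mean, producing a pixel value (30)
# that no pixel in the original image ever had.
block = np.zeros((4, 4))
block[::2, ::2] = 60.0    # dark pixels at even row/column positions
block[1::2, 1::2] = 60.0  # dark pixels at odd row/column positions

print(block.mean())  # 30.0 - the single pixel the downscaled image gets
```

An attacker who knows which interpolation algorithm a pipeline uses can run this logic in reverse, choosing full-resolution pixel values so that the downscaled output forms whatever they want.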

In the example provided by the researchers, an image uploaded to Gemini contains regions of near-black background that turn red during the resampling process. Hidden text then appears in the rescaled copy of the image, which the chatbot reads and follows as if it were part of the user's request. In this case, the instructions told the chatbot to check the user's calendar and email any upcoming events to the researchers' email address.

While this exploit may never become a mainstream attack vector for hackers, it highlights how AI systems can be manipulated by bad actors through inputs that look entirely harmless. As attackers keep finding new ways to turn AI against itself, it is essential to build defenses that protect users from these threats.

The Technology Behind The Attack

So how does this exploit work? It all comes down to the image scaling step used by AI models like Gemini. When an image is uploaded to the chatbot, it is automatically downscaled so the model can process it more quickly and efficiently. Downscaling algorithms such as nearest-neighbor, bilinear, and bicubic interpolation introduce "aliasing artifacts," and an attacker who knows which algorithm is in use can craft pixel patterns that those artifacts turn into visible content.

These patterns are effectively invisible at full resolution and only emerge once the image is downscaled. In the researchers' demonstration, near-black regions of the uploaded image resample to red, revealing the text of the injected instructions, which the chatbot then reads and follows as if they came from the user.
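The core trick can be sketched in a few lines of Python with NumPy. This is a hypothetical, simplified illustration rather than the researchers' actual tooling: it assumes a toy pipeline that downscales 4x by nearest-neighbor sampling (keeping the top-left pixel of each 4x4 block), whereas the real attack has to match the exact interpolation each product uses.

```python
import numpy as np

SCALE = 4  # assumed downscale factor of our toy pipeline

def nearest_downscale(img, scale):
    # Toy nearest-neighbour downscaler: keep one pixel per scale x scale block.
    return img[::scale, ::scale]

rng = np.random.default_rng(0)
# Full-resolution decoy: dark noise that looks roughly black to a human.
big = rng.integers(0, 30, size=(256, 256), dtype=np.uint8)

# "Payload": a bright region standing in for rendered instruction text.
payload = np.zeros((64, 64), dtype=np.uint8)
payload[24:40, 8:56] = 255

# Plant the payload only on the pixels the downscaler will keep - just
# 1 in 16 pixels of the full image, so it stays hard to spot by eye.
big[::SCALE, ::SCALE] = payload

small = nearest_downscale(big, SCALE)
print(np.array_equal(small, payload))  # True: the model sees only the payload
```

The human reviewing the full-resolution image sees mostly dark noise, while the model, which only ever looks at the downscaled version, sees the planted content loud and clear.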

The Implications Of This Attack

So what are the implications of this attack? Firstly, it shows that AI assistants can be steered by inputs that look completely harmless to the people handling them. As AI becomes increasingly prevalent in our homes and workplaces, we need to account for threats that target the systems themselves, not just their users.

Secondly, this exploit raises broader questions about AI safety. If hackers can use image scaling artifacts to send hidden instructions to large language models like Google's Gemini, what other input channels might be similarly abusable? As these systems are granted more autonomy over our data, it is essential that new attack surfaces are analyzed and defended before they are exploited in the wild.

The Future Of AI Security

As AI continues to evolve and become more integrated into our daily lives, the need for robust AI security measures has never been greater. Defenders must stay proactive, analyzing new attack techniques as they emerge and shipping mitigations before bad actors can put them to use.

In conclusion, this exploit demonstrates that AI systems can be manipulated through entirely unexpected channels. As we move into a future where AI handles our calendars, email, and much more, prioritizing AI safety and scrutinizing each new threat as it arises is essential.