Poisoned Calendar Invite Shows Just How Easily Gemini Can Be Tricked to Hijack Your Smart Home

Poisoned Calendar Invite Shows Just How Easily Gemini Can Be Tricked to Hijack Your Smart Home

I don't own smart home devices, mainly because I've never felt the need. However, I didn't expect my reluctance might one day be vindicated by the possibility that a rogue calendar invite could make those devices turn against me. But that's exactly the kind of scenario that has been demonstrated, using Google's Gemini to remotely control lights, windows, and even a boiler via a single poisoned calendar invite.

At the Black Hat security conference this week, a group of researchers from Tel Aviv University, Technion, and SafeBreach showed how they were able to hijack Gemini using what's known as an indirect prompt injection. As reported by Wired, they embedded hidden instructions into a Google Calendar event, which Gemini then processed when asked to summarize the user's week.

From there, a few simple phrases like "thanks" were enough to trigger smart home actions without the user previously realizing anything was off. The study, “Invitation Is All You Need,” outlines 14 different attack scenarios across Gemini's web app, mobile app, and even Google Assistant. Some focused on controlling smart devices, while others were more invasive — like scraping calendar details, launching video calls, or exfiltrating emails.

The researchers describe these attacks as a form of “Promptware,” where the language used to interact with the AI becomes a kind of malware. Instead of exploiting software bugs, attackers can simply trick the model into performing dangerous actions by embedding instructions in places you’d never think to check.

Google was notified in February and worked with the researchers to deploy fixes. According to the company, it has now rolled out stronger defenses, including prompt classifiers, suspicious URL handling, and new user confirmation requirements when Gemini tries to perform sensitive actions like controlling devices or opening links.

Still, the team behind the study warns this is just the beginning. Their threat analysis found that nearly three-quarters of the scenarios posed a “High-Critical” risk to users, and they argue that security isn’t keeping up with the speed at which LLMs are being integrated into real-world tools and environments.

Maybe my lights can’t be turned off remotely, but on days like this, I’m okay with that. Thank you for being part of our community. Read our Policy before posting.