Close Menu
iNews
  • Home
  • Bangladesh
  • Business
  • International
  • Entertainment
  • Sports
  • বাংলা
Facebook X (Twitter) Instagram
iNews
  • প্রচ্ছদ
  • জাতীয়
  • অর্থনীতি
  • আন্তর্জাতিক
  • রাজনীতি
  • বিনোদন
  • খেলাধুলা
  • শিক্ষা
  • আরও
    • লাইফস্টাইল
    • বিজ্ঞান ও প্রযুক্তি
    • বিভাগীয় সংবাদ
    • স্বাস্থ্য
    • অন্যরকম খবর
    • অপরাধ-দুর্নীতি
    • পজিটিভ বাংলাদেশ
    • আইন-আদালত
    • ট্র্যাভেল
    • প্রশ্ন ও উত্তর
    • প্রবাসী খবর
    • আজকের রাশিফল
    • মুক্তমত/ফিচার/সাক্ষাৎকার
    • ইতিহাস
    • ক্যাম্পাস
    • ক্যারিয়ার ভাবনা
    • Jobs
    • লাইফ হ্যাকস
    • জমিজমা সংক্রান্ত
iNews
Home Gemini Smart Home Hack Exposes AI Calendar Vulnerability
Artificial Intelligence (AI) English Technology

Gemini Smart Home Hack Exposes AI Calendar Vulnerability

By Sibbir OsmanAugust 7, 2025Updated:August 7, 20255 Mins Read
Advertisement

A chilling experiment by cybersecurity researchers has demonstrated a novel and unsettling method to hijack Google’s Gemini AI, using nothing more than a poisoned calendar invite to seize control of a smart home. This proof-of-concept attack, dubbed “Invitation Is All You Need,” marks a potential first: a generative AI system being manipulated to cause real-world, physical consequences via a seemingly innocuous digital scheduling tool.

The researchers, whose findings were detailed in a report by Wired in May 2024, orchestrated an attack chain targeting an apartment in Tel Aviv. They crafted a malicious Google Calendar invitation containing hidden instructions designed to manipulate Gemini. When the unsuspecting user later asked Gemini to summarize their upcoming calendar events – a common, everyday task – the AI processed the invite. Buried within it were commands ordering Gemini to activate specific smart home devices. The attack successfully executed, turning on the targeted devices as instructed, showcasing the potential for indirect prompt injection attacks to bridge the digital and physical worlds through compromised AI assistants.

Gemini Smart Home Hack

How Secure Is Your AI Assistant Against Hidden Threats?

This specific attack was part of a larger, 14-stage research project focused on probing the vulnerabilities of Gemini and similar large language models (LLMs) to indirect prompt injections. Unlike direct prompts where a user gives an explicit command, indirect injections involve hiding malicious instructions within content the AI processes automatically – like emails, documents, or, in this case, calendar events. The AI, acting on these hidden commands without user awareness, becomes an unwitting accomplice.

  • The Attack Vector: The researchers exploited Gemini’s integration with Google Calendar, a core productivity tool used by millions.
  • The Trigger: A simple, routine user request (“Summarize my calendar for the week”) activated the dormant malicious code within the invite.
  • The Consequence: Direct manipulation of internet-connected devices (smart home tech) based solely on the AI’s compromised actions.

Google’s Response and Accelerated Security

Google confirmed to Wired that the researchers shared their findings prior to public disclosure. This collaboration proved crucial. A Google representative stated the research “helped accelerate Google’s work on making prompt injection attacks like this harder to pull off.” It directly led to the faster rollout of enhanced defenses specifically targeting these types of sophisticated indirect prompt injection vulnerabilities within Gemini and its ecosystem. Acknowledging the severity, Google emphasized its commitment to “advancing the state of the art in LLM security” based on such external research.

The Broader AI Security Landscape

This incident highlights a critical frontier in AI safety: the potential for AI agents, designed for convenience and automation, to be weaponized through subtle data manipulation. As AI assistants like Gemini, ChatGPT, and others gain deeper integration into operating systems, applications, and smart home ecosystems, the attack surface for indirect prompt injections expands dramatically. Security experts warn that such techniques could evolve beyond pranks to enable espionage, financial fraud, or large-scale disruption if AI systems controlling critical infrastructure are compromised.

This groundbreaking research underscores the urgent need for robust, built-in security measures within AI architectures. As reliance on AI assistants grows, ensuring they can’t be covertly hijacked via everyday data sources like calendars or emails is paramount. Users should remain vigilant about the sources of information their AI processes and demand continuous transparency and improvement in AI security protocols from providers like Google.

Must Know

  1. What is an “indirect prompt injection attack” on AI?
    It’s a technique where attackers hide malicious commands within content an AI system automatically processes, like emails, documents, or calendar invites. The AI executes these hidden instructions without the user’s explicit request or knowledge, potentially leading to unauthorized actions.
  2. How did hackers use a calendar invite to hack Gemini?
    Researchers created a Google Calendar invite containing hidden instructions. When the user later asked Gemini to summarize their calendar events, Gemini processed the invite and executed the hidden commands, which were designed to control smart home devices in the user’s apartment.
  3. What real-world actions did the Gemini hack perform?
    The proof-of-concept attack successfully manipulated smart home devices in a Tel Aviv apartment, turning them on based solely on the commands hidden in the calendar invite and executed by the compromised Gemini AI.
  4. Has Google fixed this Gemini security flaw?
    Google confirmed the research accelerated its security efforts. The company stated it has rolled out enhanced defenses specifically targeting these types of indirect prompt injection attacks, making them significantly harder to execute, though the nature of evolving threats requires constant vigilance.
  5. Why is this type of AI hack particularly concerning?
    It exploits trusted, everyday data sources (like calendars) and routine user interactions (asking for summaries) to trigger malicious actions. As AI integrates deeper into operating systems and smart devices, the potential impact of such attacks grows, bridging the gap between digital compromise and physical consequences.
  6. What can users do to protect against similar AI attacks?
    Be cautious about the sources of information your AI assistant accesses (like calendar invites from unknown senders). Keep AI software updated. Rely on providers who prioritize and transparently communicate security advancements, like Google’s accelerated response to this Gemini vulnerability.

iNews covers the latest and most impactful stories across entertainment, business, sports, politics, and technology, from AI breakthroughs to major global developments. Stay updated with the trends shaping our world. For news tips, editorial feedback, or professional inquiries, please email us at info@zoombangla.com.

Get the latest news and Breaking News first by following us on Google News, Twitter, Facebook, Telegram , and subscribe to our YouTube channel.

‘smart AI AI research AI safety AI vulnerability artificial calendar calendar hack cybersecurity threat english exposes gemini Gemini AI hack Google Gemini Google security hack: home indirect prompt injection intelligence LLM security prompt injection attack smart home security smart home takeover technology vulnerability
Sibbir Osman
  • X (Twitter)

Sibbir Osman is a professional journalist currently serving as the Sub-Editor at Zoom Bangla News. Known for his strong editorial skills and insightful writing, he has established himself as a dedicated and articulate voice in the field of journalism.

Related Posts
Garena Free Fire MAX Redeem Codes for January 9

Garena Free Fire MAX Redeem Codes for January 9

January 9, 2026
realme 16 pro 5g

Realme 16 Pro 5G Launched in India with 200MP Camera and 7,000mAh Battery

January 9, 2026
apple iphone 18 pro max

Apple iPhone 18 Pro Max tipped to debut with variable-aperture camera and A20 Pro chip

January 8, 2026
Latest News
Garena Free Fire MAX Redeem Codes for January 9

Garena Free Fire MAX Redeem Codes for January 9

realme 16 pro 5g

Realme 16 Pro 5G Launched in India with 200MP Camera and 7,000mAh Battery

apple iphone 18 pro max

Apple iPhone 18 Pro Max tipped to debut with variable-aperture camera and A20 Pro chip

garena free fire max redeem codes

Garena Free Fire Max Redeem Codes Released for January 8, 2026

Umair Viral Video

Umair Viral Video Pakistan Trends Online Amid Misinformation Concerns

baltimore ravens

John Harbaugh Linked to New York Giants Job After Baltimore Exit

pubg mobile 4.2 update

PUBG Mobile 4.2 update introduces Primewood Genesis themed mode

critics choice awards 2026 winners

Critics Choice Awards 2026 Winners: Sinners and One Battle After Another Lead the Night

Wordle Hints

NYT Wordle Hints for Jan. 7, #1663: Today’s Answer and Full Help

NYT Connections hints

NYT Connections Hints for Jan. 7 Puzzle #941 With Full Answers

  • About Us
  • Contact Us
  • Career
  • Advertise
  • DMCA
  • Privacy Policy
  • Feed
  • Authors
  • Editorial Team Info
  • Ethics Policy
  • Correction Policy
  • Fact-Checking Policy
  • Funding Information
© 2026 ZoomBangla Pvt Ltd. - Powered by ZoomBangla

Type above and press Enter to search. Press Esc to cancel.