
Warning to all Gmail users over new type of attack
These emails are crafted to appear urgent and sometimes from a business. By setting the font size to zero and the text color to white, attackers can insert prompts invisible to users but actionable by Gemini. Marco Figueroa, GenAI bounty manager, demonstrated how such a malicious prompt could falsely alert users that their email account has been compromised, urging them to call a fake 'Google support' phone number provided in to resolve the issue.
To counter these prompt injection attacks, experts recommend that companies configure email clients to detect and neutralize hidden content in message bodies. Additionally, implementing post-processing filters to scan inboxes for suspicious elements like 'urgent messages,' URLs, or phone numbers could bolster defenses against such threats. The trick was uncovered after research, led by Mozilla's 0Din security team, showed proof of one of the attacks last week.
The report demonstrated how Gemini could be fooled into displaying a fake security alert, one that claimed the user's password had been compromised. It looked real but was entirely built by hackers to steal information. The trick works by embedding the prompt in white text that blends into the email background.
So when someone clicks 'summarize this email,' Gemini processes the hidden message, not just the visible text. This type of manipulation, called 'indirect prompt injection,' takes advantage of AI's inability to tell the difference between a user's question and a hacker's hidden message. According to IBM, AI cannot tell the difference, as they both look like text, so AI follows whichever comes first, even if it is malicious.
Security firms like Hidden Layer have shown how an attacker could craft a completely normal-looking message but fill it with hidden codes and URLs, tools designed to fool AI. In one of the cases, hackers sent an email that looked like a calendar invite. But inside the email, hidden commands told Gemini to warn the user about a fake password breach, tricking them into clicking a malicious link.
Google admitted this kind of attack has been a problem since 2024 and said it added new safety tools to stop it, but the trick appears to still be working. In one case, a major security flaw reported to Google showed how attackers could hide fake instructions inside emails that trick Gemini into doing things users never asked for. Instead of fixing the issue, Google marked the report as 'won't fix,' meaning they believe Gemini is working the way it is supposed to.
That decision shocked some security experts, because it basically means Google sees this behavior, not recognizing hidden instructions, as expected, not broken. This means that the door is still open for hackers to sneak in commands that the AI might follow without question. Experts are concerned as if the AI cannot tell the difference between a real message and a hidden attack, and Google would not fix the behavior, then the risk remains active. AI is getting more popular for quick decisions and email summarizer.
It is not just Gmail as the risk spreads as AI is incorporated into Google Docs, Calendar, and outside apps. Cybersecurity experts say some of these attacks are even being created and carried out by other AI systems, not just human hackers. Google has reminded users that it does not issue security alerts through Gemini summaries. So if a summary tells you your password is at risk or gives you a link to click, treat it as suspicious and delete the email.
In a recent blog, Google said that Gemini now ask for confirmation before doing anything risky, like sending an email or deleting something. That extra step gives users a chance to stop the action, even if the AI was tricked. Google also displays a yellow banner if it detects and blocks an attack. If the system finds a suspicious link in a summary, it removes it and replaces it with a safety alert. But some problems still have not been solved.

Try Our AI Features
Explore what Daily8 AI can do for you:
Comments
No comments yet...
Related Articles


Reuters
14 minutes ago
- Reuters
Australia regulator says YouTube, others 'turning a blind eye' to child abuse material
SYDNEY, Aug 6 (Reuters) - Australia's internet watchdog has said the world's biggest social media firms are still 'turning a blind eye' to online child sex abuse material on their platforms, and said YouTube in particular had been unresponsive to its enquiries. In a report released on Wednesday, the eSafety Commissioner said YouTube, along with Apple, failed to track the number of user reports it received of child sex abuse appearing on their platforms and also could not say how long it took them to respond to such reports. The Australian government decided last week to include YouTube in its world-first social media ban for teenagers, following eSafety's advice to overturn its planned exemption for the Alphabet-owned Google's (GOOGL.O), opens new tab video-sharing site. 'When left to their own devices, these companies aren't prioritising the protection of children and are seemingly turning a blind eye to crimes occurring on their services,' eSafety Commissioner Julie Inman Grant said in a statement. 'No other consumer-facing industry would be given the licence to operate by enabling such heinous crimes against children on their premises, or services.' Google has said previously that abuse material has no place on its platforms and that it uses a range of industry-standard techniques to identify and remove such material. Meta - owner of Facebook, Instagram and Threads, three of the biggest platforms with more than 3 billion users worldwide - says it prohibits graphic videos. The eSafety Commissioner, an office set up to protect internet users, has mandated Apple, Discord, Google, Meta, Microsoft, Skype, Snap and WhatsApp to report on the measures they take to address child exploitation and abuse material in Australia. The report on their responses so far found a 'range of safety deficiencies on their services which increases the risk that child sexual exploitation and abuse material and activity appear on the services'. Safety gaps included failures to detect and prevent livestreaming of the material or block links to known child abuse material, as well as inadequate reporting mechanisms. It said platforms were also not using 'hash-matching' technology on all parts of their services to identify images of child sexual abuse by checking them against a database. Google has said before that its anti-abuse measures include hash-matching technology and artificial intelligence. The Australian regulator said some providers had not made improvements to address these safety gaps on their services despite it putting them on notice in previous years. 'In the case of Apple services and Google's YouTube, they didn't even answer our questions about how many user reports they received about child sexual abuse on their services or details of how many trust and safety personnel Apple and Google have on-staff,' Inman Grant said.


Scottish Sun
2 hours ago
- Scottish Sun
Google issues ‘critical' alert over horrifying ‘no touch' hack that hijacks your mobile without you doing ANYTHING
Click to share on X/Twitter (Opens in new window) Click to share on Facebook (Opens in new window) GOOGLE has issued a 'critical' alert to users as hackers launch a 'no touch' scheme to hijack their victim's mobile phones. The tech company is launching an urgent security patch to plug the hole in the security of a major phone brand's technology. Sign up for Scottish Sun newsletter Sign up 2 Google has launched an urgent update to protect Android users Credit: Alamy 2 The update prevents hackers from taking over your phone Credit: Alamy The software firm regularly releases security updates for Android devices, which are designed to cover tech vulnerabilities. The brand's security update covered 34 defects in June, 47 in May and 62 in April. However, the August update covers a major vulnerability which could be devastating to your device. The CVE-2025-48530 update covers a critical remote code execution vulnerability which allows hackers to hijack your mobile without additional execution privileges. That means scammers could access your phone without ever touching it, whenever they want. Also, the update will cover the CVE-2025-22441 and CVE-2025-48533 vulnerabilities. With both of those defects, user interaction isn't needed either which means hackers could strike from anywhere without needing to trick phone owners. None of these vulnerabilities are being actively exploited at the moment, meaning Google has moved at pace to plug the hole in the security of users' devices. Android provides its customers with its own security updates too, to give its devices an added layer of security. The news comes after Google pulled its support for three Android devices, leaving them vulnerable to future cyber flaws and hacking. Over 10 million Android users told to turn off devices after Google exposes 'infection' – exact list of models affected Typically, security support for a device is pulled when it reaches seven years of age. In this latest security cull, Google Pixel 3a, Samsung Galaxy S10 series, and OnePlus 7 will no longer receive security updates from Google. Meanwhile, Google has overhauled its iconic logo as it turns its focus to its Gemini AI technology. The older design featured four main blocks of colour forming the letter G, where the new design sees the colours blurring together. Before this, the brand spelled out its name series of letters painted in green, yellow, red and blue for its logo.


Reuters
2 hours ago
- Reuters
US agency approves OpenAI, Google, Anthropic for federal AI vendor list
WASHINGTON, Aug 5 (Reuters) - The U.S. government's central purchasing arm on Tuesday added OpenAI's ChatGPT, Google's Gemini and Anthropic's Claude to a list of approved artificial intelligence vendors to speed use by government agencies. The move by the General Services Administration, allows the federal government advance adoption of AI tools by making them available for government agencies through a platform with contract terms in place. GSA said approved AI providers "are committed to responsible use and compliance with federal standards."