I'm using Gemini now for my Gmail and there's one major discovery that's surprising

Tom's Guide, a day ago

Not everything Google does or touches turns to gold.
Case in point: The new Gemini AI addition to Gmail is not all it's cracked up to be. While it works amazingly well when it comes to composing emails and summarizing a thread, the great irony is that most search-related prompts are not even remotely helpful.
(Note: I asked Google reps about my test results and they have not responded.)
I noticed the Gemini icon for the first time just about one week ago and started diving in right away. The AI bot is rolling out to many users with an update that includes new search functions, enhanced smart replies, and a few inbox cleanup prompts.
Before I cover what didn't work for me, let me just say: I can see where this is all heading and I'm mostly pleased with the basic functions, like smart replies and summaries.
I'm used to AI providing some basic help with my email replies, since I've used ChatGPT many times to help me compose and revise emails. Gemini does an exemplary job. When you want help, you can open a sidebar and enter prompts. On my phone or in the browser, I could also ask Gemini to 'polish' my own email, adding more details and context in seconds.

I also really liked the summaries. At the top of the screen, there's a button called 'Summarize this email' and the little star icon for Gemini. You'll see a summary with action steps, and in all of my testing, Gemini was accurate and helpful. I found I didn't have to read back through a thread as much and used Gemini to catch me up on the conversation.
I wasn't here for the smart replies and summaries, though. I've been able to do that with other AI bots for the last three years.
I want an AI that goes much, much further than that with my email: tools that help me understand more than just a single thread. I have around 650,000 emails in my Gmail, and that archive is a treasure trove Gemini could easily explore.
I wanted to be able to find out who emailed me the most in a particular month, see which topics I discussed most often this year, and create a mass email letting the people I interact with the most know that I'd be out for a couple of days in June.
Unfortunately, Gemini seems woefully inadequate and returns incorrect results. When I asked the bot to find the people I emailed the most this year and also in May, the results were not correct.
Gemini listed only two people, and I had barely interacted with them. It's possible Gemini just surfaced the most recent interactions, but I had asked for results from 2025 and from all of May.

When I asked Gemini about the topics I had discussed most often, the AI was blissfully unaware of which emails were just spam sent to me. My prompt was 'Which topics did I discuss and reply to the most in 2025,' and Gemini listed a bunch of email newsletters. That was an error: Gemini was only looking at the emails sent to me most often, not the ones I actually interacted with.

I also asked Gemini to compose an email to the people I interact with the most, explaining that I will be out June 5-6. Once again, Gemini only found the people who emailed me the most.
While the email the bot composed was helpful, I wanted the bot to do the heavy lifting: compose an email with each person in a blind copy, so all I had to do was click send.
Gemini is also supposed to help with inbox cleanup duties, but this was mostly a miss. I asked Gemini for Gmail on my iPhone to look for old emails with large attachments, and the bot showed me every email with an attachment, not the ones with the biggest attachments. They weren't old emails, either; they were all from the current month.
I also asked Gemini to show me the emails with the largest attachments. For some reason, that prompt didn't work. 'I can't help with that' was the response.
This prompt did work, though: 'Show me all emails with an attachment from May 2024.' I was able to quickly delete all of those messages, which was helpful. The problem is that Gemini seemed to work only about 25% of the time when I was trying to clean up my inbox. It's hit or miss.
I really wanted the bot to understand my goals. Inbox cleanup is fine, although anyone who has used Gmail for a while knows we've been able to tame our inboxes with searches for many years.
For example, I can type 'larger:5M after:2024/05/24 before:2025/05/25' to find emails larger than 5MB from the past year. There's also a filter interface to guide you through that process.
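As a rough illustration of what that search operator actually does, here is a minimal Python sketch of the same filter expressed as a predicate over local message metadata. This is a hypothetical stand-in (the `matches_query` helper and the dict layout are mine, not part of Gmail or its API), and Gmail's exact boundary handling for `after:`/`before:` may differ slightly:

```python
from datetime import date

def matches_query(msg, min_bytes=5_000_000,
                  after=date(2024, 5, 24), before=date(2025, 5, 25)):
    """Mimic the query 'larger:5M after:2024/05/24 before:2025/05/25'.

    `msg` is a dict with 'size' (total message size in bytes) and
    'date' (a datetime.date). Hypothetical local sketch, not the
    Gmail API; date boundaries here are exclusive, which is an
    approximation of Gmail's behavior.
    """
    return msg["size"] > min_bytes and after < msg["date"] < before

inbox = [
    {"id": "a", "size": 8_000_000, "date": date(2024, 11, 3)},  # large, in range
    {"id": "b", "size": 1_200_000, "date": date(2024, 11, 3)},  # too small
    {"id": "c", "size": 9_500_000, "date": date(2023, 1, 15)},  # out of range
]

hits = [m["id"] for m in inbox if matches_query(m)]
print(hits)  # ['a']
```

The point is that this filter is purely mechanical, which is why plain Gmail search has handled it reliably for years; the value an assistant could add is translating a vague goal ("old emails with big attachments") into a query like this.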
Instead, I wanted Gemini to be more like a smart assistant. More than anything, Gemini seemed to search only recent emails. In one query, I asked which emails seemed urgent, and the bot only mentioned two from the last week.
I asked which emails had a shipping label attached and the bot only found four, even though there are several dozen from the last two months.
Gemini in Gmail still feels like it's in a testing phase. Google is adding new features and enhancing the AI as time goes on, likely based on feedback and the data it collects. For now, the AI is not really worth it for me, since the results are so unpredictable or outright incorrect.

I expect the technology will improve, but I'll probably be leery of diving in again until it's obvious that Gemini works as expected. I want the bot to make me more productive and to work reliably every time I type in a prompt. We're obviously not there yet.

