Google AI Mode: What it is and how to try it
Google first unveiled generative AI search results last year, with some strange outcomes. 2025 has been no different, with AI Overviews still struggling to answer basic questions.
But the new version of AI-powered search is meant to help users with their "toughest questions." Here's how it works.
AI Mode is Google's new search feature, designed to provide direct answers to search queries. The answers blend AI-generated responses with content from around the internet, which is summarized and linked to its sources.
The new feature uses a custom version of Gemini 2.5, Google's core AI model. Information gathered by the model is distilled into concise answers. AI Mode also accepts multimodal queries: users can ask questions using text, voice, or images.
Robby Stein, Google's VP of Search, said AI Mode is "particularly helpful for questions that need further exploration, comparisons, and reasoning."
"You can ask nuanced questions that might have previously taken multiple searches — like exploring a new concept or comparing detailed options — and get a helpful AI-powered response with links to learn more," said Stein.
Google AI Mode is widely available only in the US so far, though it is also being tested in India. If you're in the US, you can use the tool now in any of three ways:
Head over to the Google homepage and select AI Mode, seen on the right-hand side of the search bar.
Search directly at: google.com/aimode.
Open the Google app on a mobile device and tap AI Mode on the homescreen.
A global rollout may be imminent, though Google hasn't made any announcements yet. Let's see how this new AI-powered search fares.

Related Articles


New York Post
Growing number of teens turn to AI for friends, new study shows — here's why experts are alarmed
It's not a glitch in the matrix: the youngest members of the iGeneration are turning to chatbot companions for everything from serious advice to simple entertainment. For reference, Generation Z spans roughly ages 13 to 28, while Generation Alpha spans ages 0 to 12.

In the past few years, AI technology has advanced so far that users go straight to machine models for just about anything, and Generations Z and Alpha are leading the trend.

Indeed, a May 2025 study by Common Sense Media looked into the social lives of 1,060 teens aged 13 to 17 and found that a startling 52% of adolescents across the country use chatbots at least once a month for social purposes.

Teens who used AI chatbots to exercise social skills said they practiced conversation starters, expressing emotions, giving advice, conflict resolution, romantic interactions and self-advocacy, and almost 40% of these users applied these skills in real conversations later on. Many AI chatbots have been critiqued for being overly sycophantic toward their flesh-and-blood conversation partners, and the study found that younger teens tend to be more trusting of AI companions, while older teens are better educated on the dangers of oversharing with AI.

Despite some potentially beneficial skill developments, the study authors see the cultivation of anti-social behaviors, exposure to age-inappropriate content and potentially harmful advice given to teens as reason enough to caution against underage use.

"No one younger than 18 should use AI companions," the study authors wrote in the paper's conclusion.

The real alarm bells began to ring when the data showed that 33% of users prefer to turn to AI companions over real people for serious conversations, and 34% said that a conversation with a chatbot had caused them discomfort, referring to both subject matter and emotional response.

"Until developers implement robust age assurance beyond self-attestation, and platforms are systematically redesigned to eliminate relational manipulation and emotional dependency risks, the potential for serious harm outweighs any benefits," the study authors warned.

Though AI use is certainly spreading among younger generations (a recent survey showed that 97% of Gen Z has used the technology), the Common Sense Media study found that 80% of teens said they still spend more time with IRL friends than with online chatbots. Rest easy, parents: today's teens do still prioritize human connections, despite popular beliefs.

However, people of all generations are cautioned against consulting AI for certain purposes. As The Post previously reported, AI chatbots and large language models (LLMs) can be particularly harmful for those seeking therapy and tend to endanger those exhibiting suicidal thoughts.

"AI tools, no matter how sophisticated, rely on pre-programmed responses and large datasets," Niloufar Esmaeilpour, a clinical counselor in Toronto, previously told The Post. "They don't understand the 'why' behind someone's thoughts or behaviors."

Sharing personal medical information with AI chatbots can also have drawbacks, as the information they regurgitate isn't always accurate, and, perhaps more alarmingly, they are not HIPAA compliant.
Uploading work documents to get a summary can also land you in hot water, as intellectual property agreements, confidential data and other company secrets can be extracted and potentially leaked.


Fast Company
Your employees may be leaking trade secrets into ChatGPT
Every CEO I know wants their team to use AI more, and for good reason: it can supercharge almost every area of their business and make employees vastly more efficient. Employee use of AI is a business imperative, but as it becomes more common, how can companies avoid major security headaches?

Sift's latest data found that 31% of consumers admit to entering personal or sensitive information into GenAI tools like ChatGPT, and 14% of those individuals explicitly reported entering company trade secrets. Other types of information that people admit to sharing with AI chatbots include financial details, nonpublic facts, email addresses, phone numbers, and information about employers. At its core, the data reveals that people are increasingly willing to trust AI with sensitive information.

This overconfidence with AI isn't limited to data sharing. The same comfort level that leads people to input sensitive work information also makes them vulnerable to deepfakes and AI-generated scams in their personal lives. Sift data found that concern about AI being used to scam someone has decreased 18% in the last year, yet the number of people who admit to being successfully scammed has increased 62% since 2024. Whether it's sharing trade secrets at work or falling for scam texts at home, the pattern is the same: familiarity with AI is creating dangerous blind spots.

The Confidence Trap

In a workplace setting, employees often turn to AI to address a specific problem: looking for examples to round out a sales proposal, pasting an internal email to "punch it up," sharing nonfinal marketing copy for tone suggestions, or disclosing product road map details to a customer service bot to help answer a complex ticket. This behavior often stems from good intentions, whether that's trying to be more efficient, helpful, or responsive.

But as the data shows, digital familiarity can create a false sense of security. The people who think they "get AI" are the ones most likely to leak sensitive data through it or to struggle to identify malicious content. Every time an employee drops nonpublic context into a GenAI tool, they are, knowingly or not, transmitting business-sensitive data into a system that may log, store, or even use it to train future outputs. And if a data leak were ever to occur, a hacker would be privy to a treasure trove of confidential information.

So what should businesses do?

The challenge with this kind of data exposure is that traditional monitoring won't catch it. Because these tools are often used outside of a company's intranet (its internal software network), employees are able to input almost any data they can access. The uncomfortable truth is that you probably can't know exactly what sensitive information your employees are sharing with AI platforms. Unlike a phishing attack, where you can trace the breach, AI data sharing often happens in the shadows of personal accounts.

But that doesn't mean you should ban AI usage outright. Try to infer the scale of the problem with anonymous employee surveys. Ask: What AI tools are you using? For which tasks do you find AI most helpful? And what do you wish AI could do? While an employee may not disclose sharing sensitive information with a chatbot, understanding more generally how your team is using AI can identify potential areas of concern, and potential opportunities.

Instead of trying to track every instance retroactively, focus on prevention. A blanket AI ban isn't realistic and puts your organization at a competitive disadvantage.
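One way to put prevention into practice is to screen prompts before they ever leave the building. The following is a minimal, illustrative sketch, not a production data-loss-prevention system: the `safe_to_submit` helper and its red-line patterns are hypothetical stand-ins you would tune to your own policy.

```python
import re

# Illustrative red-line patterns: email addresses, US-style phone numbers,
# and obvious confidentiality markers. A real DLP layer would go much further.
BLOCKED_PATTERNS = [
    re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),           # email addresses
    re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),  # US-style phone numbers
    re.compile(r"(?i)\b(confidential|internal only|trade secret)\b"),
]

def safe_to_submit(prompt: str) -> bool:
    """Return True only if the prompt matches no red-line pattern."""
    return not any(pattern.search(prompt) for pattern in BLOCKED_PATTERNS)

if __name__ == "__main__":
    draft = "Punch up this INTERNAL ONLY email about our Q3 road map."
    if safe_to_submit(draft):
        print("OK to send to an approved AI tool.")
    else:
        print("Blocked: strip the sensitive details first.")
```

A simple gateway like this will miss plenty, which is why clear guidelines and training still carry most of the weight.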
Establish clear guidelines that distinguish between acceptable and prohibited data types. Set a clear red line on what can't be entered into public GenAI tools: customer data, financial information, legal language, and internal documents. Make it practical, not paranoid.

To encourage responsible AI use, provide approved alternatives. Create company-sanctioned AI workflows for everyday use cases that don't retain data, or use only tools that don't train on any inputs. Make sure your IT teams vet all AI tools for proper data governance; this is especially important because different AI account tiers have different data-retention policies. Vetting also helps employees understand the potential dangers of sharing sensitive data with AI chatbots.

Encourage employee training that addresses both professional and personal AI risks. Provide real-world examples of how innocent AI interactions can inadvertently expose trade secrets, but also educate employees about AI-powered scams they might encounter outside of work. The same overconfidence that leads to workplace data leaks can make employees targets for sophisticated fraud schemes, potentially compromising both personal and professional security.

If you discover that sensitive information has been shared with AI platforms, act quickly, but don't panic. Document what was shared, when, and through which platform. Conduct a risk assessment that asks: How sensitive was the information? Could it compromise competitive positioning or regulatory compliance? You may need to notify affected parties, depending on the nature of the data. Then use these incidents as learning opportunities: review how the incident occurred and identify the necessary safeguards.

While the world of AI chatbots has changed since 2023, there is a lot to learn from a situation Samsung experienced a few years ago, when employees in its semiconductor division shared source code, meeting notes, and test sequences with ChatGPT. This exposed proprietary software to OpenAI and leaked sensitive hardware testing methods. Samsung's response was swift: it restricted ChatGPT uploads to minimize the potential for sharing sensitive information, launched internal investigations, and began developing a company-specific AI chatbot to prevent future leaks. While most companies lack the resources to build chatbots themselves, they can achieve similar protection by using an enterprise-grade account that explicitly opts them out of AI training.

AI can bring massive productivity gains, but that doesn't make its usage risk-free. Organizations that anticipate and address this challenge will leverage AI's benefits while maintaining the security of their most valuable information. The key is recognizing that AI overconfidence poses risks both inside and outside the office, and preparing accordingly.


Entrepreneur
Ready to Get Off the Social Media Hamster Wheel? Discover the Platform That Actually Boosts Your Discoverability
Opinions expressed by Entrepreneur contributors are their own.

The digital space is noisy and fast-moving. For solopreneurs, staying visible is one of the biggest challenges. While many chase quick wins on Instagram or LinkedIn, the most reliable platform for long-term visibility remains YouTube. As artificial intelligence changes how people search for information, YouTube isn't just useful, it's essential.

A 2024 Statista survey found that 63% of Gen Z prefer YouTube over traditional search engines, and 58% turn to TikTok. This signals a major shift in behavior: people want video content that is direct, trustworthy and easy to engage with. And as AI tools like ChatGPT and Gemini become more embedded in everyday searches, video is becoming the format people trust most when they need real answers.

Why YouTube still works

Unlike social platforms where content disappears in hours, YouTube acts more like a searchable library. A single helpful video can be discovered, indexed and recommended for months or even years. That gives solopreneurs a major advantage: a way to stay visible without constantly churning out new content.

AI is supercharging this effect. When someone types a question into tools like Google, ChatGPT or Gemini ("How do I price my services as a freelance designer?" for example), the answer often includes a recommended YouTube video. That video is usually pulled from a channel that's clear, specific and helpful. This creates a major opportunity for small business owners to show up right when potential clients are looking.

Related: With the Rise of AI and Social Media-Driven Search, How Can Businesses Adapt Their SEO Strategies?

My experience: showing up when it matters

I started my YouTube channel, See Your House Now, 14 years ago. We have just over 1,000 subscribers, not a massive number, but that's not the point. People find us when they need us. Our most-watched video is titled "Who is the best real estate photographer in Waterloo Region?" That's a question people search using AI tools, and because of that video, we stand out and get clients.

For us, it's never been about going viral. It's about being in the right place at the right time. Most of our in-person video service inquiries come from people who first found us on YouTube. We also host client real estate tours on the channel, which adds more value. It's a quiet engine for our business, and best of all, it's free.

Five steps to build a YouTube presence that lasts

Start a purpose-driven channel

If you don't already have a channel, start now. Focus your content on the specific questions your audience is asking. Instead of broad topics like "how to grow a business," choose direct, niche-specific titles like "how to get clients as a solo architect" or "how to raise rates as a copywriter." Make each video answer a real question your ideal client would type into a search bar.

Use AI to reverse engineer your titles

You don't need to be tech-savvy to use ChatGPT or Gemini. Ask what questions your audience might be searching, or request optimized title suggestions for your niche. Try prompts like, "What would a real estate agent in Houston search for on YouTube right now?" These insights help build a stronger content calendar that aligns with what people actually want to watch.

Optimize your metadata

Help both people and algorithms understand your videos. Use keyword-rich titles and descriptions, relevant tags, and upload transcripts to boost accessibility and search ranking.
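For those comfortable with a little scripting, metadata can also be set programmatically. The sketch below uses the YouTube Data API v3 via the google-api-python-client library; it is illustrative only, and the video ID, token file, and the specific title, description, and tags are placeholders you would replace with your own.

```python
from google.oauth2.credentials import Credentials
from googleapiclient.discovery import build

# Placeholders: your own video's ID and a stored OAuth token for your channel.
VIDEO_ID = "YOUR_VIDEO_ID"
creds = Credentials.from_authorized_user_file("token.json")

youtube = build("youtube", "v3", credentials=creds)

# Fetch the current snippet first: videos().update replaces the whole
# snippet, so fields like categoryId must be preserved.
video = youtube.videos().list(part="snippet", id=VIDEO_ID).execute()
snippet = video["items"][0]["snippet"]

# Keyword-rich title, description, and tags, per the advice above.
snippet["title"] = "Who is the best real estate photographer in Waterloo Region?"
snippet["description"] = "We answer the exact question clients search for."
snippet["tags"] = ["real estate photography", "Waterloo Region", "property video"]

youtube.videos().update(part="snippet",
                        body={"id": VIDEO_ID, "snippet": snippet}).execute()
```

The same fields can of course be edited by hand in YouTube Studio; the API route simply makes the work repeatable across a back catalog.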
Small details like this help your videos surface when and where they matter most.

Design thumbnails that stand out

Your thumbnail is often the first impression. Keep it bold, simple and consistent. Use a high-quality image of your face, a short phrase in large text, and a clear visual identity so people instantly recognize your videos. Free tools like Canva make this easy; just search for "most popular" templates to get started.

Create a content series

Don't think in one-off videos. Build short series around your core topics. This improves discoverability: no matter how someone phrases a question, they're more likely to land on your channel. A focused series also builds trust faster than scattered videos.

Related: Why Gen Z is Ditching Old Online Platforms for More Control

The future of being found

For solopreneurs, the biggest challenge isn't making great content, it's making sure people find it long after you've posted. YouTube solves that problem. Especially when paired with smart AI tools, it gives your work a longer shelf life and higher impact.

This isn't about chasing trends or going viral. It's about showing up with clarity, consistency and value when your audience is ready. If you create content that answers real questions, YouTube will continue to work for you long after it's posted. And that's the kind of strategy that leads to sustainable growth.

Ready to break through your revenue ceiling? Join us at Level Up, a conference for ambitious business leaders to unlock new growth opportunities.