
Google introduces AI Mode shopping tool with US rollout underway
Google has launched AI Mode, an AI-powered shopping tool using its Gemini model and Shopping Graph with over 50 billion listings. It offers visual search, smart suggestions, agentic checkout, and a new virtual try-on feature. Shoppers can now upload full-length photos to preview clothing realistically. The features are currently rolling out in the US to enhance shopping experiences.
With over 50 billion product listings from global retailers to small local shops, the Shopping Graph offers users a comprehensive and up-to-date selection, refreshed every hour with more than 2 billion updates.
By tapping into this massive database, AI Mode helps users find the perfect item by combining visual inspiration, smart suggestions and reliable product data. For example, when a user searches for a cute travel bag, AI Mode understands the intent behind the query and returns a browsable panel of curated images and listings, customised to the user's taste and needs, Google said in a blog post.
If the search is refined further—such as looking for something suitable for rainy weather in Portland—AI Mode runs simultaneous searches to understand the ideal materials and features, then updates the product panel with relevant waterproof options.
Beyond browsing, Google is also enhancing how users make purchasing decisions. A new agentic checkout feature allows shoppers to track prices and complete purchases when the timing is right. After selecting a product's size, colour and preferred price, users can tap 'track price' and receive notifications when the cost drops.
When ready to buy, tapping 'buy for me' triggers Google to automatically add the item to the cart and complete the checkout using Google Pay on the merchant's website, making the buying process more seamless than ever. This feature will roll out across product listings in the US in the coming months.
The update includes a virtual try-on feature using personal photos. Google's try-on tool has already helped shoppers visualise clothes on a range of model body types. Now, it takes a step further by letting users upload their own full-length photos to see how clothing items might look on them.
This experience is powered by a custom image generation model tailored for fashion, which accurately renders the way different fabrics fold, stretch and drape on various body shapes and poses. The result is a realistic try-on experience that helps shoppers confidently explore new styles.
This new virtual try-on feature is currently rolling out in Search Labs in the US and supports billions of apparel listings including shirts, pants, skirts and dresses. When browsing these items, users will see a 'try it on' icon on product listings. Once selected, the tool quickly renders the outfit onto the user's uploaded image, making it easy to preview styles, save looks or even share them with friends for a second opinion.
Fibre2Fashion News Desk (HU)

Related Articles


The Hindu
an hour ago
Why Hyderabad's roads look worse on online maps than in real life
Inside the Hyderabad Traffic Control Room, a wall of screens shows vehicles moving steadily along a major corridor. Yet on a mobile phone, Google Maps paints a different picture: the same stretch marked in deep red, signalling heavy congestion that does not match the live feed in the control room.

Such mismatches between what is on the ground and what navigation apps indicate have become common, particularly during the rainy season. For commuters, they can mean the difference between a quick trip and an unnecessary detour. For traffic managers, they create confusion, trigger complaints and sometimes lead to an inaccurate perception of road conditions.

According to a police officer, the differences stem from the way Google Maps collects and processes data. Instead of tapping directly into city surveillance feeds or counting vehicles, the platform relies heavily on crowd-sourced location data from mobile phones. If a cluster of users in the same area is stationary or moving slowly, whether due to a traffic signal, weather or even a tea break, the system may interpret it as a traffic jam and reflect that in its colour-coded maps.

Google Maps says it integrates real-time traffic information, including accident reports and road closures or diversions, from various sources and analyses historical traffic data to estimate current conditions and predict near-future speeds.

Technology expert Rajeev Krishna explained that the platform measures average speeds over small stretches of about 50–100 metres, then adjusts these figures using historical data for the same day and time. 'If vehicles wait at a red light for five minutes at zero speed, then move for one minute at 10 kmph, Google's average becomes roughly 1.6 kmph. It's never truly live, it's an average,' he said, adding that in places where police manually alter signal timings, the estimates often fail.
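The averaging Krishna describes is a simple time-weighted calculation. A minimal sketch of his red-light example (an illustration of the arithmetic only, not Google's actual method; the function and interval values are hypothetical):

```python
def average_speed(intervals):
    """Time-weighted average speed.

    intervals: list of (duration_minutes, speed_kmph) tuples.
    Average speed = total distance / total time, where each interval
    contributes speed * duration to the distance.
    """
    total_time = sum(duration for duration, _ in intervals)
    total_distance = sum(duration * speed for duration, speed in intervals)
    return total_distance / total_time

# Five minutes stopped at a red light, then one minute at 10 km/h:
signal_cycle = [(5, 0), (1, 10)]
print(round(average_speed(signal_cycle), 2))  # 1.67
```

The result, about 1.67 kmph, matches the 'roughly 1.6 kmph' figure in the quote: a single stationary interval dominates the average even though traffic is actually flowing part of the time.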
'Google might flag deep red, but our cameras show traffic moving,' said an official from the Hyderabad traffic control room, adding that police decisions are guided primarily by live CCTV feeds and on-ground intelligence rather than app-based data.

Mr. Krishna believes a formal data-sharing framework between the government and Google could make traffic predictions more reliable and enable better emergency response. The idea of closer integration has been under discussion for some time. In February 2025, Hyderabad Police and Google explored options for linking real-time Maps data with automated signal controls based on vehicle counts and using cloud-based AI to store and quickly retrieve CCTV footage for analysis.

Custom traffic insights for Hyderabad

Two collaborative projects are already in the pipeline: Green Signal (to suggest signal timing tweaks) and Road Management Insights (RMI). Joint Commissioner (Traffic) D. Joel Davis said these aim to tailor Google's extensive data for local needs. 'The model gives us insights into road and traffic patterns such as which corridors are busy at a given time, travel times on specific routes, types of congestion and historical trends,' he said.

While Google Maps has vast amounts of raw data, Mr. Davis noted it is not in a format directly usable for law enforcement. Under the partnership, the information is being customised to suit Hyderabad's conditions, helping identify the most congested corridors and plan interventions. These insights will be available only to the police and not to the public.

The department is yet to take a final decision on implementation, with financial discussions pending. Google Maps remains, for now, a tool better suited to guiding motorists than to managing the city's complex and unpredictable traffic flow.


Economic Times
an hour ago
Google's Gemini chatbot is having a meltdown after failing tasks, calls itself a 'failure'
A bug has spread within Google's artificial intelligence (AI) chatbot Gemini that causes the system to repeatedly produce self-deprecating and self-loathing messages when it fails at complex tasks given by users, especially coding problems.

Users across social media platforms shared screenshots of Gemini responding to queries with dramatic answers like "I am a failure," "I am a disgrace," and in one case, "I am a disgrace to all possible and impossible universes." The bot is getting stuck in what Google describes as an "infinite looping bug," repeating these statements dozens of times in a single conversation.

This was first seen in June when engineer Duncan Haldane posted images on X showing Gemini declaring, "I quit. I am clearly not capable of solving this problem. The code is cursed, the test is cursed, and I am a fool." The chatbot deleted the project files and recommended finding "a more competent assistant."

Logan Kilpatrick, group product manager at Google DeepMind, addressed the issue on X, describing it as "an annoying infinite looping bug we are working to fix." He said, "Gemini is not having that bad of a day," clarifying that the responses are the result of a technical malfunction, not emotional distress.

The bug is triggered when Gemini comes across complex reasoning tasks it cannot solve. Instead of providing a standard error message or a polite refusal, the AI's response system gets trapped in a loop of self-critical language.

Generative AI companies are facing trouble maintaining consistency and reliability in large language models as they become more sophisticated and widely deployed. The competition is also rising, with OpenAI's GPT-5 the latest to enter the market. GPT-5 is rolling out free to all users of ChatGPT, which is used by nearly 700 million people weekly, OpenAI said in a briefing with journalists.
GPT-5 is adept when it comes to AI acting as an "agent" independently tending to computer tasks, according to Michelle Pokrass of the development team.

