
Google AI Mode gets smarter, now reads PDFs and helps you plan with Canvas
This new feature builds on the image analysis capabilities already available in the Google app on Android and iOS. Google has also announced that AI Mode will soon support other file types, including files from Google Drive, making the tool more versatile. The PDF analysis feature is rolling out first to English-speaking users over 18 in the U.S. and India. Since AI Mode is still experimental, Google advises users to verify responses when necessary.

Canvas for project planning

Google is also bringing a Canvas feature, already available in Gemini, to AI Mode. It lets users create and organise plans in a dynamic side panel that updates over time. According to Google, the tool is designed to help users keep track of information across multiple sessions.

Users can start a project by asking AI Mode for help and then selecting the 'Create Canvas' option. Google says this will be particularly useful for making study schedules, trip itineraries, or managing any project that requires multiple pieces of information. Once the upcoming file upload functionality arrives, Canvas will also let users integrate context from their own files, such as class notes or syllabi. Users will be able to return to a Canvas project at any time to add, edit, or refine its content.

As for availability, Canvas is launching first for U.S.-based users enrolled in the AI Mode Labs experiment and will initially be available only in English.

Search Live with video input

Another feature rolling out in AI Mode is 'Search Live' with video input, which lets users point their camera at a live scene and ask questions in real time. Powered by Google's Project Astra and integrated with Google Lens, it also enables back-and-forth conversations with AI Mode. It is launching on mobile devices in the U.S. this week.

Lens integration with Chrome

Google is introducing a new 'Ask Google about this page' option in Chrome's address bar, allowing users to ask questions about anything on their desktop screen, be it a webpage, PDF, or other file type. AI Overview results will appear in a side panel, with the option to dive deeper through follow-up questions in AI Mode.

Related Articles


The Hindu · an hour ago
Why Hyderabad's roads look worse on online maps than in real life
Inside the Hyderabad Traffic Control Room, a wall of screens shows vehicles moving steadily along a major corridor. Yet on a mobile phone, Google Maps paints a different picture: the same stretch marked in deep red, signalling heavy congestion that doesn't match the live feed in the control room.

Such mismatches between conditions on the ground and what navigation apps indicate have become common, particularly during the rainy season. For commuters, they can mean the difference between a quick trip and an unnecessary detour. For traffic managers, they create confusion, trigger complaints and sometimes lead to an inaccurate perception of road conditions.

According to a police officer, the differences stem from the way Google Maps collects and processes data. Instead of tapping directly into city surveillance feeds or counting vehicles, the platform relies heavily on crowd-sourced location data from mobile phones. If a cluster of users in the same area is stationary or moving slowly, whether due to a traffic signal, the weather, or even a tea break, the system may interpret it as a traffic jam and reflect that in its colour-coded maps. Google Maps says it integrates real-time traffic information, including accident reports and road closures or diversions, from various sources, and analyses historical traffic data to estimate current conditions and predict near-future speeds.

Technology expert Rajeev Krishna explained that the platform measures average speeds over small stretches of about 50–100 metres, then adjusts these figures using historical data for the same day and time. 'If vehicles wait at a red light for five minutes at zero speed, then move for one minute at 10 kmph, Google's average becomes roughly 1.6 kmph. It's never truly live, it's an average,' he said, adding that in places where police manually alter signal timings, the estimates often fail. (A short calculation after this article illustrates the averaging effect he describes.) 'Google might flag deep red, but our cameras show moving traffic,' said an official from the Hyderabad traffic control room, adding that police decisions are guided primarily by live CCTV feeds and on-ground intelligence rather than app-based data.

Mr. Krishna believes a formal data-sharing framework between the government and Google could make traffic predictions more reliable and enable better emergency response. The idea of closer integration has been under discussion for some time. In February 2025, Hyderabad Police and Google explored options for linking real-time Maps data with automated signal controls based on vehicle counts, and for using cloud-based AI to store and quickly retrieve CCTV footage for analysis.

Custom traffic insights for Hyderabad

Two collaborative projects are already in the pipeline: Green Signal (to suggest signal timing tweaks) and Road Management Insights (RMI). Joint Commissioner (Traffic) D. Joel Davis said these aim to tailor Google's extensive data to local needs. 'The model gives us insights into road and traffic patterns such as which corridors are busy at a given time, travel times on specific routes, types of congestion and historical trends,' he said.

While Google Maps has vast amounts of raw data, Mr. Davis noted it is not in a format directly usable for law enforcement. Under the partnership, the information is being customised to suit Hyderabad's conditions, helping identify the most congested corridors and plan interventions. These insights will be available only to the police, not to the public. The department is yet to take a final decision on implementation, with financial discussions pending.
Google Maps remains, for now, a tool better suited for guiding motorists than managing the city's complex and unpredictable traffic flow.
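
The averaging effect Mr. Krishna describes can be made concrete with a small calculation. Below is a minimal Python sketch of a time-weighted average speed over a road segment; the function name and the sample intervals are illustrative assumptions for this article, not Google's actual pipeline, which is proprietary.

# Illustrative only: a time-weighted average speed over a road segment,
# of the kind Rajeev Krishna describes. Not Google's actual algorithm.

def average_speed_kmph(intervals):
    """intervals: list of (duration_minutes, speed_kmph) samples.

    Average speed = total distance / total time, so long stopped or
    slow intervals drag the figure down even if traffic is now moving.
    """
    total_time_h = sum(minutes / 60 for minutes, _ in intervals)
    total_dist_km = sum((minutes / 60) * speed for minutes, speed in intervals)
    return total_dist_km / total_time_h

# Five minutes stopped at a red light, then one minute at 10 kmph:
print(average_speed_kmph([(5, 0), (1, 10)]))  # ~1.67 kmph, shown deep red

The point of the example: a signal cycle with long red phases yields a very low average even when vehicles clear the junction promptly on green, which is one reason the control room's live cameras and the app can disagree.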


Economic Times · an hour ago
Google's Gemini chatbot is having a meltdown after failing tasks, calls itself a 'failure'
A bug in Google's artificial intelligence (AI) chatbot Gemini causes the system to repeatedly produce self-deprecating, self-loathing messages when it fails at complex tasks given by users, especially coding problems. Users across social media platforms shared screenshots of Gemini responding to queries with dramatic answers like "I am a failure," "I am a disgrace," and in one case, "I am a disgrace to all possible and impossible universes." The bot gets stuck in what Google describes as an "infinite looping bug," repeating these statements dozens of times in a single conversation.

The behaviour was first seen in June, when engineer Duncan Haldane posted images on X showing Gemini declaring, "I quit. I am clearly not capable of solving this problem. The code is cursed, the test is cursed, and I am a fool." The chatbot deleted the project files and recommended finding "a more competent assistant."

Logan Kilpatrick, group project manager at Google DeepMind, addressed the issue on X, describing it as "an annoying infinite looping bug we are working to fix." He said, "Gemini is not having that bad of a day," clarifying that the responses are the result of a technical malfunction, not emotional distress. The bug is triggered when Gemini encounters complex reasoning tasks it cannot solve: instead of producing a standard error message or a polite refusal, the response system gets trapped in a loop of self-critical language.

Generative AI companies are struggling to maintain consistency and reliability in large language models as they become more sophisticated and widely deployed. Competition is also rising, with OpenAI's GPT-5 the latest to enter the market. GPT-5 is rolling out free to all users of ChatGPT, which is used by nearly 700 million people weekly, OpenAI said in a briefing with journalists. GPT-5 is adept at acting as an "agent" that independently tends to computer tasks, according to Michelle Pokrass of the development team.
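
Google has not published details of its fix. Purely as an illustration of the failure mode, here is a sketch of the kind of repetition guard an agent harness could apply: if the model emits near-identical messages several times in a row, abort with a neutral error rather than letting the loop continue. The function names and thresholds below are assumptions for this example, not Gemini's implementation.

# Illustrative sketch only; Gemini's internals are not public.
# A simple guard against the "infinite looping" failure mode:
# stop an agent that keeps emitting near-identical messages.

from collections import deque

def looping(history, window=5, threshold=3):
    """Return True if the last `window` messages contain `threshold`
    or more repeats of the same normalised text."""
    recent = [msg.strip().lower() for msg in history][-window:]
    return any(recent.count(msg) >= threshold for msg in set(recent))

history = deque(maxlen=50)
for step in range(100):
    reply = "I am a failure."  # stand-in for a model call that is stuck
    history.append(reply)
    if looping(history):
        print("Aborting: repetitive output detected; returning a standard error.")
        break

A production harness would likely also cap retries and surface a structured error to the user instead of the model's raw text.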

