Google launches AI-powered search engine in 'reimagining' of web tool


The Star · 24-05-2025

LONDON: Google is to begin offering a fully artificial intelligence-powered version of its search engine in what it calls a "total reimagining" of one of the web's foundational features.
The US tech giant will begin the public roll-out of AI Mode in Search in the United States on Tuesday, the company confirmed at its annual I/O developer conference.
The new feature, which will appear as a tab at the top of the search engine, will allow users to ask longer, more complex queries as well as follow-up questions to dive deeper into a topic, Google said.
It builds on the AI Overviews the company already places at the top of many search results, which are AI-generated summaries in response to a query, alongside links to sources.
The firm's chief executive Sundar Pichai said the update was a "total reimagining of search" that would use "advanced reasoning" to think before it answered a user's query.
Pichai said early testers had been asking queries "two to three times, sometimes as long as five times the length of traditional searches."
"It's been a pretty exciting moment for Search," he said.
"People are excited. It's made the web itself more exciting. I think people are engaging a lot more across the board and so it feels like a very positive moment for us."
The announcement came amid a flurry of updates from the company during its I/O conference, with tools utilising the firm's Gemini AI dominating the new products and services unveiled.
This is despite Google and others in the tech sector having a number of high-profile issues with previous AI products over the last year, with generative tools returning inaccurate or misleading results on a number of occasions.
Concerns also remain about the technology's impact on the jobs market, given its potential to replace human workers; the data privacy implications of AI being trained on data scraped from the public web; and safety fears around AI's ability to supercharge misinformation, make cybercriminals more sophisticated or be used to create more dangerous weapons.
Google's latest product revelations also come as the company faces growing questions about the future of its online search business, with some reports suggesting search engines are now being used less as more users turn to AI chatbots – such as OpenAI's ChatGPT – instead.
Other notable updates at Google's event included real-time translation between English and Spanish being introduced to Google Meet video calls, with AI dubbing the translation over the top of the speaker so the other person can understand what is being said instantly.
In addition, Google announced Project Mariner, an AI agent which can be instructed to carry out tasks on the web, such as online shopping or restaurant booking, on behalf of the user.
AI agents, tools capable of carrying out specific tasks autonomously, have been earmarked by many in the industry as the next step in AI tool evolution.
During its I/O keynote, Google also announced it was launching new personal context tools, which use AI to lift relevant data from other Google apps – such as Docs or Calendar – and apply it to its existing Smart Replies tool in Gmail to create personalised automated replies that sound like the user's style of writing.
"What all this progress tells me is that we are now entering a new phase of the AI platform shift," Pichai said.
"We are starting to bring agent capabilities to Chrome, Search and the Gemini app."
"It's a new and emerging area, but we want to get it in the hands of people so that we can explore how to bring the benefits of agents to as many users as possible and get feedback from the broader ecosystem."
"We want to make it as useful as possible, to fit your reality. That's why we are working on something called personal context with your permission."
"Gemini models can use relevant context from across your Google apps in a way that's private, transparent and fully under your control. You can choose to connect what you want and turn off what you don't want," Pichai said.
"One example we are introducing is personalised smart replies in Gmail. It takes Smart Reply a step further, pulling relevant information from Google Docs and past emails; it matches your tone and style and generates the email as you would have written it, personalised."
The array of announcements also included an AI-powered video and audio generation tool, as well as a new product to help filmmakers called Flow.
These new products come despite ongoing concern from the creative industries about the potential impact of AI-generated content on the film, TV and music sectors.
Elsewhere, the company also unveiled a new video conferencing device called Google Beam, which uses AI and six cameras to create a 3D video experience which Google said feels closer to natural, in-person conversations. – dpa/Tribune News Service


Related Articles

AI personal shoppers hunt down bargain buys

Sinar Daily · 4 hours ago

NEW YORK – Internet giants are diving deeper into e-commerce with digital aides that know shoppers' preferences, let them virtually try on clothes, hunt for deals, and even place orders.

The rise of virtual personal shoppers stems from generative artificial intelligence (AI) being deployed in "agents" that specialise in specific tasks and are granted autonomy to complete them independently.

'This is basically the next evolution of shopping experiences,' said CFRA Research analyst Angelo Zino.

Google last week unveiled shopping features built into a new 'AI Mode'. It can take a person's own photo and blend it with that of a skirt, shirt, or other piece of clothing spotted online, showing how it would look on them. The AI adjusts the clothing size to fit, accounting for how fabrics drape, according to Google's Head of Advertising and Commerce, Vidhya Srinivasan.

Shoppers can then set the price they are willing to pay and leave the AI to tirelessly browse the internet for a deal — alerting the shopper when one is found, and asking whether it should proceed with the purchase using Google's payment platform.

'They're taking on Amazon a little bit,' said Techsponential analyst Avi Greengart of Google. The tool is also a way to monetise AI by increasing online traffic and opportunities to display ads, Greengart added. The Silicon Valley tech titan did not respond to a query regarding whether it is sharing revenue from shopping transactions.

Bartering bots?

OpenAI added a shopping feature to ChatGPT earlier this year, enabling the chatbot to respond to requests with product suggestions, consumer reviews, and links to merchant websites. Perplexity AI began allowing subscribers to pay for online purchases without leaving its app late last year. In April, Amazon introduced a 'Buy for Me' mode to its Rufus digital assistant, enabling users to command it to make purchases on retailer websites outside Amazon's own platform.
Walmart's Head of Technology, Hari Vasudev, recently spoke about adding an AI agent to the retail behemoth's online shopping portal, while also working with partners to ensure their digital agents prioritise Walmart products.

Global payment networks Visa and Mastercard both announced in April that their systems had been modernised to enable payment transactions by digital agents.

'As AI agents start to take over the bulk of product discovery and the decision-making process, retailers must consider how to optimise for this new layer of AI shoppers,' said Elise Watson of Clarkston Consulting. Retailers are likely to be left in the dark when it comes to what makes a product attractive to AI agents, according to Watson.

Knowing the customer

Zino does not expect AI shoppers to trigger an upheaval in the e-commerce industry, but he does see the technology benefiting Google and Meta. Not only do the internet rivals possess vast amounts of data about their users, but they are also among the frontrunners in the AI race.

'They probably have more information on the consumer than anyone else out there,' Zino said of Google and Meta.

Technology firms' access to user data touches on the hot-button issue of online privacy and who should control personal information. Google plans to refine consumer profiles based on search activity and promises that shoppers will need to authorise access to additional information, such as emails or app usage.

Trusting a chatbot with purchasing decisions may alarm some users, and while the technology may be in place, the legal and ethical framework is not yet fully developed.

'The agent economy is here,' said PSE Consulting Managing Director Chris Jones. 'The next phase of e-commerce will depend on whether we can trust machines to buy on our behalf.' - AFP

Hey chatbot, is this true? AI's answer: not really, say fact-checkers

Malay Mail · 6 hours ago

WASHINGTON, June 2 — As misinformation exploded during India's four-day conflict with Pakistan, social media users turned to an AI chatbot for verification — only to encounter more falsehoods, underscoring its unreliability as a fact-checking tool.

With tech platforms reducing human fact-checkers, users are increasingly relying on AI-powered chatbots — including xAI's Grok, OpenAI's ChatGPT, and Google's Gemini — in search of reliable information.

'Hey @Grok, is this true?' has become a common query on Elon Musk's platform X, where the AI assistant is built in, reflecting the growing trend of seeking instant debunks on social media. But the responses are often themselves riddled with misinformation.

Grok — now under renewed scrutiny for inserting 'white genocide,' a far-right conspiracy theory, into unrelated queries — wrongly identified old video footage from Sudan's Khartoum airport as a missile strike on Pakistan's Nur Khan airbase during the country's recent conflict with India. Unrelated footage of a building on fire in Nepal was misidentified as 'likely' showing Pakistan's military response to Indian strikes.

'The growing reliance on Grok as a fact-checker comes as X and other major tech companies have scaled back investments in human fact-checkers,' McKenzie Sadeghi, a researcher with the disinformation watchdog NewsGuard, told AFP.

'Our research has repeatedly found that AI chatbots are not reliable sources for news and information, particularly when it comes to breaking news,' she warned.

'Fabricated'

NewsGuard's research found that 10 leading chatbots were prone to repeating falsehoods, including Russian disinformation narratives and false or misleading claims related to the recent Australian election.
In a recent study of eight AI search tools, the Tow Centre for Digital Journalism at Columbia University found that chatbots were 'generally bad at declining to answer questions they couldn't answer accurately, offering incorrect or speculative answers instead.'

When AFP fact-checkers in Uruguay asked Gemini about an AI-generated image of a woman, it not only confirmed its authenticity but fabricated details about her identity and where the image was likely taken.

Grok recently labelled a purported video of a giant anaconda swimming in the Amazon River as 'genuine,' even citing credible-sounding scientific expeditions to support its false claim. In reality, the video was AI-generated, AFP fact-checkers in Latin America reported, noting that many users cited Grok's assessment as evidence the clip was real.

Such findings have raised concerns as surveys show that online users are increasingly shifting from traditional search engines to AI chatbots for information gathering and verification.

The shift also comes as Meta announced earlier this year it was ending its third-party fact-checking program in the United States, turning over the task of debunking falsehoods to ordinary users under a model known as 'Community Notes,' popularized by X. Researchers have repeatedly questioned the effectiveness of 'Community Notes' in combating falsehoods.

'Biased answers'

Human fact-checking has long been a flashpoint in a hyperpolarized political climate, particularly in the United States, where conservative advocates maintain it suppresses free speech and censors right-wing content — something professional fact-checkers vehemently reject.

AFP currently works in 26 languages with Facebook's fact-checking program, including in Asia, Latin America, and the European Union.

The quality and accuracy of AI chatbots can vary, depending on how they are trained and programmed, prompting concerns that their output may be subject to political influence or control.
Musk's xAI recently blamed an 'unauthorized modification' for causing Grok to generate unsolicited posts referencing 'white genocide' in South Africa. When AI expert David Caswell asked Grok who might have modified its system prompt, the chatbot named Musk as the 'most likely' culprit.

Musk, the South African-born billionaire backer of President Donald Trump, has previously peddled the unfounded claim that South Africa's leaders were 'openly pushing for genocide' of white people.

'We have seen the way AI assistants can either fabricate results or give biased answers after human coders specifically change their instructions,' Angie Holan, director of the International Fact-Checking Network, told AFP.

'I am especially concerned about the way Grok has mishandled requests concerning very sensitive matters after receiving instructions to provide pre-authorized answers.' — AFP

