
AI is changing shopping. Will consumers buy in?
Carolyn Bennett remembers flipping through a yellow telephone book in the 1980s to find carpet stores and workers who refinished wood to help renovate her home.
Today, the 67-year-old uses a chatbot to help her shop. Bennett has turned to ChatGPT, which she refers to as "Chat," to find vendors for a kitchen renovation project, compare heat pumps and weigh in on whether she should buy a convection oven.
The San Francisco resident could have browsed websites on Google, but she prefers using ChatGPT to save time.
"Any product that has multiple features that you want to compare across different products, I think it's super helpful," she said.
The rising popularity of artificial intelligence-powered chatbots that can generate text and images is already changing the way people brainstorm ideas, write and research. Tech companies and payment services are also betting that AI will transform how people shop. They're even experimenting with AI agents that can place orders on a customer's behalf with their permission.
Google, Amazon and other major tech platforms envision a future where online shopping becomes even more personalised and proactive. But companies will also have to convince consumers to buy into the idea, assuring them that their privacy is protected and the results are accurate.
AI chatbots have spewed out incorrect or nonsensical information before. And shoppers might be reluctant to give control to AI agents, especially when it comes to handing them their credit card, some retail experts said.
"There's a lot of concern about the reliability of these kinds of tools," said Rachel Wolff, a retail and ecommerce analyst at eMarketer. "So you might not want to trust these agents fully to make decisions on your behalf."
For now, AI shopping experiences are growing. Last month, OpenAI said it's experimenting with new shopping features, including a way to see images and prices of several products along with links for people to buy the items.
Perplexity, which introduced a new feature last year that allows subscribers to buy items through its chatbot, also teamed up with Visa to help improve its shopping experience in the future.
"Visa knows a lot about its customers, and if customers opt in, there can be that anonymised data sharing, so that the recommendations you get in Perplexity are in line with your kind of purchase and transaction history so you can get better quality answers," said Dmitry Shevelenko, Perplexity's chief business officer.
(The Los Angeles Times partners with Perplexity to generate summaries of ideas expressed in opinion pieces.)
These efforts are still early, but AI companies are trying to differentiate themselves from rivals such as Amazon and Google, which have their own chatbots and AI shopping features. Both Perplexity and OpenAI note that the products shown within their chatbots are not ads. The chatbots cite websites that review and rate mattresses, coffee makers and other products.
Google is also stepping up its AI shopping features as it competes with OpenAI. Last week, the search giant said that in the coming months people will be able to use AI Mode, a tool where people can ask questions and get answers as they would with a chatbot, to find and compare products. The tool is powered by Google's AI model Gemini.
Vidhya Srinivasan, who leads the Ads and Commerce teams at Google, said in a press briefing before Google's annual I/O developers conference that the company displays search results in AI mode based on what's most relevant to questions people are asking.
Some of the results also highlight reviews from websites. Google has more than 50 billion product listings, and that information is refreshed regularly.
"We're doing even more personalisation in this mode, where we get to personalise based on brands and styles," Srinivasan said.
The Mountain View-based company is exploring and experimenting with ads in its AI shopping experiences. Google also unveiled other AI shopping tools, including a way to try on clothes virtually and buy products when prices fall.
Visa executives say they envision a future in which AI agents will book plane tickets, hotel rooms and other services and products on behalf of the customer with their approval.
Rajat Taneja, president of technology for Visa, said people will be able to set limits on what an AI agent can purchase, much as they might when handing a credit card to a friend, family member or assistant to help them shop.
The San Francisco-based payment company, which is partnering with AI companies such as OpenAI, Perplexity and Anthropic, unveiled an initiative in late April to enable AI agents to shop and buy products for people, but that work is still being tested.
Product recommendations, he said, will only get more personalised in the future.
"There are going to be different ways in which this will manifest itself, much like the internet has evolved in so many different ways," Taneja said. "The most important thing is we are all unique, in our likes, in our dislikes, in what we gravitate towards and what we buy."
Consumers are already using generative AI for shopping, research shows. Adobe Analytics, which surveyed 5,000 U.S. consumers, said that 39% reported using generative AI for online shopping and 53% planned to do so this year. Shoppers used generative AI for research, product recommendations, deals and other shopping tasks, according to the survey.
Capgemini Research Institute, which surveyed 12,000 adult consumers across 12 countries, found that 24% of consumers used generative AI in shopping experiences. Use was higher among Gen Z and millennials than among Gen X and boomers. But the survey also found that consumer satisfaction with generative AI fell.
Elliot Padfield, a 21-year-old growth marketing consultant in San Francisco, uses AI for other tasks, but he says the shopping experience has fallen short. As a result, he doesn't always trust a chatbot's recommendations.
When he tried shopping on Perplexity for the first time, his order never arrived, though he was able to get a refund.
And while chatbots can provide a comparison of four types of wireless headphones, for example, he wants more information about how the recommendations fit his needs and priorities.
"I still have to guide the AI through supporting me in the way that I need it to," he said. "I actually find it easier at that point to then just go to the retailer."
Whether consumers go to the mall, shop on websites or buy through social media, retail experts see generative AI as just another option for them.
Retailers will have to learn how to navigate chatbots that might not recommend their products. But AI could also level the playing field for small businesses, experts say, if the results aren't based on optimising for a search engine or buying a ton of ads.
Caroline Reppert, director of AI and Technology Policy at the National Retail Federation, said she thinks generative AI is here to stay. Ultimately retailers will meet consumers where they are, she said. – Los Angeles Times/Tribune News Service

Related Articles


Sinar Daily
3 hours ago
AI personal shoppers hunt down bargain buys
NEW YORK – Internet giants are diving deeper into e-commerce with digital aides that know shoppers' preferences, let them virtually try on clothes, hunt for deals, and even place orders. The rise of virtual personal shoppers stems from generative artificial intelligence (AI) being deployed in "agents" that specialise in specific tasks and are granted autonomy to complete them independently.

'This is basically the next evolution of shopping experiences,' said CFRA Research analyst Angelo Zino.

Google last week unveiled shopping features built into a new 'AI Mode'. It can take a person's own photo and blend it with that of a skirt, shirt, or other piece of clothing spotted online, showing how it would look on them. The AI adjusts the clothing size to fit, accounting for how fabrics drape, according to Google's head of advertising and commerce, Vidhya Srinivasan.

Shoppers can then set the price they are willing to pay and leave the AI to tirelessly browse the internet for a deal, alerting the shopper when one is found and asking whether it should proceed with the purchase using Google's payment platform.

'They're taking on Amazon a little bit,' said Techsponential analyst Avi Greengart of Google. The tool is also a way to monetise AI by increasing online traffic and opportunities to display ads, Greengart added. The Silicon Valley tech titan did not respond to a query regarding whether it is sharing revenue from shopping transactions.

Bartering bots?

OpenAI added a shopping feature to ChatGPT earlier this year, enabling the chatbot to respond to requests with product suggestions, consumer reviews, and links to merchant websites. Perplexity AI began allowing subscribers to pay for online purchases without leaving its app late last year. In April, Amazon introduced a 'Buy for Me' mode to its Rufus digital assistant, enabling users to command it to make purchases on retailer websites outside Amazon's own platform.

Walmart's head of technology, Hari Vasudev, recently spoke about adding an AI agent to the retail behemoth's online shopping portal, while also working with partners to ensure their digital agents prioritise Walmart products. Global payment networks Visa and Mastercard both announced in April that their systems had been modernised to enable payment transactions by digital agents.

'As AI agents start to take over the bulk of product discovery and the decision-making process, retailers must consider how to optimise for this new layer of AI shoppers,' said Elise Watson of Clarkston Consulting. Retailers are likely to be left in the dark when it comes to what makes a product attractive to AI agents, according to Watson.

Knowing the customer

Zino does not expect AI shoppers to trigger an upheaval in the e-commerce industry, but he does see the technology benefiting Google and Meta. Not only do the internet rivals possess vast amounts of data about their users, but they are also among the frontrunners in the AI race.

'They probably have more information on the consumer than anyone else out there,' Zino said of Google and Meta.

Technology firms' access to user data touches on the hot-button issue of online privacy and who should control personal information. Google plans to refine consumer profiles based on search activity and promises that shoppers will need to authorise access to additional information, such as emails or app usage. Trusting a chatbot with purchasing decisions may alarm some users, and while the technology may be in place, the legal and ethical framework is not yet fully developed.

'The agent economy is here,' said PSE Consulting Managing Director Chris Jones. 'The next phase of e-commerce will depend on whether we can trust machines to buy on our behalf.' - AFP


The Sun
5 hours ago
Hey chatbot, is this true? AI 'factchecks' sow misinformation
WASHINGTON: As misinformation exploded during India's four-day conflict with Pakistan, social media users turned to an AI chatbot for verification -- only to encounter more falsehoods, underscoring its unreliability as a fact-checking tool.

With tech platforms reducing human fact-checkers, users are increasingly relying on AI-powered chatbots -- including xAI's Grok, OpenAI's ChatGPT, and Google's Gemini -- in search of reliable information.

'Hey @Grok, is this true?' has become a common query on Elon Musk's platform X, where the AI assistant is built in, reflecting the growing trend of seeking instant debunks on social media. But the responses are often themselves riddled with misinformation.

Grok -- now under renewed scrutiny for inserting 'white genocide,' a far-right conspiracy theory, into unrelated queries -- wrongly identified old video footage from Sudan's Khartoum airport as a missile strike on Pakistan's Nur Khan airbase during the country's recent conflict with India. Unrelated footage of a building on fire in Nepal was misidentified as 'likely' showing Pakistan's military response to Indian strikes.

'The growing reliance on Grok as a fact-checker comes as X and other major tech companies have scaled back investments in human fact-checkers,' McKenzie Sadeghi, a researcher with the disinformation watchdog NewsGuard, told AFP. 'Our research has repeatedly found that AI chatbots are not reliable sources for news and information, particularly when it comes to breaking news,' she warned.

- 'Fabricated' -

NewsGuard's research found that 10 leading chatbots were prone to repeating falsehoods, including Russian disinformation narratives and false or misleading claims related to the recent Australian election.

In a recent study of eight AI search tools, the Tow Center for Digital Journalism at Columbia University found that chatbots were 'generally bad at declining to answer questions they couldn't answer accurately, offering incorrect or speculative answers instead.'

When AFP fact-checkers in Uruguay asked Gemini about an AI-generated image of a woman, it not only confirmed its authenticity but fabricated details about her identity and where the image was likely taken. Grok recently labeled a purported video of a giant anaconda swimming in the Amazon River as 'genuine,' even citing credible-sounding scientific expeditions to support its false claim. In reality, the video was AI-generated, AFP fact-checkers in Latin America reported, noting that many users cited Grok's assessment as evidence the clip was real.

Such findings have raised concerns as surveys show that online users are increasingly shifting from traditional search engines to AI chatbots for information gathering and verification. The shift also comes as Meta announced earlier this year it was ending its third-party fact-checking program in the United States, turning over the task of debunking falsehoods to ordinary users under a model known as 'Community Notes,' popularized by X. Researchers have repeatedly questioned the effectiveness of 'Community Notes' in combating falsehoods.

- 'Biased answers' -

Human fact-checking has long been a flashpoint in a hyperpolarized political climate, particularly in the United States, where conservative advocates maintain it suppresses free speech and censors right-wing content -- something professional fact-checkers vehemently reject.

AFP currently works in 26 languages with Facebook's fact-checking program, including in Asia, Latin America, and the European Union. The quality and accuracy of AI chatbots can vary, depending on how they are trained and programmed, prompting concerns that their output may be subject to political influence or control.

Musk's xAI recently blamed an 'unauthorized modification' for causing Grok to generate unsolicited posts referencing 'white genocide' in South Africa. When AI expert David Caswell asked Grok who might have modified its system prompt, the chatbot named Musk as the 'most likely' culprit. Musk, the South African-born billionaire backer of President Donald Trump, has previously peddled the unfounded claim that South Africa's leaders were 'openly pushing for genocide' of white people.

'We have seen the way AI assistants can either fabricate results or give biased answers after human coders specifically change their instructions,' Angie Holan, director of the International Fact-Checking Network, told AFP. 'I am especially concerned about the way Grok has mishandled requests concerning very sensitive matters after receiving instructions to provide pre-authorized answers.'


The Sun
5 hours ago
- The Sun
Hey AI, is this true? AI ‘factchecks' sow misinformation
WASHINGTON: As misinformation exploded during India's four-day conflict with Pakistan, social media users turned to an AI chatbot for verification -- only to encounter more falsehoods, underscoring its unreliability as a fact-checking tool. With tech platforms reducing human fact-checkers, users are increasingly relying on AI-powered chatbots -- including xAI's Grok, OpenAI's ChatGPT, and Google's Gemini -- in search of reliable information. 'Hey @Grok, is this true?' has become a common query on Elon Musk's platform X, where the AI assistant is built in, reflecting the growing trend of seeking instant debunks on social media. But the responses are often themselves riddled with misinformation. Grok -- now under renewed scrutiny for inserting 'white genocide,' a far-right conspiracy theory, into unrelated queries -- wrongly identified old video footage from Sudan's Khartoum airport as a missile strike on Pakistan's Nur Khan airbase during the country's recent conflict with India. Unrelated footage of a building on fire in Nepal was misidentified as 'likely' showing Pakistan's military response to Indian strikes. 'The growing reliance on Grok as a fact-checker comes as X and other major tech companies have scaled back investments in human fact-checkers,' McKenzie Sadeghi, a researcher with the disinformation watchdog NewsGuard, told AFP. 'Our research has repeatedly found that AI chatbots are not reliable sources for news and information, particularly when it comes to breaking news,' she warned. - 'Fabricated' - NewsGuard's research found that 10 leading chatbots were prone to repeating falsehoods, including Russian disinformation narratives and false or misleading claims related to the recent Australian election. 
In a recent study of eight AI search tools, the Tow Center for Digital Journalism at Columbia University found that chatbots were 'generally bad at declining to answer questions they couldn't answer accurately, offering incorrect or speculative answers instead.' When AFP fact-checkers in Uruguay asked Gemini about an AI-generated image of a woman, it not only confirmed its authenticity but fabricated details about her identity and where the image was likely taken. Grok recently labeled a purported video of a giant anaconda swimming in the Amazon River as 'genuine,' even citing credible-sounding scientific expeditions to support its false claim. In reality, the video was AI-generated, AFP fact-checkers in Latin America reported, noting that many users cited Grok's assessment as evidence the clip was real. Such findings have raised concerns as surveys show that online users are increasingly shifting from traditional search engines to AI chatbots for information gathering and verification. The shift also comes as Meta announced earlier this year it was ending its third-party fact-checking program in the United States, turning over the task of debunking falsehoods to ordinary users under a model known as 'Community Notes,' popularized by X. Researchers have repeatedly questioned the effectiveness of 'Community Notes' in combating falsehoods. - 'Biased answers' - Human fact-checking has long been a flashpoint in a hyperpolarized political climate, particularly in the United States, where conservative advocates maintain it suppresses free speech and censors right-wing content -- something professional fact-checkers vehemently reject. AFP currently works in 26 languages with Facebook's fact-checking program, including in Asia, Latin America, and the European Union. The quality and accuracy of AI chatbots can vary, depending on how they are trained and programmed, prompting concerns that their output may be subject to political influence or control. 
Musk's xAI recently blamed an 'unauthorized modification' for causing Grok to generate unsolicited posts referencing 'white genocide' in South Africa. When AI expert David Caswell asked Grok who might have modified its system prompt, the chatbot named Musk as the 'most likely' culprit. Musk, the South African-born billionaire backer of President Donald Trump, has previously peddled the unfounded claim that South Africa's leaders were 'openly pushing for genocide' of white people. 'We have seen the way AI assistants can either fabricate results or give biased answers after human coders specifically change their instructions,' Angie Holan, director of the International Fact-Checking Network, told AFP. 'I am especially concerned about the way Grok has mishandled requests concerning very sensitive matters after receiving instructions to provide pre-authorized answers.'