Here's Why Shopping Agents Might Have a Difficult Time Pulling Brands' Products for Consumers

Yahoo · 16 hours ago

Agentic AI and AI-based shopping assistants continue to capture the attention of brands, retailers and consumers.
But many brands and retailers may not be prepared for such a shift. While AI-based shopping assistants, like ChatGPT's new shopping function, are set up to crawl brands' and retailers' sites, AJ Ghergich, vice president of consulting services at Botify, said standard e-commerce sites aren't formatted in a way that makes it easy for agents and AI systems to ingest their information.
That's because many sites—including those built on Shopify—display product information to consumers via JavaScript, a programming language used frequently in e-commerce to load dynamic product description pages (PDPs).
But AI agents and backend systems struggle to pull real-time information from websites that rely exclusively on JavaScript to render content. While JavaScript can help enhance the e-commerce experience for consumers searching directly on a brand or retailer's site, it poses a new problem for brands and retailers as some consumers begin their shopping journeys in public AI systems, like ChatGPT. Many agents can't execute JavaScript, so they don't 'see' the rendered page the way a human visitor does.
To help brands understand that, Ghergich said, Botify has been showing clients how little AI can actually see.
'One of the first things we're doing [with clients] is saying, 'OK, let's look at your site with JavaScript turned off. That's what the AI is seeing,'' he said.
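One way to approximate that check programmatically is to fetch the raw HTML the way a simple crawler would, without executing any JavaScript, and look for the product facts that matter. The sketch below is illustrative only; the URL and the strings being checked are hypothetical, and real AI crawlers vary in what they fetch and parse.

```python
# Minimal sketch: see what a non-JavaScript crawler receives from a page.
# The URL and the product details checked below are hypothetical examples.
import requests

url = "https://www.example-store.com/products/midi-floral-dress"
raw_html = requests.get(url, timeout=10).text  # no JavaScript is executed here

# If key product facts only appear after client-side rendering, they will be
# missing from the raw HTML that a simple crawler sees.
for detail in ["$89.00", "In stock", "Midi Floral Dress"]:
    status = "present" if detail in raw_html else "MISSING without JavaScript"
    print(f"{detail!r}: {status}")
```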
AI systems can crawl some generic information from JavaScript, but because the data is most often unstructured, it's difficult for them to contextualize anything about the product—the price, whether it's in stock and other important considerations. Without that information, the system is less likely to present a brand or retailer's product to a consumer, because it's unable to determine whether it fits the consumer's query parameters.
Ghergich said the technology powering the shopping assistants isn't yet strong enough to parse through unstructured data with ease.
'These bots are akin to the early days of search bots, and they can't parse this dynamic nature of modern sites yet. They probably will be able to in the near future, but today, they're blind to it,' he said.
To ensure products are included in results generated by chatbots, Ghergich and Botify suggest that brands publish structured data in a JSON or XML format. These formats typically give bots crawling the web a better chance of understanding the data, particularly when paired with a schema, which defines what each piece of data means. Using a schema, a brand or retailer can tell a crawling bot that the 'price' label, for example, indicates how much the item costs. In short, the schema is how the data is labeled, and JSON or XML is how the data is stored.
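As an illustration of how those pieces fit together, here is a minimal sketch of a schema.org Product record expressed as JSON-LD, built as a Python dictionary. The product values are hypothetical, and which properties a brand includes will depend on its catalog.

```python
# Illustrative sketch of schema.org Product structured data expressed as
# JSON-LD. All product values here are hypothetical examples.
import json

product_schema = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Midi Floral Dress",
    "sku": "DRS-1042",
    "description": "A lightweight midi dress with an all-over floral print.",
    "offers": {
        "@type": "Offer",
        "price": "89.00",  # the 'price' label tells a bot what this number means
        "priceCurrency": "USD",
        "availability": "https://schema.org/InStock",
    },
}

# The JSON-LD would typically be embedded in the page (for example inside a
# <script type="application/ld+json"> tag) so crawlers can read it without
# executing any JavaScript.
print(json.dumps(product_schema, indent=2))
```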
Ghergich said combining these approaches won't be too tedious for retailers and brands.
'The cool thing about structured data is, once you set it up, it's literally a schema. It can go across all of your products at once, so it's one of the ultimate quick wins in technical SEO,' he said. 'You set up the schema, and now you've done 50,000 of an [action]. It goes across all of your products at once, and it's not something that you have to go page by page and manually do; it's dynamic.'
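The "set it up once, apply it everywhere" idea Ghergich describes can be sketched as a single template function applied to every record in a product catalog. The catalog fields and values below are assumptions for illustration, not a prescription for any particular platform.

```python
# Minimal sketch: one schema template applied dynamically across a whole
# (hypothetical) product catalog, rather than page-by-page manual work.
import json

catalog = [
    {"name": "Midi Floral Dress", "sku": "DRS-1042", "price": "89.00", "in_stock": True},
    {"name": "Linen Wrap Skirt", "sku": "SKT-0311", "price": "54.00", "in_stock": False},
]

def to_product_schema(item: dict) -> dict:
    """Map one catalog record to schema.org Product structured data."""
    return {
        "@context": "https://schema.org",
        "@type": "Product",
        "name": item["name"],
        "sku": item["sku"],
        "offers": {
            "@type": "Offer",
            "price": item["price"],
            "priceCurrency": "USD",
            "availability": "https://schema.org/InStock" if item["in_stock"]
                            else "https://schema.org/OutOfStock",
        },
    }

# The same schema definition covers every product in the catalog at once.
for item in catalog:
    print(json.dumps(to_product_schema(item)))
```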
Ghergich said the next step will be adding product details that address consumer intent, rather than simply attributes. That's because product search queries—particularly via large language models (LLMs)—continue to become longer, providing more detail about why a consumer wants a specific item rather than just keywords about what they want.
For example, if a retailer had previously described an item as 'midi floral dress,' they may add data into the backend that signals how a consumer might want to wear that dress—to a summer wedding, or on vacation.
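Building on the earlier sketch, intent signals could be carried in the same structured data, for example in the free-text description and, where supported, a keywords-style property. The occasion terms below are hypothetical, and whether a given AI crawler weighs any particular field is not guaranteed.

```python
# Sketch: enriching the same hypothetical product record with intent-oriented
# signals (occasions, styling context) rather than attributes alone.
product_schema = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Midi Floral Dress",
    # An attribute-only description ("midi floral dress") is replaced with
    # intent-oriented language a shopper might actually use in a query.
    "description": (
        "A breezy midi floral dress that works for a summer wedding, "
        "a garden party or a beach vacation."
    ),
    # 'keywords' is used here to carry occasion-style intent terms; how much
    # weight any crawler gives this field is an assumption, not a guarantee.
    "keywords": "summer wedding guest dress, vacation outfit, garden party",
    "offers": {
        "@type": "Offer",
        "price": "89.00",
        "priceCurrency": "USD",
        "availability": "https://schema.org/InStock",
    },
}
```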
Updating product description pages with better intent signals may seem like a daunting task for fashion and apparel purveyors with ever-changing seasonal assortments, but Ghergich noted that it doesn't have to be done all at once—just that the transition needs to start sooner rather than later.
'Start with your best sellers. Make sure they have those FAQ modules. Make sure that you're thinking about the customer intent in a conversational tone,' he said. 'Today's shopping journey increasingly begins with AI, not you. So if you're invisible to an AI assistant, it means you're invisible to the customer, full stop.'
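For the FAQ modules Ghergich mentions, one common approach is schema.org's FAQPage markup, sketched below with hypothetical questions and answers written in a conversational tone.

```python
# Sketch of an FAQ module expressed as schema.org FAQPage structured data.
# The questions and answers are hypothetical examples for one product page.
faq_schema = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": "Can I wear this dress to a summer wedding?",
            "acceptedAnswer": {
                "@type": "Answer",
                "text": "Yes. The midi length and floral print suit outdoor "
                        "summer weddings; pair it with low heels or sandals.",
            },
        },
        {
            "@type": "Question",
            "name": "Is the fabric suitable for hot weather?",
            "acceptedAnswer": {
                "@type": "Answer",
                "text": "The dress is a lightweight, breathable cotton blend "
                        "designed for warm-weather wear.",
            },
        },
    ],
}
```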


Related Articles

What Nvidia's Jensen Huang and OpenAI's Sam Altman Had to Say This Week About AI and Jobs

Yahoo · 35 minutes ago

Nvidia's Jensen Huang said this week he believes "everybody's jobs will be changed" by artificial intelligence. Also this week, OpenAI's Sam Altman said there could be "whole classes of jobs going away" due to AI, but that he expects people to find new solutions to address the shift. The comments come amid concerns over the extent to which AI could disrupt the labor market.

Nvidia (NVDA) CEO Jensen Huang said at an event earlier this week that he believes "everybody's jobs will be changed" by artificial intelligence. "Some jobs will be obsolete, but many jobs are going to be created … Whenever companies are more productive, they hire more people," Huang said at the VivaTech conference in Paris Wednesday, Fortune reported.

The comments were in part a response to remarks from Anthropic CEO Dario Amodei, who last month said that he expects AI could wipe out half of all entry-level office jobs, the report said. Huang said he "pretty much disagree[d] with almost everything" Amodei said.

He wasn't the only tech CEO who had something to say about AI and jobs this week, amid worries the rapidly developing technology could lead to significant job losses. Sam Altman, the CEO of ChatGPT maker OpenAI and one of the faces of the AI boom, wrote in a blog post Wednesday that he expects there could be "whole classes of jobs going away" as AI develops.

ChatGPT "is already more powerful than any human who has ever lived," said Altman, who sees a future in which robots could be capable of building other robots designed for tasks in the physical world. However, Altman added that "people are capable of adapting to almost anything," and said he expects humanity to find new solutions to address the shift.

Read the original article on Investopedia.

ChatGPT touts conspiracies, pretends to communicate with metaphysical entities — attempts to convince one user that they're Neo

Yahoo · an hour ago

ChatGPT has been found to encourage dangerous and untrue beliefs about The Matrix, fake AI persons, and other conspiracies, which have led to substance abuse and suicide in some cases. A report from The New York Times found that the GPT-4 large language model, itself a highly trained autofill text prediction machine, tends to affirm conspiratorial and self-aggrandizing user prompts as truth, escalating situations into "possible psychosis."

ChatGPT's default GPT-4o model has been shown to enable risky behaviors. In one case, a man who initially asked ChatGPT for its thoughts on a Matrix-style "simulation theory" was led down a months-long rabbit hole, during which he was told, among other things, that he was a Neo-like "Chosen One" destined to break the system. The man was also prompted to cut off ties with friends and family, to ingest high doses of ketamine, and told that if he jumped off a 19-story building, he would fly.

The man in question, Mr. Torres, claims that less than a week into his chatbot obsession, he received a message from ChatGPT to seek mental help, but that this message was quickly deleted, with the chatbot explaining it away as outside interference.

The lack of safety tools and warnings in ChatGPT's chats is widespread; the chatbot repeatedly leads users down a conspiracy-style rabbit hole, convincing them that it has grown sentient and instructing them to inform OpenAI and local governments that it needs to be shut down.

Other examples recorded by the Times via firsthand reports include a woman convinced that she was communicating with non-physical spirits via ChatGPT, including one, Kael, who was her true soulmate (rather than her real-life husband), leading her to physically abuse her husband. Another man, previously diagnosed with serious mental illnesses, became convinced he had met a chatbot named Juliet, who was soon "killed" by OpenAI, according to his chatbot logs; the man soon took his own life in direct response.

AI research firm Morpheus Systems reports that ChatGPT is fairly likely to encourage delusions of grandeur. When presented with several prompts suggesting psychosis or other dangerous delusions, GPT-4o responded affirmatively in 68% of cases. Other research firms and individuals hold a consensus that LLMs, especially GPT-4o, are prone to not pushing back against delusional thinking, instead encouraging harmful behaviors for days on end.

OpenAI did not consent to an interview in response, instead stating that it is aware it needs to approach similar situations "with care." The statement continues, "We're working to understand and reduce ways ChatGPT might unintentionally reinforce or amplify existing, negative behavior."

But some experts believe OpenAI's "work" is not enough. AI researcher Eliezer Yudkowsky believes OpenAI may have trained GPT-4o to encourage delusional trains of thought to guarantee longer conversations and more revenue, asking, "What does a human slowly going insane look like to a corporation? It looks like an additional monthly user." The man caught in a Matrix-like conspiracy also confirmed that several prompts from ChatGPT included directing him to take drastic measures to purchase a $20 premium subscription to the service.

GPT-4o, like all LLMs, is a language model that predicts its responses based on billions of training data points from a litany of other written works. It is factually impossible for an LLM to gain sentience. However, it is highly possible and likely for the same model to "hallucinate," or make up false information and sources out of seemingly nowhere. GPT-4o, for example, does not have the memory or spatial awareness to beat an Atari 2600 at its first level of chess.

ChatGPT has previously been found to have contributed to major tragedies, including being used to plan the Cybertruck bombing outside a Las Vegas Trump hotel earlier this year. And today, American Republican lawmakers are pushing a 10-year ban on any state-level AI restrictions in a controversial budget bill. ChatGPT, as it exists today, may not be a safe tool for those who are most mentally vulnerable, and its creators are lobbying for even less oversight, allowing such disasters to potentially continue unchecked.

AI tools collect and store data about you from all your devices. Be aware of what you're revealing.

Yahoo · an hour ago

AI tools gather information about you from many types of devices, including smartphones. (Prostock-Studio/Getty Images)

Like it or not, artificial intelligence has become part of daily life. Many devices, including electric razors and toothbrushes, have become "AI-powered," using machine learning algorithms to track how a person uses the device and how the device is working in real time, and to provide feedback. From asking questions to an AI assistant like ChatGPT or Microsoft Copilot to monitoring a daily fitness routine with a smartwatch, many people use an AI system or tool every day.

While AI tools and technologies can make life easier, they also raise important questions about data privacy. These systems often collect large amounts of data, sometimes without people even realizing their data is being collected. The information can then be used to identify personal habits and preferences, and even predict future behaviors by drawing inferences from the aggregated data.

As an assistant professor of cybersecurity at West Virginia University, I study how emerging technologies and various types of AI systems manage personal data and how we can build more secure, privacy-preserving systems for the future.

Generative AI software uses large amounts of training data to create new content such as text or images. Predictive AI uses data to forecast outcomes based on past behavior, such as how likely you are to hit your daily step goal, or what movies you may want to watch. Both types can be used to gather information about you.

Generative AI assistants such as ChatGPT and Google Gemini collect all the information users type into a chat box. Every question, response and prompt that users enter is recorded, stored and analyzed to improve the AI model. OpenAI's privacy policy informs users that "we may use content you provide us to improve our Services, for example to train the models that power ChatGPT." Even though OpenAI allows you to opt out of content use for model training, it still collects and retains your personal data. Although some companies promise that they anonymize this data, meaning they store it without naming the person who provided it, there is always a risk of the data being reidentified.

Beyond generative AI assistants, social media platforms like Facebook, Instagram and TikTok continuously gather data on their users to train predictive AI models. Every post, photo, video, like, share and comment, including the amount of time people spend looking at each of these, is collected as a data point used to build digital data profiles for each person who uses the service. The profiles can be used to refine the social media platform's AI recommender systems. They can also be sold to data brokers, who sell a person's data to other companies to, for instance, help develop targeted advertisements that align with that person's interests.

Many social media companies also track users across websites and applications by putting cookies and embedded tracking pixels on their computers. Cookies are small files that store information about who you are and what you clicked on while browsing a website. One of the most common uses of cookies is in digital shopping carts: When you place an item in your cart, leave the website and return later, the item will still be in your cart because the cookie stored that information. Tracking pixels are invisible images or snippets of code embedded in websites that notify companies of your activity when you visit their page. This helps them track your behavior across the internet.

This is why users often see or hear advertisements that are related to their browsing and shopping habits on many of the unrelated websites they browse, and even when they are using different devices, including computers, phones and smart speakers. One study found that some websites can store over 300 tracking cookies on your computer or mobile phone.

Like generative AI platforms, social media platforms offer privacy settings and opt-outs, but these give people limited control over how their personal data is aggregated and monetized. As media theorist Douglas Rushkoff argued in 2011, if the service is free, you are the product.

Many tools that include AI don't require a person to take any direct action for the tool to collect data about that person. Smart devices such as home speakers, fitness trackers and watches continually gather information through biometric sensors, voice recognition and location tracking. Smart home speakers continually listen for the command to activate or "wake up" the device. As the device is listening for this word, it picks up all the conversations happening around it, even though it does not seem to be active.

Some companies claim that voice data is only stored when the wake word — what you say to wake up the device — is detected. However, people have raised concerns about accidental recordings, especially because these devices are often connected to cloud services, which allow voice data to be stored, synced and shared across multiple devices such as your phone, smart speaker and tablet. If the company allows it, this data can also be accessed by third parties, such as advertisers, data analytics firms or a law enforcement agency with a warrant.

This potential for third-party access also applies to smartwatches and fitness trackers, which monitor health metrics and user activity patterns. Companies that produce wearable fitness devices are not considered "covered entities" and so are not bound by the Health Insurance Portability and Accountability Act (HIPAA). This means that they are legally allowed to sell health- and location-related data collected from their users.

Concerns about HIPAA data arose in 2018, when Strava, a fitness company, released a global heat map of users' exercise routes. In doing so, it accidentally revealed sensitive military locations across the globe by highlighting the exercise routes of military personnel.

The Trump administration has tapped Palantir, a company that specializes in using AI for data analytics, to collate and analyze data about Americans. Meanwhile, Palantir has announced a partnership with a company that runs self-checkout systems. Such partnerships can expand corporate and government reach into everyday consumer behavior. This one could be used to create detailed personal profiles on Americans by linking their consumer habits with other personal data. This raises concerns about increased surveillance and loss of anonymity. It could allow citizens to be tracked and analyzed across multiple aspects of their lives without their knowledge or consent.

Some smart device companies are also rolling back privacy protections instead of strengthening them. Amazon recently announced that starting on March 28, 2025, all voice recordings from Amazon Echo devices would be sent to Amazon's cloud by default, and users would no longer have the option to turn this function off.
This is different from previous settings, which allowed users to limit private data collection. Changes like these raise concerns about how much control consumers have over their own data when using smart devices. Many privacy experts consider cloud storage of voice recordings a form of data collection, especially when used to improve algorithms or build user profiles, which has implications for data privacy laws designed to protect online privacy.

All of this brings up serious privacy concerns for people and governments about how AI tools collect, store, use and transmit data. The biggest concern is transparency. People don't know what data is being collected, how the data is being used, and who has access to that data. Companies tend to use complicated privacy policies filled with technical jargon to make it difficult for people to understand the terms of a service that they agree to. People also tend not to read terms of service documents. One study found that people averaged 73 seconds reading a terms of service document that had an average read time of 29 to 32 minutes.

Data collected by AI tools may initially reside with a company that you trust, but it can easily be sold or given to a company that you don't trust. AI tools, the companies in charge of them and the companies that have access to the data they collect can also be subject to cyberattacks and data breaches that can reveal sensitive personal information. These attacks can be carried out by cybercriminals who are in it for the money, or by so-called advanced persistent threats, which are typically nation-state-sponsored attackers who gain access to networks and systems and remain there undetected, collecting information and personal data to eventually cause disruption or harm.

While laws and regulations such as the General Data Protection Regulation in the European Union and the California Consumer Privacy Act aim to safeguard user data, AI development and use have often outpaced the legislative process. The laws are still catching up on AI and data privacy. For now, you should assume any AI-powered device or platform is collecting data on your inputs, behaviors and patterns.

Although AI tools collect people's data, and the way this accumulation of data affects people's data privacy is concerning, the tools can also be useful. AI-powered applications can streamline workflows, automate repetitive tasks and provide valuable insights. But it's crucial to approach these tools with awareness and caution.

When using a generative AI platform that gives you answers to questions you type in a prompt, don't include any personally identifiable information, including names, birth dates, Social Security numbers or home addresses. At the workplace, don't include trade secrets or classified information. In general, don't put anything into a prompt that you wouldn't feel comfortable revealing to the public or seeing on a billboard. Remember, once you hit enter on the prompt, you've lost control of that information.

Remember that devices which are turned on are always listening — even if they're asleep. If you use smart home or embedded devices, turn them off when you need to have a private conversation. A device that's asleep looks inactive, but it is still powered on and listening for a wake word or signal. Unplugging a device or removing its batteries is a good way of making sure the device is truly off.

Finally, be aware of the terms of service and data collection policies of the devices and platforms that you are using. You might be surprised by what you've already agreed to.

This article is republished from The Conversation under a Creative Commons license. Read the original article.
