
OpenAI Adds Shopping to ChatGPT in a Challenge to Google
OpenAI is launching a shopping experience inside ChatGPT, complete with product picks and buy buttons. WIRED spoke with Adam Fry, the company's search product lead, to ask how it all works.
OpenAI announced today that users will soon be able to buy products through ChatGPT. The rollout of shopping buttons for AI-powered search queries will come to everyone, whether or not they are signed in. Shoppers will not be able to check out inside ChatGPT; instead, they will be redirected to the merchant's website to finish the transaction.
In a prelaunch demo for WIRED, Adam Fry, the ChatGPT search product lead at OpenAI, demonstrated how the updated user experience could be used to help people using the tool for product research decide which espresso machine or office chair to buy. The product recommendations shown to prospective shoppers are based on what ChatGPT remembers about a user's preferences as well as product reviews pulled from across the web.
Fry says ChatGPT users are already running over a billion web searches per week, and that people are using the tool to research a wide breadth of shopping categories, like beauty, home goods, and electronics. The product results in ChatGPT for best office chairs, the subject of one of WIRED's rigorously tested and widely read buying guides, included a link to our reporting in the sources tab. (Although the business side of Condé Nast, WIRED's parent company, signed a licensing deal last year with OpenAI so the company can surface our content, the editorial team retains independence in how we cover the startup.)
Searching for espresso machines inside ChatGPT. Image courtesy of OpenAI
The new experience of buying stuff inside ChatGPT shares many similarities with Google Shopping. In both interfaces, when you click on the image of a budget office chair that tickles your fancy, multiple retailers, like Amazon and Walmart, are listed on the right side of the screen, with buttons for completing the purchase. For now, there is one major difference between shopping through ChatGPT and through Google: the results you see in OpenAI searches are organic, not paid placements. 'They are not ads,' says Fry. 'They are not sponsored.'
While some product recommendations inside Google Shopping appear because retailers paid for placement, that's just one mechanism Google uses to decide which products to list in Shopping searches. Websites that publish product reviews are constantly tweaking the content of their buying recommendations in an effort to convince Google's opaque algorithm that a site offers high-quality reviews of products that have been thoroughly tested by real humans. Google favors those more considered reviews and will rank them highly when a user is researching a product. Landing one of the top spots in a Google search can lead more of those users to buy the product through the website, potentially earning the publisher millions of dollars in affiliate revenue.
So, how does ChatGPT choose which products to recommend? Why were those specific espresso machines and office chairs listed first when the user typed the prompt?
'It's not looking for specific signals that are in some algorithm,' says Fry. According to him, this will be a shopping experience that's more personalized and conversational rather than keyword-focused. 'It's trying to understand how people are reviewing this, how people are talking about this, what the pros and cons are,' says Fry. If you say you prefer buying only black clothes from a specific retailer, ChatGPT will supposedly store that information in its memory, so the next time you ask for advice about what shirt to buy, its recommendations will align with your tastes.
The reviews ChatGPT features for products will be pulled from a blend of online sources, including editorial publishers like WIRED as well as user-generated forums like Reddit. Fry says users can tell ChatGPT which types of reviews to prioritize when it curates a list of recommended products.
One of the most pressing questions for online publishers is how affiliate revenue will work in this new experience. Currently, if you read WIRED's review of the best office chairs and decide to purchase one through our link, we get a cut of the revenue, which supports our journalism. How will affiliate revenue work inside ChatGPT shopping when the tool recommends an office chair that OpenAI knows is a good pick because WIRED, among others, gave it a good review?
'We are going to be experimenting with a whole bunch of different ways that this can work,' says Fry. He didn't share specific plans, saying that providing high-quality recommendations is OpenAI's first priority right now and that the company might try different affiliate revenue models in the future.
When asked whether he sees shopping as a potentially meaningful revenue driver in the long term, Fry similarly says that OpenAI is focused on the user experience first and will iterate on ChatGPT shopping as the startup learns more post-release. OpenAI has big revenue goals; according to reporting from The Information, the company expects to bring in $125 billion in revenue by 2029. Last year, OpenAI had just under $4 billion in revenue. It's unclear how big a part the company expects affiliate revenue to play in reaching that goal. CEO Sam Altman floated the idea of affiliate fees adding to the company's revenue in a recent interview with Stratechery newsletter writer Ben Thompson.
This is not the first shopping-adjacent release from OpenAI in 2025. Its AI agent, called Operator, can take control of web browsers and click around, potentially helping users buy groceries or book vacations, though my initial impressions found the feature fairly clunky at release. Perplexity, one of OpenAI's competitors in AI-powered search, launched 'Buy with Pro' late last year, letting users shop directly inside its app. Additionally, the Google Shopping tab currently includes a 'Researched with AI' section for some queries, with summaries of online reviews as well as recommended picks.

Related Articles
Yahoo · 11 minutes ago
Amazon wants to become a global marketplace for AI
Amazon Web Services isn't betting on one large language model (LLM) winning the artificial intelligence race. Instead, it's offering customers a buffet of models to choose from. AWS, the cloud computing arm of Amazon (AMZN), aims to become the go-to infrastructure layer for the AI economy, regardless of which model wins out. By making customer choice a defining principle, AWS hopes to win out against rivals that have aligned closely with specific LLM providers, notably Microsoft (MSFT), which partnered with ChatGPT creator OpenAI.
'We don't think that there's going to be one model to rule them all,' Dave Brown, vice president of compute and networking at AWS, told Yahoo Finance.
The model-neutral approach is embedded into Amazon Bedrock, a service that allows AWS customers to build their own applications using a wide range of models, with more than 100 to choose from. Brown added that after Chinese startup DeepSeek surprised the world, AWS had a fully managed version of the disruptive model available on Bedrock within a week.
Two years after its launch, Bedrock is now the fastest-growing service offered by AWS, which accounted for over 18% of Amazon's total revenue in the first quarter. It's why Amazon CEO Andy Jassy sees Bedrock as a core part of the company's AI growth strategy. But to understand the competitive advantage AWS hopes to offer with Bedrock, you have to go back to its origin story.
Bedrock dates back to a six-page internal memo that Atul Deo, AWS's director of product management, wrote in 2020. Before OpenAI's ChatGPT launched in 2022 and made 'generative AI' a household term, Deo pitched a service that could generate code from plain English prompts using large language models. But Jassy, the head of AWS at the time, didn't buy it. 'His initial reaction was, "This seems almost like a pipe dream,"' Deo said. He added that while a tool that makes coding easy sounds obvious now, the technology was 'still not quite there.'
When that project, initially known as CodeWhisperer, launched in 2023, the team realized it could offer the service for a broader set of use cases, giving customers a choice of different models with 'generic capabilities' that 'could be used as a foundation to build a lot of interesting applications,' according to Deo.
Deo noted that the team steered away from doubling down on its own model after it recognized a pattern of customers wanting choice in other AWS services. This led to AWS becoming the first provider to offer a range of different models to customers. With this foundational approach in mind, Amazon renamed the project Bedrock.
To be sure, the model-agnostic approach has risks, and many analysts don't consider Amazon to be leading the AI race, even though it has ramped up its AI spending. If there is ultimately one model to rule them all, similar to how Google came to dominate search, Amazon could risk falling further behind. At the beginning of the year, Amazon and its peers Meta (META), Microsoft, and Google parent Alphabet (GOOG) expected to spend $325 billion combined, mostly on AI infrastructure.
To keep pace, Amazon has hedged its bets with its own technology and one LLM provider in particular: Anthropic. In November 2024, AWS doubled its investment in Anthropic to $8 billion in a deal that requires Anthropic to train its large language model, Claude, using only AWS's chips. (For comparison, Microsoft has invested over $13 billion in OpenAI.) The $8 billion deal allows Amazon to prove out its AI training infrastructure and deepen ties with one LLM provider while continuing to offer customers a wide selection of models on Bedrock.
'I mean, this is cloud selling 101, right?' said Dan Rosenthal, head of go-to-market partnerships at Anthropic. 'There are some cases where it's been very clear that a customer wants to use a different model on Bedrock for something that we just frankly don't focus on, and that's great. We want to win where we have a right to win.'
Amazon also launched its own family of foundational models, called Nova, at the end of 2024, two years after the launch of ChatGPT. But competition and expectations remain high: revenue at AWS increased 16.9% to $29.27 billion in Q1, marking the third quarter in a row it missed analyst estimates despite double-digit growth.
The Anthropic partnership also underscores a bigger competition AWS may be fighting with chipmakers, including Nvidia (NVDA), which recently staged a $1 trillion rally in just two months after an earnings print that eased investor concerns about chip export controls. While Amazon is an Nvidia customer, it also produces AI chips that it bills as highly effective and more affordable relative to the power they consume (known as 'price performance'). On Bedrock, AWS lets clients choose whether to use its own CPUs and GPUs or chips from competitors like Intel (INTC), AMD (AMD), and Nvidia.
'We're able to work with the model providers to really optimize the model for the hardware that it runs,' Brown said. 'There's no change the customer has to make.' Customers not only have a choice of model but also a choice of which infrastructure the model should run and train on. This helps AWS compete on price, a key battleground with Nvidia, which offers the most expensive chips on the market. This 'coopetition' dynamic could position Amazon to take market share from Nvidia if it can prove its own chips can do the job for a lower sticker price.
It's a bet Amazon is willing to spend on, with capital expenditures expected to hit $100 billion in 2025, up from $83 billion last year. While AWS doesn't break out its costs for AI, Jassy said on an earnings call in February that the 'vast majority of that capex spend is on AI for AWS.' In an April letter to shareholders, Jassy noted that 'AI revenue is growing at triple-digit YoY percentages and represents a multibillion-dollar annual revenue run rate.'

Yahoo · 16 minutes ago
OpenAI claims to have hit $10B in annual revenue
OpenAI says it recently hit $10 billion in annual recurring revenue, up from around $5.5 billion last year. That figure includes revenue from the company's consumer products, ChatGPT business products, and its API, an OpenAI spokesperson told CNBC. OpenAI currently serves more than 500 million weekly active users and 3 million paying business customers.
The revenue milestone comes roughly two and a half years after OpenAI launched its popular chatbot platform, ChatGPT. The company is targeting $125 billion in revenue by 2029.
OpenAI is under some pressure to increase revenue quickly. The company burns billions of dollars each year hiring and recruiting talent to work on its AI products and securing the infrastructure needed to train and run AI systems. OpenAI has not disclosed its operating expenses or whether it is close to profitability.
This article originally appeared on TechCrunch.
Yahoo · 17 minutes ago
Arkansas Supreme Court releases proposed rule for artificial intelligence
The Arkansas Supreme Court building in Little Rock. (John Sykes/Arkansas Advocate)
The use of artificial intelligence in legal documents could violate Arkansas law or court rules, according to a proposed administrative order issued by the state Supreme Court last week. Specifically, the proposed order addresses the use of confidential court data with generative artificial intelligence. AI models retain data entered by users of AI products such as ChatGPT in order to continue training the large language models that exploded into public use only a few years ago, the order notes.
'Anyone who either intentionally or inadvertantly [sic] discloses confidential or sealed information related to a client or case [to a generative AI model] may be violating established rules,' the proposed order reads, specifically citing Arkansas Supreme Court Administrative Order Number 19, the Arkansas Rules of Professional Conduct, and the Arkansas Code of Judicial Conduct.
Additionally, the proposed order prohibits anyone with internal access to the state's court system, CourtConnect, from 'intentionally exposing our state courts' internal data to a GAI.' The order provides an exemption if the Supreme Court's Automation Committee grants approval to engage in 'a research and analysis project related to the use of generative AI tools and general AI for the benefit of our courts.'
The proposed order does not appear to address broader use of AI by attorneys within the state court system. Judges in courtrooms across the country have in recent months expressed frustration with attorneys who have filed briefs and other documents bearing citations to nonexistent or irrelevant cases as a result of so-called 'AI hallucinations,' leading to sanctions in some cases.
As reported by the Alabama Reflector, for example, lawyers being paid millions by the Alabama Department of Corrections to defend it against lawsuits filed by prisoners in the state system were called out by an inmate's attorneys for making up legal citations 'out of whole cloth' in a lawsuit in which their client alleged being stabbed repeatedly while in restraints. The federal judge presiding over the case said the incident showed that sanctions levied by other courts had proven 'insufficient' to deter lawyers from filing documents with improper or made-up citations created by AI. 'That causes me to consider a fuller range of sanctions,' Judge Anna M. Manasco said.
The Arkansas Supreme Court Committee on Automation created a subcommittee to 'study the use of AI in the courts.' The introduction to the proposed order notes that the committee will make recommendations as its work continues. The comment period for the proposed administrative order ends on Aug. 1.