AI personal shoppers hunt down bargain buys

Sinar Daily, 2 days ago

NEW YORK – Internet giants are diving deeper into e-commerce with digital aides that know shoppers' preferences, let them virtually try on clothes, hunt for deals, and even place orders.
The rise of virtual personal shoppers stems from generative artificial intelligence (AI) being deployed in "agents" that specialise in specific tasks and are granted autonomy to complete them independently.
'This is basically the next evolution of shopping experiences,' said CFRA Research analyst Angelo Zino.
Google last week unveiled shopping features built into a new 'AI Mode'.
It can take a person's own photo and blend it with that of a skirt, shirt, or other piece of clothing spotted online, showing how it would look on them.
The AI adjusts the clothing size to fit, accounting for how fabrics drape, according to Google's Head of Advertising and Commerce, Vidhya Srinivasan.
Shoppers can then set the price they are willing to pay and leave the AI to tirelessly browse the internet for a deal — alerting the shopper when one is found, and asking whether it should proceed with the purchase using Google's payment platform.
'They're taking on Amazon a little bit,' said Techsponential analyst Avi Greengart of Google.
The tool is also a way to monetise AI by increasing online traffic and opportunities to display ads, Greengart added.
The Silicon Valley tech titan did not respond to a query regarding whether it is sharing revenue from shopping transactions.
Bartering bots?

OpenAI added a shopping feature to ChatGPT earlier this year, enabling the chatbot to respond to requests with product suggestions, consumer reviews, and links to merchant websites.
Perplexity AI began allowing subscribers to pay for online purchases without leaving its app late last year.
In April, Amazon introduced a 'Buy for Me' mode to its Rufus digital assistant, enabling users to command it to make purchases on retailer websites outside Amazon's own platform.
Walmart's Head of Technology, Hari Vasudev, recently spoke about adding an AI agent to the retail behemoth's online shopping portal, while also working with partners to ensure their digital agents prioritise Walmart products.
Global payment networks Visa and Mastercard both announced in April that their systems had been modernised to enable payment transactions by digital agents.
'As AI agents start to take over the bulk of product discovery and the decision-making process, retailers must consider how to optimise for this new layer of AI shoppers,' said Elise Watson of Clarkston Consulting.
Retailers are likely to be left in the dark when it comes to what makes a product attractive to AI agents, according to Watson.
Knowing the customer

Zino does not expect AI shoppers to trigger an upheaval in the e-commerce industry, but he does see the technology benefiting Google and Meta.
Not only do the internet rivals possess vast amounts of data about their users, but they are also among the frontrunners in the AI race.
'They probably have more information on the consumer than anyone else out there,' Zino said of Google and Meta.
Technology firms' access to user data touches on the hot-button issue of online privacy and who should control personal information.
Google plans to refine consumer profiles based on search activity and promises that shoppers will need to authorise access to additional information, such as emails or app usage.
Trusting a chatbot with purchasing decisions may alarm some users, and while the technology may be in place, the legal and ethical framework is not yet fully developed.
'The agent economy is here,' said PSE Consulting Managing Director Chris Jones.
'The next phase of e-commerce will depend on whether we can trust machines to buy on our behalf.' - AFP


Related Articles

WorkJam taps Google Cloud to boost frontline AI capabilities

The Sun, 10 hours ago

KUALA LUMPUR: WorkJam, the world's leading digital frontline workplace, has expanded its strategic collaboration with Google Cloud to deliver enterprise-ready artificial intelligence (AI) solutions built specifically for frontline teams.

WorkJam already offers an AI Agent, powered by Google's Gemini models and cloud infrastructure, that is revolutionising how frontline organisations operate by bringing intelligent automation, productivity, and engagement to a new level. The feature is the first step on an AI-driven product roadmap.

'By integrating the powerful reasoning capabilities of Google's Gemini models into the WorkJam platform, we are redefining labour utilisation for frontline teams across the APAC region.

'With Gemini models, WorkJam ensures the right people are in the right place at the right time, improving operational efficiency while creating a more engaging and productive work environment,' said WorkJam Chief Executive Officer Steven Kramer in a statement.

The collaboration enables WorkJam to co-innovate alongside Google Cloud, gaining early access to AI advancements and accelerating the delivery of cutting-edge capabilities to global customers.

Leveraging Google's Gemini models, WorkJam delivers intelligent, conversational AI that understands text, voice, and visual inputs, creating more natural, intuitive user experiences for frontline employees. From task automation and knowledge retrieval to in-the-moment support, WorkJam's AI Agent supercharges productivity and reduces friction in everyday operations.

WorkJam's vision is to continue transforming the frontline experience by integrating Google's Gemini models to deliver real-time intelligence, automation, and intuitive support directly into daily workflows. By combining voice, text, and visual understanding, WorkJam gives frontline employees instant access to knowledge, adaptive training, and seamless task execution, reducing friction, streamlining operations, and freeing managers to focus on high-value work.

AI 'vibe coding' startups burst onto scene with sky-high valuations

The Star, 11 hours ago

NEW YORK, NY (Reuters) - Two years after the launch of ChatGPT, return on investment in generative AI has been elusive, but one area stands out: software development.

So-called code generation or 'code-gen' startups are commanding sky-high valuations as corporate boardrooms look to use AI to aid, and sometimes to replace, expensive human software engineers.

Cursor, a code generation startup based in San Francisco whose tool can suggest and complete lines of code and write whole sections of code autonomously, raised $900 million at a $10 billion valuation in May from a who's who list of tech investors, including Thrive Capital, Andreessen Horowitz and Accel.

Windsurf, a Mountain View-based startup behind the popular AI coding tool Codeium, attracted the attention of ChatGPT maker OpenAI, which is now in talks to acquire the company for $3 billion, sources familiar with the matter told Reuters. Its tool is known for translating plain English commands into code, sometimes called 'vibe coding', which allows people with no knowledge of computer languages to write software. OpenAI and Windsurf declined to comment on the acquisition.

'AI has automated all the repetitive, tedious work,' said Scott Wu, CEO of code-gen startup Cognition. 'The software engineer's role has already changed dramatically. It's not about memorizing esoteric syntax anymore.'

Founders of code-gen startups and their investors believe they are in a land-grab situation, with a shrinking window to gain a critical mass of users and establish their AI coding tool as the industry standard. But because most are built on AI foundation models developed elsewhere, such as those from OpenAI, Anthropic or DeepSeek, their costs per query are also growing, and none are yet profitable.

They are also at risk of being disrupted by Google, Microsoft and OpenAI, which all announced new code-gen products in May, and Anthropic is working on one as well, two sources familiar with the matter told Reuters.

The rapid growth of these startups is coming despite competing on big tech's home turf. Microsoft's GitHub Copilot, launched in 2021 and considered code-gen's dominant player, grew to over $500 million in revenue last year, according to a source familiar with the matter. Microsoft declined to comment on GitHub Copilot's revenue. On its earnings call in April, the company said the product has over 15 million users.

LEARN TO CODE?

As AI revolutionizes the industry, many jobs - particularly entry-level coding positions that are more basic and repetitive - may be eliminated. SignalFire, a VC firm that tracks tech hiring, found that new hires with less than a year of experience fell 24% in 2024, a drop it attributes to tasks once assigned to entry-level software engineers now being handled in part by AI.

Google's CEO said in April that 'well over 30%' of Google's code is now AI-generated, and Amazon CEO Andy Jassy said last year the company had saved 'the equivalent of 4,500 developer-years' by using AI. Google and Amazon declined to comment.

In May, Microsoft CEO Satya Nadella said at a conference that approximately 20 to 30% of the company's code is now AI-generated. The same month, the company announced layoffs of 6,000 workers globally, over 40% of them software developers in Microsoft's home state of Washington.

'We're focused on creating AI that empowers developers to be more productive, creative, and save time,' a Microsoft spokesperson said. 'This means some roles will change with the revolution of AI, but human intelligence remains at the center of the software development life cycle.'

MOUNTING LOSSES

Some 'vibe-coding' platforms already boast substantial annualized revenues. Cursor, with just 60 employees, went from zero to $100 million in recurring revenue by January 2025, less than two years after its launch. Windsurf, founded in 2021, launched its code generation product in November 2024 and is already bringing in $50 million in annualized revenue, according to a source familiar with the company.

But both startups operate with negative gross margins, meaning they spend more than they make, according to four investor sources familiar with their operations.

'The prices people are paying for coding assistants are going to get more expensive,' Quinn Slack, CEO of coding startup Sourcegraph, told Reuters. To make the higher cost an easier pill for customers to swallow, Sourcegraph now offers a drop-down menu that lets users choose which models they want to work with, from open-source models such as DeepSeek to the most advanced reasoning models from Anthropic and OpenAI, so they can opt for cheaper models for basic questions.

Both Cursor and Windsurf are led by recent MIT graduates in their twenties and exemplify the gold-rush era of the AI startup scene. 'I haven't seen people working this hard since the first Internet boom,' said Martin Casado, a general partner at Andreessen Horowitz, an investor in Anysphere, the company behind Cursor.

What's less clear is whether the dozen or so code-gen companies will be able to hang on to their customers as big tech moves in. 'In many cases, it's less about who's got the best technology - it's about who is going to make the best use of that technology, and who's going to be able to sell their products better than others,' said Scott Raney, managing director at Redpoint Ventures, whose firm invested in Sourcegraph and Poolside, a software development startup that's building its own AI foundation model.

CUSTOM AI MODELS

Most of the AI coding startups currently rely on the Claude AI model from Anthropic, which crossed $3 billion in annualized revenue in May, in part due to fees paid by code-gen companies. But some startups are attempting to build their own models. In May, Windsurf announced its first in-house AI models, optimized for software engineering, in a bid to control the user experience. Cursor has also hired a team of researchers to pre-train its own large frontier-level models, which could reduce how much it pays foundation model companies, according to two sources familiar with the matter.

Startups looking to train their own AI coding models face an uphill battle, as it can easily cost millions to buy or rent the computing capacity needed to train a large language model. Replit earlier dropped plans to train its own model. Poolside, which has raised more than $600 million to build a coding-specific model, has announced a partnership with Amazon Web Services and is testing with customers, but hasn't made any product generally available yet. Another code-gen startup, Magic Dev, which has raised nearly $500 million since 2023, told investors a frontier-level coding model was coming in summer 2024 but has yet to launch a product. Poolside declined to comment. Magic Dev did not respond to a request for comment.

(Reporting by Anna Tong and Krystal Hu in New York. Editing by Kenneth Li and Michael Learmonth)

Contradictheory: AI and the next generation

The Star, 11 hours ago

Here's a conversation I don't think we'd have heard five years ago: 'You know what they do? They send in their part of the work, and it's so obviously ChatGPT. I had to rewrite the whole thing!'

This wasn't a chat I had with the COO of some major company but with a 12-year-old child. She was talking about a piece of group work they had to do for class. And this Boy, as she called him (you could hear the capitalised italics in her voice), had waited until the last minute to submit his part.

To be honest, I shouldn't be surprised. These days, lots of people use AI in their work. It's normal. According to the 2024 Work Trend Index released by Microsoft and LinkedIn, 75% of employees then used artificial intelligence (AI) to save time and focus on their most important tasks.

But it's not without its problems. An adult using AI to help draft an email is one thing. A student handing in their weekly assignment is another. The adult uses AI to communicate more clearly; the student is taking a shortcut. So, in an effort to deliver better work, the child might actually be learning less.

And it's not going away. A 2024 study by Impact Research for the Walton Family Foundation found that 48% of students use ChatGPT at least weekly, a jump of 27 percentage points over 2023. And more students use AI chatbots to write essays and assignments (56%) than to study for tests and quizzes (52%).

So what about the other students who don't use AI, like the girl I quoted above? I find they often take a rather antagonistic view. Some kids I talk to (usually the ones already doing well in class) seem to look down on classmates who use AI and, in the process, look down on AI itself. And I think that's wrong.

As soon as I learned about ChatGPT, I felt that the key to using AI tools well is obvious. It lies in the name: tools. Like a ruler for drawing straight lines, or a dictionary for looking up words, AI chatbots are tools, only far more versatile ones.

One of the biggest problems, of course, is that AI chatbots don't always get their facts right (in AI parlance, they 'hallucinate'). So if you ask one for an essay on the 'fastest marine mammal', there's a chance it will include references to the 'sailfish' and the 'peregrine falcon'. In one test of AI chatbots, hallucination rates for newer AI systems were as high as 79%. Even OpenAI, the company behind ChatGPT, isn't immune. Its o3 release hallucinated 33% of the time on the PersonQA benchmark, which measures how well a model answers questions about public figures. The newer o4-mini performed even worse, hallucinating 48% of the time.

There are ways to work around this, but I think most people don't know them. For example, many chatbots now have a 'Deep Research' mode that actively searches the internet and presents answers along with sources. The point is that you, the reasonable, competent, and capable human being, can check the original source to see if it's something you trust. Instead of the machine telling you what it 'knows', it tells you what it found, and it's up to you to verify it.

Another method is to feed the chatbot the materials you want it to use, like a PDF of your textbook or a research paper. Google's NotebookLM is designed for this. It works only with the data you supply, drastically reducing hallucinations, so you can be more confident in the information it produces.

In one stroke, you've turned the chatbot into a hyper-intelligent search engine that not only finds what you're looking for but also understands context, identifies patterns, and helps organise the information.

That's just a small part of what AI can do. But even just helping students find and organise information better is a huge win. And ideally, teachers should lead the charge in classrooms, guiding students on how to work with AI responsibly and effectively. Instead, many feel compelled to ban it or to try to 'AI-proof' assignments, for example by demanding handwritten submissions or choosing topics that chatbots are more likely to hallucinate on.

But we can do better. We should allow AI in and teach students how to use it in a way that makes them better. For example, teachers could say that the 'slop' AI generates is the bare minimum: hand it in as-is, and you'll scrape a C or D. But if you use it to refine your thoughts, to polish your voice, to spark better ideas, then that's where the value lies. And students can use it to help them revise, by getting it to generate quizzes to test themselves with (they, of course, have to verify that the answers the AI gives are correct).

Nevertheless, what I've written about so far is about using AI as a tool. The future is about using it as a collaborator. According to the 2025 Microsoft Work Trend Index, 50% of Malaysian workers see AI as a command-based tool, while 48% treat it as a thought partner. The former issue basic instructions; the latter hold conversations, and that is where human-machine collaboration begins. The report goes on to say explicitly that this kind of partnership is what all employees should strive for when working with AI. That means knowing how to iterate on the output, when to delegate, when to refine the results, and when to push back. In short: the same skills we want kids to learn anyway when working with classmates and teachers.

And the truth is that while I've used AI to find data, summarise reports, and – yes – to proofread this article, I haven't yet actively collaborated with AI. The future, however, seems to be heading in that direction. Just a few weeks ago, I wrote about mathematician Terence Tao, who predicts that it won't be long before computer proof assistants powered by AI are cited as co-authors on mathematics papers.

Clearly, I still have a lot to learn about using AI day-to-day. And it's hard. It involves trial and error and wasted effort while battling looming deadlines. I may deliver inferior work in the meantime that collaborators have to rewrite. But I remain, as ever, optimistic. Because technology – whether as a tool or a slightly eccentric collaborator – ultimately has the potential to make us and our work better.

Logic is the antithesis of emotion but mathematician-turned-scriptwriter Dzof Azmi's theory is that people need both to make sense of life's vagaries and contradictions. Write to Dzof at lifestyle@ The views expressed here are entirely the writer's own.
