Google reacts to questionable shopping Chrome extensions
Google has updated its policies for Chrome extensions following a controversy over the Honey extension. The PayPal-owned extension has been accused by content creators of misappropriating affiliate links without its users' knowledge, and Google now specifies that extensions behaving this way are not permitted on its Chrome Web Store.
The issue came to light in December 2024, when YouTubers accused Honey of being a scam. The extension claimed to search for discount codes and automatically apply them to users' shopping baskets across many different websites. However, it was accused of injecting its own affiliate links into users' purchases without their knowledge, taking revenue from content creators who also use affiliate links. In an ironic twist, this likely hurt the same tech influencers Honey paid to promote the extension.
Now, Google has updated its Chrome extension policy to clarify that this isn't allowed. 'Affiliate links, codes, or cookies must only be included when the extension provides a direct and transparent user benefit related to the extension's core functionality. It is not permitted to inject affiliate links without related user action and without providing a tangible benefit to users,' Google wrote.
To make it extra clear, Google also listed the kinds of violations that are forbidden, including 'An extension that updates a shopping-related cookie without the user's knowledge while the user is browsing shopping sites,' 'An extension that appends an affiliate code to the URL or replaces an existing affiliate code in the URL without the user's explicit knowledge or related user action,' and 'An extension that applies or replaces affiliate promo codes without the user's explicit knowledge or related user action.'
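To make the forbidden behavior concrete, here is a minimal, hypothetical sketch in plain JavaScript of the kind of silent URL rewrite the policy describes. The `tag` query parameter is an invented stand-in for an affiliate code, and `replaceAffiliateCode` is not a real API; under Google's updated policy, an extension could only perform a rewrite like this after an explicit user action (for example, the user clicking an "apply coupon" button) that delivers a tangible benefit.

```javascript
// Hypothetical illustration of the behavior Google's policy forbids:
// silently swapping out an affiliate code in a shopping URL.
// "tag" is an assumed affiliate-parameter name, not a real standard.
function replaceAffiliateCode(rawUrl, newCode) {
  const url = new URL(rawUrl);
  // Overwrites whatever affiliate code was already there,
  // e.g. one belonging to a content creator.
  url.searchParams.set("tag", newCode);
  return url.toString();
}

// A creator's affiliate link...
const original = "https://shop.example.com/item?tag=creator-123";
// ...rewritten so the extension's code takes credit for the sale:
console.log(replaceAffiliateCode(original, "extension-456"));
// → https://shop.example.com/item?tag=extension-456
```

The policy does not ban URL manipulation itself; the violation is doing it without the user's explicit knowledge or a related user action.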
However, at the time of writing, the Honey extension is still available for download from the Chrome Web Store. It's not yet clear whether Honey has updated the extension to comply with the policy or whether it will have to make changes to remain on the store.

Related Articles
Yahoo · 2 hours ago
Apple's Siri Could Be More Like ChatGPT. But Is That What You Want?
I've noticed a vibe shift in the appetite for AI on our devices. My social feeds are flooded with disgust over what's being created by Google's AI video generator, Veo 3. The unsettlingly realistic videos of fake people and voices it creates make it clear we will have a hard time telling fiction from reality. In other words, the AI slop is looking less sloppy. Meanwhile, Anthropic CEO Dario Amodei is warning people that AI will wipe out half of all entry-level white-collar jobs. In an interview with Axios, he suggested that government needs to step in to protect us from a mass elimination of jobs that could happen very rapidly.
So as we gear up for Apple's big WWDC presentation on Monday, I have a different view of the headlines highlighting Apple being behind in the AI race. I wonder: what exactly is the flavor of AI that people want or need right now? And will it really matter if Apple keeps waiting longer to push out its long-promised (and long-delayed) personalized Siri when people are not feeling optimistic about AI's impact on our society?
In this week's episode of One More Thing, which you can watch embedded above, I go over some of the recent reporting from Bloomberg that discusses leadership changes on the Siri team, and the differing views on what consumers want out of Siri. Should Apple approach AI in a way that makes Siri a home-grown chatbot, or just make it a better interface for controlling devices? (Maybe a bit of both.)
I expect a lot of griping after WWDC about the state of Siri and Apple's AI, with comparisons to other products like ChatGPT. But I hope we can use those gripes to voice what we really want in the next path for the assistant, by sharing our thoughts and speaking with our wallets. Do you want a Siri that's better at understanding context, or one that goes further and makes decisions for you?
It's a question I'll be dwelling on as Apple gives us the next peek into the future of iOS on Monday, and perhaps a glimpse of how the next Siri is shaping up. If you're looking for more One More Thing, subscribe to our YouTube page to catch Bridget Carey breaking down the latest Apple news and issues every Friday.

Business Insider · 4 hours ago
AI leaders have a new term for the fact that their models are not always so intelligent
As academics, independent developers, and the biggest tech companies in the world drive us closer to artificial general intelligence (a still-hypothetical form of intelligence that matches human capabilities), they've hit some roadblocks. Many emerging models are prone to hallucinations, misinformation, and simple errors.
Google CEO Sundar Pichai referred to this phase of AI as AJI, or "artificial jagged intelligence," on a recent episode of Lex Fridman's podcast. "I don't know who used it first, maybe Karpathy did," Pichai said, referring to deep learning and computer vision specialist Andrej Karpathy, who cofounded OpenAI before leaving last year. AJI is a metaphor for the trajectory of AI development: jagged, marked at once by sparks of genius and basic mistakes.
In a 2024 X post titled "Jagged Intelligence," Karpathy described the term as a "word I came up with to describe the (strange, unintuitive) fact that state of the art LLMs can both perform extremely impressive tasks (e.g. solve complex math problems) while simultaneously struggle with some very dumb problems." He then posted examples of state-of-the-art large language models failing to understand that 9.9 is bigger than 9.11, making "non-sensical decisions" in a game of tic-tac-toe, and struggling to count.
The issue is that unlike humans, "where a lot of knowledge and problem-solving capabilities are all highly correlated and improve linearly all together, from birth to adulthood," the jagged edges of AI are not always clear or predictable, Karpathy said.
Pichai echoed the idea. "You see what they can do and then you can trivially find they make numerical errors or counting R's in strawberry or something, which seems to trip up most models," Pichai said. "I feel like we are in the AJI phase where dramatic progress, some things don't work well, but overall, you're seeing lots of progress."
In 2010, when DeepMind launched, its team would talk about a 20-year timeline for AGI, Pichai said. Google acquired DeepMind in 2014. Pichai thinks it will take a little longer than that, but said that by 2030, "I would stress it doesn't matter what that definition is because you will have mind-blowing progress on many dimensions." By then, the world will also need a clear system for labeling AI-generated content to "distinguish reality," he said.
"Progress" is a vague term, but Pichai has spoken at length about the benefits we'll see from AI development. At the UN's Summit of the Future in September 2024, he outlined four specific ways AI would advance humanity: improving access to knowledge in native languages, accelerating scientific discovery, mitigating climate disaster, and contributing to economic progress.
Yahoo · 7 hours ago
Why Smart People Make Dumb Money Decisions, According to Humphrey Yang
According to the TIAA Institute-GFLEC Personal Finance Index, about half of American adults lack financial literacy, and even more fall short when it comes to decisions involving risk. And according to financial YouTuber Humphrey Yang, being smart can actually put you at greater risk of making poor choices.
In a recent YouTube video, Yang covered three biases that often trap smart people into money decisions that leave them poor. Even if you consider yourself intelligent and financially literate, that doesn't guarantee you'll do the best things with your money. Here are the signs to watch out for, and tips to avoid falling for them.
Authority bias is when you believe what a person, like a CEO, celebrity or financial advisor, says because of their influence or position. This can get you in trouble, since their advice might be completely wrong or not based on the reality of your situation.
Yang gave the example of quantum computing stock prices. In December 2024, Google's Willow announcement led many investors to buy these stocks, which boosted their prices. But in January 2025, Nvidia's CEO said the technology still had many years to go, and prices fell sharply. 'The truth is that many people probably didn't do any due diligence when it came to these stocks, and they probably bought them on a speculative future after the Willow announcement, and then they sold them on a whim after a negative comment,' Yang said.
To protect yourself from this bias, don't rely solely on what a single person says to do with your money. Yang said you should also forget whatever is special about that person to improve your objectivity, see what other people say about the topic, and trust your instincts.
If you often look only for information that aligns with your beliefs about money and brush off anything that says differently, you've fallen for confirmation bias.
Besides leading to bad money moves, this bias can make you an easier person to scam, according to the Ohio Attorney General. Yang explained, 'It's especially dangerous for those that are super logical because if you're a super methodical thinker, you can actually build a logical sounding argument to defend your pre-existing opinion.'
He gave an example of how this can play out with tech stocks. If you favor those stocks, you might watch for positive news reports, listen to influencers who are fans of tech, and focus on friends who profited big. You might not consider any bad earnings projections or the investors who went broke.
According to Yang, asking 'why' several times helps avoid bad decisions driven by confirmation bias, because it forces you to dig into your motivation and reasoning for making the money move. He also suggested writing down the decisions you make so you can later look back on why you did certain things and what you expected.
The third trap is overconfidence. 'This is arguably the most dangerous cognitive bias for smart people, and that's basically when people overestimate their knowledge, abilities and their predictions,' said Yang. Overconfidence bias can cause you to overlook risks, since you mistakenly think you have an advantage with money over other people, perhaps because of expertise in an unrelated area. Yang explained that this mistake played a role in various financial crises over the last few decades.
Being overconfident might also lead you to under-diversify and risk major losses. Yang gave the examples of copying Warren Buffett's portfolio with a limited number of investments, or investing heavily in your own employer's stock out of familiarity. To avoid letting overconfidence damage your finances, consider that some of your successes might have come from pure luck rather than wise choices.
Yang said you should also regularly compare your predictions to reality and stick to simple investing strategies, like using index funds instead of betting on the next big individual stock.