
TikTok, Tariffs and Luxury's Fake News Problem
Listen to and follow 'The Debrief': Apple Podcasts | Spotify | Overcast

Background:
A strange new genre of TikTok videos is challenging long-held assumptions about how luxury products are made. Often shot in anonymous Chinese factories, these videos claim that the so-called 'superfakes' flooding the market are indistinguishable from, and sometimes made in the same factories as, high-end bags from the likes of Chanel or Louis Vuitton.
While all evidence points to these claims being false, the repetition of these videos has amplified a growing narrative: that luxury pricing is inflated, quality is slipping and production secrets are being exposed. Fuelled further by the US-China tariff dispute and the allure of buying a $10,000 bag for $300, this narrative is resonating with a social media audience increasingly disillusioned with luxury's mystique.
In this episode, BoF's chief sustainability correspondent Sarah Kent joins hosts Sheena Butler-Young and Brian Baskin to break down what's really happening behind the scenes — and why silence might not be a viable strategy for brands much longer.

Key Insights:

TikTok's 'superfake' narrative may be fiction, but it's feeding real consumer doubt. While only a few viral TikTok videos explicitly claim to produce fakes in the same factories as luxury goods, that idea has travelled widely and taken root. 'It is supremely unlikely that any factory that had a real relationship with any luxury brand would go on TikTok to market superfakes,' Kent notes. Yet the repetition of these claims underscores luxury's ongoing transparency issue. In the absence of accessible facts, falsehoods thrive.
Today's best craftsmanship isn't always in Europe, as high-quality manufacturing has shifted globally. 'For instance, if you were making performance footwear or sneakers in particular, China, Cambodia, and Vietnam are probably the best factories you can find in the world to do that,' Kent explains. 'If you want to make a luxury product of that quality, you probably don't want to make that in France or Italy.'
The fake bag narrative is irresistible but damaging to luxury. Even those who know the claims are likely untrue find them hard to shake. 'It's a delicious narrative,' Kent says. One that plays into an existing story of overpricing, declining quality, and aloofness in luxury. Brands have long relied on mythology and mystique. But as Kent notes, that strategy is less effective in a social media age, where misinformation travels fast and reputations can erode overnight.
Consumers are questioning whether luxury is worth the price, and Kent says consumer doubt 'isn't going away'. Luxury brands need to explain more clearly why their products carry such high price tags in order to slow an erosion of trust that has accelerated since the pandemic, as prices rose and quality concerns mounted. 'If brands aren't giving compelling information that explains where their stuff is made and why it's valued in this manner then those questions aren't going to fade,' Kent warns.



Forbes
17 minutes ago
AI Safety: Beyond AI Hype To Hybrid Intelligence
The artificial intelligence revolution has reached a critical inflection point. While CEOs rush to deploy AI agents and boast about automation gains, a sobering reality check is emerging from boardrooms worldwide: GPT-4o answers incorrectly on roughly 61 percent of questions in SimpleQA, a factuality benchmark developed by OpenAI, and even the most advanced AI systems fail basic reliability tests with alarming frequency. In a recent op-ed, Dario Amodei, Anthropic's CEO, called for regulating AI, arguing that voluntary safety measures are insufficient. Meanwhile, companies like Klarna — once poster children for AI-first customer service — are quietly reversing course on their AI-agent-only approach and rehiring human representatives. These aren't isolated incidents; they're the tip of the iceberg, signaling a fundamental misalignment between AI hype and AI reality.

Today's AI safety landscape resembles a high-stakes experiment conducted without a safety net. Three competing governance models have emerged: the EU's risk-based regulatory approach, the US's innovation-first decentralized framework, and China's state-led centralized model. Yet none adequately addresses the core challenge facing business leaders: how to harness AI's transformative potential while managing its probabilistic unpredictability.

The stakes couldn't be higher. Four out of five finance chiefs consider AI "mission-critical," while 71% of technology leaders don't trust their organizations to manage future AI risks effectively. This paradox — simultaneous dependence and distrust — creates a dangerous cognitive dissonance in corporate decision-making.

AI hallucinations remain a persistent and worsening challenge in 2025: artificial intelligence systems confidently generate false or misleading information that appears credible but lacks factual basis.
Recent data reveals the scale of the problem: in the first quarter of 2025 alone, close to 13,000 AI-generated articles were removed from online platforms due to hallucinated content, while OpenAI's latest reasoning systems show hallucination rates of 33% for the o3 model and a staggering 48% for o4-mini when answering questions about public figures. The legal sector has been particularly affected, with more than 30 instances documented in May 2025 of lawyers submitting evidence that featured AI hallucinations. These fabrications span domains, from journalism, where ChatGPT falsely attributed 76% of quotes from popular journalism sites, to healthcare, where AI models might misdiagnose medical conditions. The phenomenon has become so problematic that 39% of AI-powered customer service bots were pulled back or reworked due to hallucination-related errors, highlighting the urgent need for better verification systems and user awareness when interacting with AI-generated content.

The future requires a more nuanced and holistic approach than the traditional either-or perspective. Forward-thinking organizations are abandoning the binary choice between human-only and AI-only approaches. Instead, they're embracing hybrid intelligence — deliberately designed human-machine collaboration that leverages each party's strengths while compensating for their respective weaknesses. Mixus, which went public in June 2025, exemplifies this shift. Rather than replacing humans with autonomous agents, its platform creates "colleague-in-the-loop" systems in which AI handles routine processing while humans provide verification at critical decision points. This approach acknowledges a fundamental truth that the autonomous-AI evangelists ignore: AI without natural intelligence is like building a Porsche and handing it to someone without a driver's license. The autonomous vehicle industry learned this lesson the hard way.
After years of promising fully self-driving cars, manufacturers now integrate human oversight into every system. The most successful deployments combine AI's computational power with human judgment, creating resilient systems that gracefully handle edge cases and unexpected scenarios. LawZero is another initiative in this direction; it seeks to promote "scientist AI" as a safer, more secure alternative to many of the commercial AI systems being developed and released today. Scientist AI is non-agentic, meaning it doesn't act autonomously but instead responds to human input and goals. The underpinning belief is that AI should be cultivated as a global public good — developed and used safely in the service of human flourishing. It should be prosocial.

While media attention focuses on AI hallucinations, business leaders face more immediate threats. Agency decay — the gradual erosion of human decision-making capabilities — poses a systemic risk as employees become overly dependent on AI recommendations. Mass-persuasion capabilities enable sophisticated social-engineering attacks. Market concentration in AI infrastructure creates single points of failure that could cripple entire industries. Some 47% of business leaders name people using AI without proper oversight as one of their biggest fears in deploying AI in their organizations. This fear is well-founded: organizations implementing AI without proper governance frameworks risk not just operational failures but legal liability, regulatory scrutiny, and reputational damage.

Double literacy — investing in both human literacy (a holistic understanding of self and society) and algorithmic literacy — emerges as our most practical defense against AI-related risks. While waiting for coherent regulatory frameworks, organizations must build internal capabilities that enable safe AI deployment.
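The hybrid pattern described above, where AI handles routine processing and humans verify at critical decision points, could be sketched roughly as follows. All names are hypothetical illustrations, not Mixus's or anyone else's actual API: the gate simply escalates anything low-confidence or high-stakes to a human reviewer.

```python
from dataclasses import dataclass


@dataclass
class AIResult:
    """A hypothetical AI output with self-reported metadata."""
    answer: str
    confidence: float   # model-reported confidence, 0.0 to 1.0
    high_stakes: bool   # e.g. a legal, medical, or financial decision


def route(result: AIResult, threshold: float = 0.9) -> str:
    """Return 'auto' to ship the AI answer, 'human' to escalate."""
    if result.high_stakes or result.confidence < threshold:
        return "human"  # critical decision point: a person verifies
    return "auto"       # routine processing: the AI proceeds alone


# Routine, confident output ships automatically; anything uncertain
# or high-stakes goes to the human colleague in the loop.
print(route(AIResult("refund approved", 0.97, high_stakes=False)))  # auto
print(route(AIResult("contract clause OK", 0.97, high_stakes=True)))  # human
print(route(AIResult("diagnosis guess", 0.55, high_stakes=False)))  # human
```

The design choice is the point: the threshold and the high-stakes flag are policy decisions made by people in advance, so the human is designed into the system rather than bolted on after a failure.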
Human literacy encompasses emotional intelligence, critical thinking, and ethical reasoning — uniquely human capabilities that become more valuable, not less, in an AI-augmented world. Algorithmic literacy involves understanding how AI systems work, their limitations, and their appropriate use cases. Together, these competencies create the foundation for responsible AI adoption. In healthcare, hybrid systems have begun to transform patient care by letting practitioners spend more time with patients while AI handles routine tasks, improving outcomes and reducing burnout. Some business leaders are also embracing the hybrid paradigm, with companies that incorporate AI agents as coworkers gaining competitive advantages in productivity, innovation, and cost efficiency.

Practical Implementation: The A-Frame Approach

If you are a business reader and leader, you can start building AI safety capabilities in-house today using the A-Frame methodology: four interconnected practices that create accountability without stifling innovation.

Awareness requires mapping both AI capabilities and failure modes across technical, social, and legal dimensions. You cannot manage what you don't understand. This means conducting thorough risk assessments, stress-testing systems before deployment, and maintaining current knowledge of AI limitations.

Appreciation involves recognizing that AI accountability operates across multiple levels simultaneously. Individual users, organizational policies, regulatory requirements, and global standards all influence outcomes. Effective AI governance requires coordinated action across all these levels, not isolated interventions.

Acceptance means acknowledging that zero-failure AI systems are mythical. Instead of pursuing impossible perfection, organizations should design for resilience — systems that degrade gracefully under stress and recover quickly from failures.
This includes maintaining human oversight capabilities, establishing clear escalation procedures, and planning for AI system downtime.

Accountability demands clear ownership structures defined before deployment, not after failure. This means assigning specific individuals responsibility for AI outcomes, establishing measurable performance indicators, and creating transparent decision-making processes that can withstand regulatory scrutiny.

The AI safety challenge isn't primarily technical — it's organizational and cultural. Companies that successfully navigate this transition will combine ambitious AI adoption with disciplined safety practices. They'll invest in double-literacy programs, design hybrid intelligence systems, and implement the A-Frame methodology as standard practice. The alternative — rushing headlong into AI deployment without adequate safeguards — risks not just individual corporate failure but systemic damage to AI's long-term potential. As the autonomous vehicle industry learned, premature promises of full automation can trigger public backlash that delays beneficial innovation by years or decades.

Business leaders face a choice: they can wait for regulators to impose AI safety requirements from above, or they can proactively build safety capabilities that become competitive advantages. Organizations that choose the latter — investing in hybrid intelligence and double literacy today — will be best positioned to thrive in an AI-integrated future while avoiding the pitfalls that inevitably accompany revolutionary technology transitions. The future belongs not to companies that achieve perfect AI automation, but to those that master the art of human-AI collaboration. In a world of probabilistic machines, our most valuable asset remains deterministic human judgment — enhanced, not replaced, by artificial intelligence.
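The "degrade gracefully" idea behind the Acceptance practice can be sketched in the same spirit: when the AI backend keeps failing, the request falls back to a human review queue instead of erroring out. Every function and parameter name here is an illustrative assumption, not a real system's interface.

```python
# Hedged sketch of graceful degradation under AI failure (hypothetical names).
# Retry a flaky AI backend a bounded number of times; on repeated failure,
# route the question to a human queue instead of failing the request.

def answer_with_fallback(question, ai_call, human_queue, retries=2):
    """Return ('ai', answer) on success, or ('human', None) after escalation."""
    for _ in range(retries):
        try:
            return ("ai", ai_call(question))
        except Exception:
            continue  # transient failure: try again up to the retry budget
    human_queue.append(question)  # planned degradation path, not a crash
    return ("human", None)


queue = []
# A healthy backend answers normally...
print(answer_with_fallback("q1", lambda q: "ok", queue))  # ('ai', 'ok')
# ...while a persistently failing one escalates to the human queue.
print(answer_with_fallback("q2", lambda q: 1 / 0, queue))  # ('human', None)
print(queue)  # ['q2']
```

The escalation procedure and downtime plan the article calls for are exactly the `except` branch and the queue: failure handling is designed in before deployment rather than improvised after an outage.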

Epoch Times
25 minutes ago
Canada, China Agree to ‘Regularize' High-Level Talks After Carney's First Official Call With Beijing
Canada and China have agreed to 'regularize' high-level talks between the two countries following a call between Prime Minister Mark Carney and Chinese Premier Li Qiang on June 5, the prime minister's office said. 'The leaders exchanged views on bilateral relations, including the importance of engagement, and agreed to regularize channels of communication between Canada and China,' the statement says.

During the conversation, Carney and Li also discussed trade between the two countries, and 'committed their governments' to cooperating to address the fentanyl crisis, according to the PMO's statement. It also says Carney raised 'trade irritants' affecting agricultural and food products, referring to tariffs Beijing recently imposed on Canada. The statement also says Carney raised 'other issues,' without providing further details. The PMO confirmed to the media this was the first conversation Carney held with Beijing since becoming prime minister.

Speaking further about the call, Carney said on June 6 that the discussion was 'the start of a process of recalibrating the relationship with China.'

'They are our second-largest trading partner, the second-largest trading partner for Canada,' Carney said. 'We have a number of trade disputes with China. Farmers across this country, fishers across this country are being affected by Chinese tariffs. People across this country have been affected by fentanyl and its precursors. Those are issues I raised directly, and we spoke at length about our concerns there, and have initiated processes, including ministerial-level dialogue on trade and other issues. So there are important issues with China that we need to address.'

Saskatchewan Premier Scott Moe and Manitoba Premier Wab Kinew have asked Ottawa to engage Beijing to end its tariffs on canola products.
Alberta Premier Danielle Smith has in turn made a similar appeal.

Canada-China Relations

Ottawa-Beijing relations, already strained in recent years, have further deteriorated amid renewed trade tensions after China imposed new tariffs on Canadian goods earlier this year. Beijing's latest measures include a 100 percent tariff on Canadian canola oil, oil cakes, and pea imports, as well as 25 percent levies on Canadian seafood and pork. Beijing has characterized its tariffs as a response to Canadian trade measures.

U.S. Ambassador to Canada Pete Hoekstra said recently that the United States wants Canada to align with its policies on China. 'The President has made it very, very clear. The No. 1 challenge to America's security, to its safety and prosperity is China,' he said in an interview with The Globe and Mail. 'We're looking for, for people who will confront the challenges with China with us.'

Beijing-Ottawa relations began to deteriorate in 2018, when China detained two Canadian citizens after Canada arrested Huawei executive Meng Wanzhou. Tensions rose further after intelligence leaks were reported by Canadian media outlets starting in late 2022 about extensive interference by Beijing in Canada's democracy. This prompted a public inquiry into the matter, which ultimately identified China as 'the most active perpetrator of foreign interference targeting Canada's democratic institutions,' according to the Foreign Interference Commission's final report published earlier this year.

Omid Ghoreishi and Isaac Teo contributed to this report.


Newsweek
25 minutes ago
Rescue Dog Was So Scared, She Couldn't Leave Crate—Now She's Unrecognizable
A rescue dog too afraid to leave her kennel finally turned a corner with her foster family.

When TikTok user @kierstenlynnb first rescued a white pit bull named Snow, the dog refused to leave her kennel. Kiersten, the foster mom, shared in the first part of her June 2 video Snow's reluctance to receive any attention: Snow cowered in the back, avoiding eye contact when Kiersten reached out her hand. Snow's nervousness is understandable; she had been abandoned and tied to a pole in Houston, Texas, Kiersten told Newsweek via TikTok.

Kiersten received a call about Snow and went to rescue her. From there, she welcomed Snow into her home with open arms and patience as the dog learned to trust again. Snow went from not wanting to leave the safety of her crate to understanding she's capable of being loved. Kiersten said it took about three to four weeks for Snow to finally come out of her kennel. Her new spot: the bed. The second part of the clip shows Snow happily lounging in the foster mom's bed, tail wagging. She went from being afraid to leave the kennel to being afraid to leave the comfort of the bed, a night-and-day transformation. The caption on the video reads: "Round of applause for Snows transformation."

Screenshots from a June 2 TikTok video of a rescue pit bull afraid to leave her kennel after being brought to a foster home. @kierstenlynnb/TikTok

Snow is one of the many rescue dogs Kiersten works with. She runs an independent rescue organization, where Snow is available for adoption. "It's so rewarding," she said. "The transformations are my favorite."
Viewer Reactions

The video had drawn more than 92,500 views as of Friday, and TikTok viewers expressed relief that this dog is no longer suffering and is relearning what it means to be loved. People flooded the comment section with praise for saving her.

"How can anyone hurt something so beautiful? Thank you for giving her a loving home," wrote one user, while another echoed: "...thank you for giving her the chance to know how she should be treated." A third person commented: "The 1st video is hurtful. She's avoiding eye contact and looks nervous. Thank you so much for showing her what love and caring hands are." Someone else said: "She's like, 'Please don't take me back to that bad, scary place.' Sweet girl. Thank you for saving her."

Do you have funny and adorable videos or pictures of your pet you want to share? Send them to life@ with some details about your best friend, and they could appear in our Pet of the Week lineup.