
Klarna Saved $2 Million After Cutting Ties With Salesforce
Klarna Group Plc Chief Executive Officer Sebastian Siemiatkowski said his company saved around $2 million after it severed ties with Salesforce Inc. in favor of data tools it built using artificial intelligence.
Previously, data on Klarna's relationships with merchants was spread over multiple platforms owned by Salesforce as well as in emails, calendars, and cloud documents, Siemiatkowski said at the SXSW London conference. Now the fintech giant is in the process of consolidating the data in one place and will use AI to make better sense of it.
Related Articles


Forbes
12 hours ago
AI Safety: Beyond AI Hype To Hybrid Intelligence
The artificial intelligence revolution has reached a critical inflection point. While CEOs rush to deploy AI agents and boast about automation gains, a sobering reality check is emerging from boardrooms worldwide: GPT-4o hallucinates on roughly 61% of questions in SimpleQA, a factuality benchmark developed by OpenAI itself, and even the most advanced AI systems fail basic reliability tests with alarming frequency. In a recent op-ed, Dario Amodei, Anthropic's CEO, called for regulating AI, arguing that voluntary safety measures are insufficient. Meanwhile, companies like Klarna, once poster children for AI-first customer service, are quietly reversing course on their AI-agent-only approach and rehiring human representatives. These aren't isolated incidents; they're the tip of the iceberg, signaling a fundamental misalignment between AI hype and AI reality.

Today's AI safety landscape resembles a high-stakes experiment conducted without a safety net. Three competing governance models have emerged: the EU's risk-based regulatory approach, the US's innovation-first decentralized framework, and China's state-led centralized model. Yet none adequately addresses the core challenge facing business leaders: how to harness AI's transformative potential while managing its probabilistic unpredictability.

The stakes couldn't be higher. Four out of five finance chiefs consider AI "mission-critical," while 71% of technology leaders don't trust their organizations to manage future AI risks effectively. This paradox of simultaneous dependence and distrust creates a dangerous cognitive dissonance in corporate decision-making.

AI hallucinations remain a persistent and worsening challenge in 2025: systems confidently generate false or misleading information that appears credible but lacks factual basis. Recent data reveals the scale of the problem. In the first quarter of 2025 alone, close to 13,000 AI-generated articles were removed from online platforms because of hallucinated content, while OpenAI's latest reasoning systems show hallucination rates reaching 33% for the o3 model and a staggering 48% for o4-mini when answering questions about public figures. The legal sector has been particularly affected, with more than 30 documented instances in May 2025 of lawyers submitting evidence that contained AI hallucinations. These fabrications span domains, from journalism, where ChatGPT falsely attributed 76% of quotes sampled from popular journalism sites, to healthcare, where AI models can misdiagnose medical conditions. The phenomenon has become so problematic that 39% of AI-powered customer service bots were pulled back or reworked because of hallucination-related errors, highlighting the urgent need for better verification systems and user awareness when interacting with AI-generated content.

The future requires a more nuanced and holistic approach than the traditional either-or perspective. Forward-thinking organizations are abandoning the binary choice between human-only and AI-only approaches. Instead, they're embracing hybrid intelligence: deliberately designed human-machine collaboration that leverages each party's strengths while compensating for their respective weaknesses.

Mixus, which went public in June 2025, exemplifies this shift. Rather than replacing humans with autonomous agents, its platform creates "colleague-in-the-loop" systems where AI handles routine processing while humans provide verification at critical decision points. This approach acknowledges a fundamental truth that the autonomous-AI evangelists ignore: AI without natural intelligence is like building a Porsche and handing it to someone without a driver's license.
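The coverage describes the colleague-in-the-loop pattern but no implementation, so here is a minimal, hypothetical Python sketch of the idea: the AI answers routine requests on its own, while anything low-confidence or touching a critical action is parked for human sign-off. Every name in it (AgentOutput, handle_request, CONFIDENCE_FLOOR) is invented for illustration; this is not Mixus's actual design.

    from dataclasses import dataclass

    # Hypothetical thresholds; tune per task. Not taken from any vendor's documentation.
    CONFIDENCE_FLOOR = 0.90                # below this, a human must verify
    CRITICAL_ACTIONS = {"refund", "account_closure", "legal_response"}

    @dataclass
    class AgentOutput:
        action: str        # what the AI proposes to do
        answer: str        # the drafted response
        confidence: float  # the model's calibrated confidence score

    def needs_human_review(out: AgentOutput) -> bool:
        """Gate: route to a person at critical decision points or low confidence."""
        return out.action in CRITICAL_ACTIONS or out.confidence < CONFIDENCE_FLOOR

    def escalate_to_human(out: AgentOutput) -> str:
        # A real system would enqueue a review task for a person; here we just mark it.
        return f"[PENDING HUMAN REVIEW] {out.answer}"

    def handle_request(out: AgentOutput) -> str:
        if needs_human_review(out):
            return escalate_to_human(out)  # human verifies, edits, or rejects
        return out.answer                  # routine case: AI handles it end to end

    print(handle_request(AgentOutput("faq", "Our store opens at 9am.", 0.97)))
    print(handle_request(AgentOutput("refund", "Refund of $120 approved.", 0.95)))

The split mirrors the article's argument: machine throughput on routine traffic, with deterministic human judgment reserved for the cases where a hallucination would be costly.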
The autonomous vehicle industry learned this lesson the hard way. After years of promising fully self-driving cars, manufacturers now integrate human oversight into every system. The most successful deployments combine AI's computational power with human judgment, creating resilient systems that gracefully handle edge cases and unexpected scenarios.

LawZero is another initiative in this direction: it seeks to promote "scientist AI" as a safer, more secure alternative to many of the commercial AI systems being developed and released today. Scientist AI is non-agentic, meaning it doesn't have agency or work autonomously; instead, it behaves in response to human input and goals. The underpinning belief is that AI should be cultivated as a global public good, developed and used safely toward human flourishing. It should be prosocial.

While media attention focuses on AI hallucinations, business leaders face more immediate threats. Agency decay, the gradual erosion of human decision-making capabilities, poses a systemic risk as employees become overly dependent on AI recommendations. Mass-persuasion capabilities enable sophisticated social-engineering attacks. Market concentration in AI infrastructure creates single points of failure that could cripple entire industries.

47% of business leaders name people using AI without proper oversight as one of their biggest fears in deploying AI in their organizations. The fear is well-founded: organizations implementing AI without proper governance frameworks risk not just operational failures but legal liability, regulatory scrutiny, and reputational damage.

Double literacy, investing in both human literacy (a holistic understanding of self and society) and algorithmic literacy, emerges as our most practical defense against AI-related risks. While waiting for coherent regulatory frameworks, organizations must build internal capabilities that enable safe AI deployment. Human literacy encompasses emotional intelligence, critical thinking, and ethical reasoning: uniquely human capabilities that become more valuable, not less, in an AI-augmented world. Algorithmic literacy involves understanding how AI systems work, their limitations, and appropriate use cases. Together, these competencies create the foundation for responsible AI adoption.

In healthcare, hybrid systems have begun to revolutionize patient care by letting practitioners spend more time with patients while AI handles routine tasks, improving care outcomes and reducing burnout. Some business leaders are also embracing the hybrid paradigm, with companies that incorporate AI agents as coworkers gaining competitive advantages in productivity, innovation, and cost efficiency.

Practical Implementation: The A-Frame Approach

If you are a business reader and leader, you can start building AI safety capabilities in-house today using the A-Frame methodology: four interconnected practices that create accountability without stifling innovation.

Awareness requires mapping both AI capabilities and failure modes across technical, social, and legal dimensions. You cannot manage what you don't understand. This means conducting thorough risk assessments, stress-testing systems before deployment, and maintaining current knowledge of AI limitations.

Appreciation involves recognizing that AI accountability operates across multiple levels simultaneously. Individual users, organizational policies, regulatory requirements, and global standards all influence outcomes. Effective AI governance requires coordinated action across all these levels, not isolated interventions.

Acceptance means acknowledging that zero-failure AI systems are mythical. Instead of pursuing impossible perfection, organizations should design for resilience: systems that degrade gracefully under stress and recover quickly from failures. This includes maintaining human-oversight capabilities, establishing clear escalation procedures, and planning for AI system downtime.

Accountability demands clear ownership structures defined before deployment, not after failure. This means assigning specific individuals responsibility for AI outcomes, establishing measurable performance indicators, and creating transparent decision-making processes that can withstand regulatory scrutiny.
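The A-Frame is a management practice rather than software, but one plausible way to operationalize it is a pre-deployment checklist that blocks launch until all four practices are evidenced. The Python sketch below is a hypothetical encoding under that assumption; every field name is invented, and a real program would tailor the criteria.

    from dataclasses import dataclass, field

    @dataclass
    class AFrameReview:
        """Hypothetical pre-deployment record covering the four A-Frame practices."""
        # Awareness: capabilities and failure modes mapped before launch
        risk_assessment_done: bool = False
        stress_tested: bool = False
        # Appreciation: accountability coordinated across levels
        levels_covered: set = field(default_factory=set)  # e.g. {"user", "org", "regulatory"}
        # Acceptance: designed for resilience, not mythical zero failure
        escalation_procedure: str = ""
        downtime_plan: str = ""
        # Accountability: ownership defined before deployment, not after failure
        owner: str = ""
        kpis: list = field(default_factory=list)

        def gaps(self) -> list:
            """Return unmet requirements; an empty list means ready to deploy."""
            unmet = []
            if not (self.risk_assessment_done and self.stress_tested):
                unmet.append("Awareness: assess risks and stress-test first")
            if not {"user", "org", "regulatory"} <= self.levels_covered:
                unmet.append("Appreciation: cover user, org and regulatory levels")
            if not (self.escalation_procedure and self.downtime_plan):
                unmet.append("Acceptance: plan for graceful failure and downtime")
            if not (self.owner and self.kpis):
                unmet.append("Accountability: name an owner and measurable KPIs")
            return unmet

    review = AFrameReview(risk_assessment_done=True, stress_tested=True,
                          owner="VP of Operations", kpis=["escalation rate"])
    print(review.gaps())  # still missing: the Appreciation and Acceptance items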
The AI safety challenge isn't primarily technical; it's organizational and cultural. Companies that successfully navigate this transition will combine ambitious AI adoption with disciplined safety practices. They'll invest in double-literacy programs, design hybrid intelligence systems, and implement the A-Frame methodology as standard practice.

The alternative, rushing headlong into AI deployment without adequate safeguards, risks not just individual corporate failure but systemic damage to AI's long-term potential. As the autonomous vehicle industry learned, premature promises of full automation can trigger public backlash that delays beneficial innovation by years or decades.

Business leaders face a choice: they can wait for regulators to impose AI safety requirements from above, or they can proactively build safety capabilities that become competitive advantages. Organizations that choose the latter, investing in hybrid intelligence and double literacy today, will be best positioned to thrive in an AI-integrated future while avoiding the pitfalls that inevitably accompany revolutionary technology transitions.

The future belongs not to companies that achieve perfect AI automation, but to those that master the art of human-AI collaboration. In a world of probabilistic machines, our most valuable asset remains deterministic human judgment, enhanced rather than replaced by artificial intelligence.
Yahoo
13 hours ago
Humans provide necessary 'checks and balances' for AI, says Lattice CEO
Of all the words in the dictionary, Sarah Franklin says "balance" is perhaps her favorite, especially when it comes to companies embracing AI. Franklin leads the Jack Altman-founded employee performance software company Lattice, which is now worth $3 billion. Both onstage at SXSW London and in conversation with TechCrunch, she spoke a lot about balance: the opportunities in finding it, and the risks of not having it during this AI revolution.

"We put people first," Franklin told TechCrunch, referring to Lattice, which has started to adopt more AI and automation features. Although some companies are touting AI as a way to replace massive numbers of workers, some tech leaders are speaking more openly about the importance of striking a balance at their companies: retaining human employees while augmenting them with AI assistants and "agents."

At SXSW London, Franklin said that looking to fully replace human workers might seem like a good idea in the short term for cost-saving reasons, but such a move might not actually be attractive to customers. "It's important to ask yourself, 'Are you building for the success of the AI first [or are] you building for the success of the people and your customers first?'" she said, adding that trust is the most important currency any founder or startup has, and that building trust with consumers is paramount. "It's good to have efficiency, but you don't want to trade out trust."

Franklin also stressed the importance of transparency, accountability, and responsibility when it comes to AI. Leaders need to be transparent with employees about what the AI is doing, the AI must be narrowly applied to a particular goal so people understand how it works, and humans must ultimately be held accountable for what the AI impacts. "Otherwise, we are then in service of the AI versus the AI being in service of us," Franklin continued.

In an interview with TechCrunch after her SXSW appearance, Franklin said Lattice has built an AI HR agent that gives proactive insights and assists employees in one-on-one meetings. The company also has a platform where Lattice clients can create their own custom agents for their businesses.

Franklin was adamant that humans must have oversight of any AI technology implemented by a company. "It's a way to just have the regular checks and balances that we're used to in our workforce," she told TechCrunch. She thinks the victors in this AI moment in history will be the ones who learn how to put people first. According to Franklin, it's one of the most important guardrails a company can have on AI.

"We all have a responsibility to make sure that we're doing this for the people of society," Franklin said. "Human connection cannot be replaced, and the winners are going to be the companies that understand that."
Yahoo
13 hours ago
Retailers turn to BNPL apps to ease cost-of-living strain
As rising prices squeeze household budgets on both sides of the Atlantic, major retailers in the UK and US are turning to Buy Now Pay Later (BNPL) services like Klarna, Affirm, PayPal and Afterpay to offer more flexible payment options. These services allow shoppers to split purchases into smaller instalments, often interest-free, appealing to consumers looking to manage spending without resorting to credit cards.

In the UK, high street brands such as John Lewis and ASDA have integrated BNPL at checkout. In the US, companies like Walmart and Amazon have partnered with BNPL providers to help customers spread the cost of everyday purchases and big-ticket items. Retailers on both sides of the Atlantic have reported increased online conversion rates and larger order values since adopting BNPL.

In the UK, John Lewis now offers Klarna as a payment method on its website, allowing customers to pay in three instalments. The retailer says this has helped boost sales of furniture and homeware, categories where customers typically spend more.

In the US, Affirm is widely used by major retailers including Walmart and Peloton. Walmart enables BNPL at checkout through Affirm for purchases over $144, including electronics, home goods and sports equipment. By offering payment flexibility, the retailer has made higher-value purchases more accessible to cash-strapped consumers.

Retailers receive full payment upfront from BNPL providers, while the customer repays the loan in instalments. This setup protects businesses from payment default while offering customers a way to manage costs over time.

BNPL has become especially popular among younger shoppers, many of whom prefer avoiding traditional credit. In the UK, fashion brands like ASOS, H&M and JD Sports report strong uptake among Gen Z customers, who use Klarna and Clearpay (the UK version of Afterpay) to budget purchases. Similarly, in the US, Klarna and Afterpay have grown rapidly among millennial and Gen Z users, who use these services for everything from clothing to tech. Amazon introduced BNPL through Affirm in 2021, offering interest-free payments on a range of goods, from laptops to kitchen appliances.

Retailers benefit from increased reach into this demographic, often using BNPL firms' marketing platforms to target shoppers directly. However, customer approval rates and loan eligibility vary, raising concerns about transparency and potential exclusion of those with weaker credit histories.

While BNPL offers convenience and budgeting support, regulators and consumer advocates in both countries have warned of the risks. In the UK, the Financial Conduct Authority is planning stricter oversight of the sector after concerns about debt accumulation and lack of affordability checks. In the US, the Consumer Financial Protection Bureau has also launched investigations into BNPL providers, citing the potential for consumers to take on multiple loans across platforms without a clear understanding of repayment obligations.

Retailers using BNPL must now navigate a shifting regulatory landscape. They are expected to ensure transparency around terms and conditions, and to help customers understand the consequences of missed payments. As inflation continues to impact household finances, BNPL remains a double-edged sword: a useful tool for short-term flexibility, but one that may pose longer-term financial risks if not used responsibly.
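To make the pay-in-three mechanics described above concrete, here is a small Python sketch that splits a basket into equal interest-free instalments, rounding to the cent and loading any remainder onto the first payment. It is a generic illustration of the BNPL split, not Klarna's or Affirm's actual schedule logic; providers differ on timing, rounding and fees.

    from decimal import Decimal, ROUND_DOWN

    def pay_in_instalments(total: str, n: int = 3) -> list:
        """Split `total` into n equal interest-free instalments.

        Each instalment is rounded down to the cent, and the leftover
        pennies go on the first payment so the parts sum exactly.
        """
        amount = Decimal(total)
        base = (amount / n).quantize(Decimal("0.01"), rounding=ROUND_DOWN)
        schedule = [base] * n
        schedule[0] += amount - base * n  # absorb the rounding remainder up front
        return schedule

    # A $144 basket (Walmart's Affirm threshold mentioned above) split three ways:
    print(pay_in_instalments("144.00"))  # [Decimal('48.00'), Decimal('48.00'), Decimal('48.00')]
    print(pay_in_instalments("100.00"))  # [Decimal('33.34'), Decimal('33.33'), Decimal('33.33')]

As the article notes, this split is only what the shopper sees; the retailer is paid in full upfront, with the BNPL provider carrying the schedule and the default risk.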
"Retailers turn to BNPL apps to ease cost-of-living strain" was originally created and published by Retail Insight Network, a GlobalData owned brand. The information on this site has been included in good faith for general informational purposes only. It is not intended to amount to advice on which you should rely, and we give no representation, warranty or guarantee, whether express or implied as to its accuracy or completeness. You must obtain professional or specialist advice before taking, or refraining from, any action on the basis of the content on our site.