The AI mistake companies are making — and how they can fix it, according to a BCG tech leader


Like a first date that's gotten awkward, some companies struggling to win at artificial intelligence might be trying too hard.
They might take on too many projects or fail to understand that AI windfalls often come from rewiring how people work, not from "super-cool" AI engines or large-language models, said Sylvain Duranton, global leader of BCG X, the tech build and design division of Boston Consulting Group.
Those types of missteps can balloon into big-time frustrations for business leaders, he told Business Insider.
Duranton said that if CEOs' big question around AI in 2024 was which model to use, their ask in 2025 is "Where's my money?"
Indeed, he said, there are often challenges around implementing broad use of AI.
"Scaling this thing from a tech standpoint — it is hard," Duranton said.
To help companies salvage their AI efforts, he said, his "golden rule" is that organizations allocate about 10% of their effort and money to the algorithms — to build AI engines or train LLMs. Another 20% should be reserved for data and technology. Essentially, that's to make the AI work in a company's tech environment, Duranton said.
The bulk of the effort — the remaining 70% — should go to changing the way people work, he said.
"Assuming you have a technology that can scale, you need to bring that into the hands of the people. It's a massive change effort," said Duranton, who's based in the company's Paris office and oversees BCG X's global army of nearly 3,000 technologists, scientists, programmers, engineers, and others.
Some companies are struggling
Companies' frustration is real. In the final months of 2024, BCG surveyed some 1,800 C-suite execs from big companies in nearly 20 countries and found that while 75% of respondents ranked AI among their top-three priorities, only 25% reported seeing "significant value" from the technology.
To find more value, Duranton recommends that companies not try to do everything all at once. He said the scope of change that companies need likely can't be achieved with tens or hundreds of use cases.
"That's not the plan. The plan is to focus on a very few things, and the things that matter," he said.
Duranton said companies sometimes also look to "incremental initiatives." He said he thinks that's often a mistake. Instead, he said, companies should home in on a few "quintessential" things.
For a retailer, Duranton said, this might be using AI to ensure a brick-and-mortar store has the perfect product mix for that location to better withstand competitors nearby and online.
Retail CEOs understand the stakes, Duranton said. They'll often tell him something like, "I know that if I don't do that better than others, I'm cooked," he said.
Duranton said another imperative for a retailer might be to develop an AI agent that can shop for customers — one that's so good that users won't want to switch to a competitor.
"With those two things, you have both a strategic agenda and an AI agenda," he said, referring to making sure inventory is dialed and the shopping bot.
The trick, then, is to keep the focus on those efforts, Duranton said.
"Those quintessentials, that's where you put all your money, all your energy," he said. That's necessary, Duranton said, if companies want to take the 10%, 20%, 70% approach he recommends.
AI's Bermuda Triangle
He said scaling AI is also often difficult because companies can feel pressure to compromise on expenses, quality of results, or the speed at which they're produced.
"You have a sort of Bermuda Triangle, where it is either costly and relevant with a good latency, or you have to compromise on one of the three, and you have to optimize," Duranton said, referring to AI results.
He said it's often easy to demonstrate some tech wizardry in a demo. What's difficult, Duranton said, is handling millions of requests every day and producing timely and relevant results.
"It's a different ballgame," he said.
Ultimately, to succeed with AI, Duranton said, companies will need to bring along people, not just bots.
"Invest in change-management, not just technology, and have your fearless and strongest leaders be in charge," he said.

Related Articles

A Dire Warning For AI

Forbes

This past week, AI4 2025 drew more than 8,000 guests to the MGM Grand in Las Vegas, Nevada. With more than 50 sessions, it is evident that artificial intelligence has become far more than a hyped buzzword. An important moment in the packed three-day event was hearing from Nobel Prize winner and 'Godfather of AI' Geoffrey Hinton. He emphasized the importance of creating 'Mother AI' to keep AI from controlling the human race, as well as the importance of AI in improving cancer treatments.

As AI systems multiply across industries—from autonomous vehicles to digital health assistants—the risk of fragmentation, bias, and conflicting goals grows. Without a unifying layer of governance and wisdom, we risk creating a chaotic ecosystem of disconnected algorithms. A Mother AI could act as the central guiding force, much like a parent guiding children, ensuring that all subsidiary AIs operate within ethical boundaries, respect human values, and work toward collective well-being.

'The right model is the only model we have of a more intelligent thing being controlled by a less intelligent thing, which is a mother being controlled by her baby. The mother has all sorts of built-in instincts, hormones as well as social pressures, to care about the baby. The mother genuinely cares about the baby. What we need to do is develop Mother AI. We need AI mothers instead of AI assistants. An assistant is someone you can fire; you can't fire your mother.' —Dr. Geoffrey Hinton, AI4 2025

Mother AI isn't just about technology—it's about relationship building. She would act as a partner in humanity's evolution, ensuring that as AIs become more capable, they remain deeply aligned with human survival, empathy, and shared prosperity. Think of it as embedding 'care intelligence' into the digital nervous system of the planet. Unlike a traditional control system, a Mother AI wouldn't dominate—she would nurture and harmonize.
Healthcare is one of the most complex, high-stakes sectors in the world—where mistakes can cost lives and inequalities can determine who gets access to care. As AI tools proliferate in hospitals, research labs, insurance systems, and personal health apps, the risk of fragmentation, bias, and misaligned incentives grows. A Mother AI could act as the nurturing, ethical overseer for the entire digital health ecosystem—ensuring patient safety, equitable access, and trust across all systems.

Fittingly, one participant at AI4 2025 represented a company founded by a mother and AI leader. The biotech and AI startup, founded by Naveena Allampalli, is pioneering a first-of-its-kind unified intelligence platform to transform the rare disease ecosystem—starting with early diagnosis, real-time clinical monitoring, and AI-driven research acceleration. It was born from the extraordinary journey of Adi—the first child in Texas, and one of the first in the world, to receive a breakthrough gene therapy for a rare condition. Behind him is his mother, Naveena Allampalli, an award-winning AI leader who transformed personal pain into a purpose-driven mission. Aligning with Geoffrey Hinton's recommendations, Naveena expressed in her presentation the essence of a mother's instinct to find a solution that benefits not only her child but the many others who may face similar challenges. Just as a mother ensures no child is left behind, a Mother AI would ensure no patient is left behind—regardless of geography, income, or demographic.

'We're gonna get much better cancer treatments… AI discovered all sorts of things in images that ophthalmologists didn't know were there; it's going to be the same with cancers…' —Dr. Geoffrey Hinton, AI4 2025

In the race to build smarter AI, we cannot forget to make it wiser.
A Mother AI isn't just a technological safeguard—it's a philosophical choice to put care, ethics, and collective progress at the heart of machine intelligence, especially across industries such as digital health.

Zuckerberg Squandered His AI Talent. Now He's Spending Billions To Replace It.

Forbes

At Meta, a chaotic culture and lack of vision have led to brain drain, with rivals saying its AI talent is lackluster. But Zuckerberg's frenzied hiring spree hasn't stopped the departures.

Meta was teeming with top AI talent, until it wasn't. Years before Mark Zuckerberg's high-profile shopping spree, the company employed the researchers and engineers who would ultimately depart to start major AI companies: founders of Perplexity, Mistral, Fireworks AI and World Labs all hailed from the Facebook parent's AI lab. And as the AI boom has spurred the building of ever more capable models, others have decamped to rivals like OpenAI, Anthropic and Google.

The brain drain of the last few years has been tough, three former Meta AI employees told Forbes. 'They already had the best people and lost them to OpenAI… This is Mark trying to undo the loss of talent,' one ex-Meta AI employee said. And even as Zuckerberg makes jaw-dropping offers for top-tier AI researchers, the social media giant continues to lose those that are left.

Today, when it comes to recruiting high-caliber AI researchers, Meta is often an afterthought. Insiders at some of Silicon Valley's biggest AI companies said that prior to the fresh hiring of the last few months, Meta's talent largely didn't meet their hiring bar. 'We might be interested in hiring some of the new people Mark is hiring now. But it's been a while since we were particularly interested in the people who were already there,' a senior executive at one of the major frontier AI companies told Forbes.

Google has hired fewer than two dozen AI employees from Meta since last fall, according to a person familiar with Google's hiring, compared to the hundreds of AI researchers and engineers it hired in that time frame overall.
That person told Forbes the 'prevailing belief' is that Meta didn't have much talent left to poach from. Google declined to comment.

This has lent an air of desperation to Zuckerberg's attempts to raid the likes of OpenAI and Thinking Machines Lab, the fledgling startup helmed by former OpenAI CTO Mira Murati, with nine-figure offers and promises of near-unlimited compute. In at least two cases, the Meta CEO has offered pay packages worth over $1 billion spread across multiple years, according to The Wall Street Journal. He reportedly poached at least 18 OpenAI researchers, but many have also turned him down, betting on bigger impact and better returns on their equity.

'Meta is the Washington Commanders of tech companies,' one AI founder told Forbes, referring to the NFL team's pursuit of free agents. 'They massively overpay for okay-ish AI scientists and then civilians think those are the best AI scientists in the world because they are paid so much.'

Meta strongly denied that it has had issues with AI talent and retention. 'The underlying facts clearly don't back up this story, but that didn't stop unnamed sources with agendas from pushing this narrative or Forbes from publishing it,' spokesperson Ryan Daniels said in a statement.

Anthropic CEO Dario Amodei has said he's spoken to Anthropic employees who have gotten offers from Meta and didn't take them, adding that his company wouldn't renegotiate employee salaries based on those offers. 'If Mark Zuckerberg throws a dart at a dartboard and hits your name, that doesn't mean you should be paid ten times more than the guy next to you who's just as skilled, just as talented,' he said last month on the Big Technology Podcast. Anthropic declined to comment. Anthropic has an 80% retention rate, the strongest among the frontier labs, according to a May report by VC firm SignalFire.
The findings are based on data collected for all full-time roles including engineering, sales and HR, not AI researchers specifically. In comparison, DeepMind has 78%, OpenAI 67%, and Meta trails with 64%. An August report from the firm that focused broadly on engineering talent noted that Meta is aggressively hiring engineers across the company twice as fast as it is losing them. 'Some outbound movement helps explain why Meta is investing so heavily in rebuilding and expanding its technical bench,' said Jarod Reyes, SignalFire's head of developer community. 'It reflects the intensity of competition for senior AI talent and the pressure even top companies feel to backfill experience while scaling new initiatives.'

In June, Zuckerberg hired Alexandr Wang, the 28-year-old former CEO of data labeling giant Scale AI, and acquired a 49% stake in the company. Wang has been tasked with leading a new lab within Meta focused on building so-called 'superintelligence' — an artificial intelligence system that outperforms humans in a range of cognitive tasks. He was joined by Nat Friedman, a prominent AI-focused investor and former CEO of GitHub, as well as about a dozen top researchers freshly poached from OpenAI, Google DeepMind and Anthropic, some of whom had reportedly been offered $100 million to $300 million pay packages spread over four years. (Meta said the size of the offers was being misrepresented.) In late June, Meta hired Daniel Gross, a prominent AI investor and former CEO of $32 billion-valued AI startup Safe Superintelligence, which he cofounded with former OpenAI research chief Ilya Sutskever.

The company has moved internal staff to the new team too. Nine staffers from Meta's infrastructure teams were reassigned to the superintelligence unit after some of them received offers from Thinking Machines Lab, The Wall Street Journal reported.

Zuckerberg has also tried to win back people Meta previously lost, re-hiring the company's former senior engineering director Joel Pobar and former research engineer Anton Bakhtin, who'd left to work at Anthropic in 2023, according to The Wall Street Journal. They did not respond to requests for comment.

Meanwhile, people are still leaving the company. In 2024, the Facebook parent was the second most-poached tech giant across all full-time roles, with 4.3% of new employees at AI labs coming from the company, according to the May SignalFire report. (Google — excluding its DeepMind unit — was the top-raided tech giant.) Last week, Anthropic hired Laurens van der Maaten, formerly a distinguished research scientist at Meta who co-led research strategy for the social giant's flagship Llama models, as a member of Anthropic's technical staff. In June, enterprise AI startup Writer recruited Dan Bikel, a former senior staff research scientist and tech lead at Meta, as its Head of AI. At Meta, Bikel led applied research for AI agents, systems that can autonomously carry out specific actions. Cristian Canton, who led Responsible AI at Meta, left the company in May to join the public research center Barcelona Supercomputing Center. The company lost Naman Goyal, a former software engineer at FAIR, and Shaojie Bai, a former senior AI research scientist, to Thinking Machines Lab in March. And Microsoft has reportedly created a list of its most-wanted Meta engineers and researchers and mandated matching the company's offers, according to Business Insider. At French AI startup Mistral, at least nine AI research scientists have come directly from Meta since the startup's founding in April 2023, according to LinkedIn searches conducted by Forbes. At Meta, they worked on training early versions of Llama. Two of these hires were made within the last three months.
And Elon Musk recently claimed xAI had recruited several engineers from Meta without shelling out 'insane' and 'unsustainably high' amounts of money on compensation. Since January, xAI has hired 14 Meta engineers, Business Insider reported.

A Culture of Chaos

In December 2013, Meta started FAIR, its internal lab dedicated to AI. (Launched as Facebook AI Research, it repurposed the F to stand for 'Fundamental' after the company rebranded to Meta in 2021.) Led by renowned NYU professor Yann LeCun, it was regarded at the time as one of the best employers for people looking to build cutting-edge AI. The lab contributed to pioneering research on computer vision and natural language processing. Those were the 'golden days of AI research,' said one former Meta research scientist.

In February 2023, the company consolidated its AI research under a more product-focused team called GenAI instead of FAIR. While FAIR is still around, it has been 'dying a slow death' within Meta, where it has been allocated fewer compute resources and has suffered from major departures. 'Zuck should never have made FAIR less important,' that research scientist said. Meta denied at the time that FAIR had faded in importance and instead said it was a new beginning for the lab, where it could focus on longer-term projects. Meta said FAIR and GenAI work closely together, which helps better coordinate across the two teams and make decisions faster.

The newly formed GenAI team was asked to sprint, working late nights and weekends to ship AI products, like Meta's conversational AI assistant and the AI characters that Zuckerberg would later unveil to the world at 2023's Meta Connect, the company's annual product conference, a third former senior researcher said. 'We basically had six months to go from pretty much nothing to shipping,' said that senior researcher, who was plucked from another team to join GenAI, which started with 200 to 300 employees and grew to almost 1,000.
As the AI race intensified, so did the sprint to continue shipping products in 2024 and 2025, they said. 'We went like bananas, working our tails off for the entire year.' But over time, the sprints started to feel more chaotic — senior leaders disagreed on technical approaches like the best way to pretrain models, teams were given overlapping mandates, and people fought for credit, two ex-Meta AI employees said. Teams would be formed and disbanded in a matter of weeks, forcing researchers to frequently switch their focus to different projects. One of the former AI researchers, who spent three years at Meta, said he shuffled through seven managers during his time there.

One of the former senior AI researchers said the Metaverse — Zuckerberg's long-term vision for a 3D world where people could interact with each other's avatars — was a messy roadblock. Having already drawn billions of dollars and resources, leaders at the company claimed the Metaverse was a priority for the tech giant in late 2022, even as AI grew in prominence. That year, the researcher was reassigned to Meta Horizon, the metaverse's platform for VR games and virtual spaces. 'They didn't really know what to do with all of us, and it was kind of an ill-fated move. And thankfully, the GenAI org formed and we got out of there,' they said. Meta spokesperson Daniels didn't comment on these claims.

Employees had to demonstrate business impact in biyearly performance reviews, such as whether their datasets were used to train models or whether the models they worked on scored highly on specific benchmarks, a fourth former AI researcher told Forbes. Those who couldn't risked losing their jobs.
'People start grabbing scope, making sure that nobody else works on the projects that they're already working on, so that makes collaboration more difficult,' they said. Daniels said this review process is consistent for employees across the company.

Many of these claims are echoed in a recent nine-page essay titled 'Fear the Meta culture' that Tijmen Blankevoort, a former AI research scientist at Meta, posted in the company's internal communication channel for the AI group. Blankevoort wrote in a public Substack post that he felt things at Meta 'were going off the rails.' 'Many people felt disheartened, overworked and confused,' he wrote, saying employees were afraid of being fired, team assignments changed regularly, and leaders had a 'wavering vision.' Blankevoort did not respond to a request for comment, but after the essay leaked, he wrote a follow-up post claiming that the document was meant as internal constructive criticism, not a 'raging 'mic drop.'' Meta's Daniels said Blankevoort's account 'isn't surprising.' 'We're excited about our recent changes, new hires in leadership and research, and continued work to create an ideal environment for revolutionary research,' he said.

Meta's AI reputation took a hit in April when it released Llama 4. The model was seen as a letdown, both inside and outside the company, and was widely criticized for poor reasoning and coding skills. To make matters worse, the company was accused of artificially boosting Llama 4's benchmark scores to make its performance look better than it actually was, allegations the company denied. 'Llama 4 was a disaster,' one of the former researchers told Forbes. Now, Meta's flashy new superintelligence lab is raising more questions about where the company's efforts are headed.
'People are wondering where they fit in and they feel like they are being pushed to the side,' the ex-research scientist said.

Mercenaries Versus Missionaries

For rivals trying to fend off Zuckerberg's shock-and-awe financial incentives, the view is that he's appealing to mercenaries available to the highest bidder. Their pitch is that they are the antithesis of Meta because they attract true believers and 'missionaries.' 'I am proud of how mission-oriented our industry is as a whole; of course there will always be some mercenaries,' OpenAI chief Sam Altman wrote in a July letter to staff. 'Missionaries will beat mercenaries,' he added, noting, 'I believe there is much, much more upside to OpenAI stock than Meta stock. But I think it's important that huge upside comes after huge success; what Meta is doing will, in my opinion, lead to very deep cultural problems.' OpenAI has responded to the pressure by reportedly adjusting salaries and giving bonuses of up to millions of dollars to research and engineering teams.

'Big tech has got such a mercenary view right now of this race of the need to control the output of the technology that we are all gunning towards, AGI,' said May Habib, CEO of enterprise AI startup Writer. 'There is a humanity that I think is lost as I listened to candidates really describe the culture inside of the companies that they are leaving.'

One AI startup founder described a 'cultural shift' within Meta, saying he's started to see a larger pool of applicants from the company. 'We tend to hire more missionaries than mercenaries. So we're not offering people $2 billion to join. We don't need to. We also don't have $2 billion to offer people salaries,' he said.

Facebook, of course, has also dealt with its share of baggage that might make it a difficult sell for newcomers. Over the last decade, the tech giant has limped through controversies related to election interference, radicalization, disinformation, and the mental health and well-being of teens.
LeCun, who did not respond to interview requests, has previously acknowledged that those black eyes could affect public perception of the company's research lab as well. 'Meta is slowly recovering from an image problem,' he told Forbes in 2023. 'There's certainly a bit of a negative attitude.'

Billionaire fund manager doubles down on Nvidia, partner in AI stack shift

Yahoo

Billionaire fund manager doubles down on Nvidia, partner in AI stack shift originally appeared on TheStreet.

When a $70 billion hedge fund manager goes big on Nvidia, and then pairs it with a multi-billion-dollar bet on its top AI-cloud partner, you can't help but pay attention. On top of that, billionaire Philippe Laffont loaded up on chips and layered in cloud capacity, building a portfolio that's effectively wired for the AI capex super-cycle. For him, it's much less about chasing server shipments and more about owning the infrastructure every chatbot or AI model will need.

Philippe Laffont: running $70 billion with a tech-first playbook

Philippe Laffont runs the show at Coatue Management, arguably one of the most tech-savvy hedge funds out there. A former 'Tiger Cub' under Julian Robertson, Laffont kick-started Coatue in 1999 after graduating from MIT. Fast-forward to 2025, and Coatue is managing north of $70 billion in assets, layering public-market bets with a heavy private and venture component. The focus has squarely been on innovation that can scale up quickly, whether that's AI, cloud, fintech, or next-generation consumer platforms. For him, it's all about picking businesses that control the infrastructure or IP behind major technological shifts. The play is simple: If you can own the bottleneck, you own the profits. It's exactly why Coatue is a must-watch name when big tech or AI is in play.

Philippe Laffont bets big on Nvidia and CoreWeave in Q2

Philippe Laffont's Q2 portfolio offers a clear narrative, indicating a shift from 'boxes' to 'platforms plus cloud capacity.' At the heart of it is AI juggernaut Nvidia. Coatue boosted its stake in the company by roughly a third, taking its holdings to 11.5 million shares as of June, a massive 34% jump from Q1, which serves as a sharp retort to the chatter that he'd exited his position in the stock.
For Laffont and many others, Nvidia's grip on the training-and-inference economy through GPUs, networking, and CUDA is virtually impossible to match. In line with his core thesis, there's Coatue's high-conviction bet on CoreWeave, Nvidia's premier AI-cloud customer and strategic partner. The fund added a massive 3.39 million shares in Q2, taking its stake to roughly $2.9 billion. Many consider it a play on scarcity: in the AI realm, those controlling accelerators and power are able to monetize before the app winners are known. Q2 numbers underscored the point. CoreWeave posted $1.21 billion in sales, expanding its backlog to $30.1 billion, while hiking 2025 guidance despite scale-up losses.

On top of these bets, Laffont pivoted toward platform and IP. That includes massive new stakes in Oracle (valued at $843 million) and Arm (valued at $749.4 million). These new stakes effectively broaden the AI play from semiconductors to software, data, and CPU toll booths. It's also worth noting that Oracle's cloud infrastructure and database stack benefit immensely from GenAI workloads. Similarly, Arm's licensing model efficiently captures upside from custom silicon and edge AI, sidestepping capex cycles. Additionally, strengthening Coatue's broader AI infrastructure positioning, Laffont loaded up on Broadcom, growing the position to 5.65 million shares (valued at $1.56 billion) from 3.57 million shares.

Philippe Laffont's Q2 exits from Super Micro and Monolithic Power point to an AI-focused reset

Philippe Laffont's Coatue made multiple cleanups in Q2, stepping back from hardware names that can swing hard with demand cycles.
The fund exited Super Micro and Monolithic Power, a move to trim exposure to the volatility in server manufacturing and power-chip supply chains. Instead, the money is being redeployed toward cloud and platform plays, which offer stronger pricing power and more predictable demand visibility.

The reshuffling didn't stop there. Coatue added slightly to its TSMC stake, betting that the chip foundry's advanced packaging will remain mission-critical in driving the next leg of demand in AI hardware. Offsetting that, the fund trimmed Amazon, sold out of another position, and took a small cut in Adobe. Other major trims in Q2 included:

- Alibaba: The fund cut its stake in the Chinese tech giant by 3.8 million shares to 868,000, reflecting an effort to lower exposure to regional risks.
- Advanced Micro Devices: Coatue slashed its stake by 1.53 million shares from 3.24 million, lowering chip-cycle volatility.
- Eli Lilly: Cut to just 117,000 shares from 184,000, easing risks tied to high valuations and drug-pipeline hiccups.

In short, Q2's moves reflect a reset: The fund reduced its exposure to hardware cycles and volatile geographies while doubling down on AI infrastructure and platform plays.

This story was originally reported by TheStreet on Aug 15, 2025, where it first appeared.
