
The AI Hype Trap: Why Most CEOs Struggle To Unlock Real Business Value
Diganta Sengupta is a seasoned technology leader with deep expertise in artificial intelligence, generative AI, cloud computing and blockchain.
While collaborating with clients on cutting-edge AI initiatives, I've had a front-row seat to the rapidly evolving landscape of generative AI (GenAI). There's no doubt that it's a transformative force, and the excitement is palpable.
Leaders see GenAI as a powerful enabler of innovation, efficiency and even cultural change within their organizations. But beneath the surface of this enthusiasm, a more sobering reality has started to emerge.
On one engagement, I watched leadership grow enthusiastic about leveraging AI to unlock insights from massive operational datasets, but reality set in quickly. Despite deploying advanced models, the organization lacked the foundational elements for scalable impact.
In other words, data was siloed, inconsistent and often not AI-ready. Teams were stretched thin across too many pilot projects without clear alignment to business workflows. Flashy prototypes drew attention but failed to deliver lasting value without reengineering the underlying processes.
This mirrors a broader trend. Seventy percent of CEOs fear that flawed AI strategies could lead to their removal, while 54% worry that competitors may already have more advanced AI implementations.
AI systems learn from historical data. If that data encodes human biases against certain demographics, regions or business units, the AI will reproduce and even amplify those biases.
While developing a prototype using certain datasets for a utility company, for example, I grappled with significant challenges around bias and fairness. These issues persisted despite the presence of seemingly robust governance frameworks. As we trained our AI models on historical operational and customer data, I noticed embedded biases tied to region, demographics and internal processes. These biases not only surfaced in the model outputs but were, in some cases, amplified.
My two cents: CEOs must invest in bias-detection tools, diverse development teams and transparency mechanisms long before deploying AI at scale. Without these guardrails, AI initiatives stall as risk-averse stakeholders balk at unverified "black-box" systems.
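For teams acting on this, a first bias check doesn't require exotic tooling. The sketch below is a minimal illustration in Python, assuming a pandas DataFrame with a hypothetical grouping column (here, "region") and a binary model decision; it compares favorable-outcome rates across groups and flags any group that falls below the common four-fifths (80%) screening threshold.

```python
import pandas as pd

def disparate_impact_report(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.DataFrame:
    """Compare favorable-outcome rates across groups against the best-performing group.

    Assumes `outcome_col` is binary (1 = favorable decision). The 0.8 cutoff is the
    common "four-fifths rule" heuristic for flagging potential disparate impact.
    """
    rates = df.groupby(group_col)[outcome_col].mean().rename("favorable_rate")
    report = rates.to_frame()
    report["impact_ratio"] = report["favorable_rate"] / report["favorable_rate"].max()
    report["flagged"] = report["impact_ratio"] < 0.8
    return report.sort_values("impact_ratio")

# Hypothetical example: model decisions scored per customer record
decisions = pd.DataFrame({
    "region": ["north", "north", "south", "south", "south", "west"],
    "approved": [1, 1, 0, 0, 1, 1],
})
print(disparate_impact_report(decisions, group_col="region", outcome_col="approved"))
```

A report like this won't prove a model is fair, but it gives risk-averse stakeholders something concrete to review before deployment.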
In another project integrating a large language model (LLM)-powered chatbot with an enterprise ERP system, I encountered AI hallucinations as the model confidently generated inaccurate and misleading information about customer orders. Despite rigorous prompt engineering and system tuning, we noticed that the LLM occasionally fabricated responses about inventory levels or order status.
This experience echoed findings from a 2024 Boston Consulting Group survey, which revealed that while 75% of executives ranked AI among their top priorities, only 25% reported realizing substantial benefits from their AI initiatives.
Tackle hallucinations with robust validation pipelines, human-in-the-loop review for critical outputs and ongoing monitoring of model performance.
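To make that concrete, here is a minimal sketch of such a validation step, using hypothetical function and field names rather than the actual system we built: the chatbot's claimed order status is released to the user only if it matches the ERP system of record, and anything else is escalated for human review.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class ValidatedAnswer:
    text: str
    needs_human_review: bool
    reason: Optional[str] = None

def validate_order_status(
    llm_claim: dict,                                  # e.g. {"order_id": "SO-1042", "status": "shipped"}
    erp_lookup: Callable[[str], Optional[dict]],      # returns the ERP record, or None if not found
) -> ValidatedAnswer:
    """Only release an answer when the LLM's claim matches the system of record."""
    record = erp_lookup(llm_claim.get("order_id", ""))
    if record is None:
        return ValidatedAnswer("", True, "order not found in ERP")
    if record["status"] != llm_claim.get("status"):
        return ValidatedAnswer("", True, "status mismatch with ERP")
    return ValidatedAnswer(
        f"Order {record['order_id']} is currently {record['status']}.", False
    )

# Example wiring (hypothetical ERP client):
# answer = validate_order_status({"order_id": "SO-1042", "status": "shipped"}, erp_client.get_order)
```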
This is where the challenge becomes even more complex. In many of my AI pilots in the oil and gas sector, I've repeatedly seen issues like inconsistent formats, missing metadata and a lack of standardized governance across departments severely impact model performance. Despite having large volumes of rich data, much of it couldn't be used without extensive manual cleanup. Efforts to unify data governance were often sidelined in favor of launching high-profile AI initiatives.
A Harvard Business Review Analytic Services survey similarly found that most companies' data is largely not ready for enterprise-wide AI, citing poor data quality as a key barrier. Without strong cross-functional data stewardship and quality assurance, even the most advanced AI models fall short.
Before spending on fancy models, CEOs must champion cross-functional data governance, establishing practices such as common taxonomies, automated data-quality checks and centralized data platforms. Only then can AI be relied upon to deliver accurate, actionable insights.
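As one illustration of what automated data-quality checks can look like, the sketch below uses illustrative column names and thresholds (not any specific client's rules) to gate a dataset on schema, null rates and freshness before it reaches an AI pipeline.

```python
import pandas as pd

EXPECTED_SCHEMA = {"asset_id": "object", "reading": "float64", "recorded_at": "datetime64[ns]"}
MAX_NULL_RATE = 0.05          # reject columns with more than 5% missing values
MAX_STALENESS_DAYS = 7        # reject data older than a week

def quality_gate(df: pd.DataFrame) -> list[str]:
    """Return a list of data-quality violations; an empty list means the data passes."""
    issues = []
    for col, dtype in EXPECTED_SCHEMA.items():
        if col not in df.columns:
            issues.append(f"missing column: {col}")
        elif str(df[col].dtype) != dtype:
            issues.append(f"wrong type for {col}: {df[col].dtype} (expected {dtype})")
    for col in df.columns:
        null_rate = df[col].isna().mean()
        if null_rate > MAX_NULL_RATE:
            issues.append(f"{col}: {null_rate:.0%} nulls exceeds {MAX_NULL_RATE:.0%} limit")
    if "recorded_at" in df.columns and pd.api.types.is_datetime64_any_dtype(df["recorded_at"]):
        age_days = (pd.Timestamp.now() - df["recorded_at"].max()).days
        if age_days > MAX_STALENESS_DAYS:
            issues.append(f"data is {age_days} days stale")
    return issues
```

Checks like these are cheap to run on every refresh, and they turn "our data isn't AI-ready" from an anecdote into a measurable backlog.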
Working on the previously mentioned utility AI project also brought to light another critical and often underestimated concern: the security and governance challenges that surround enterprise AI deployments.
As we integrated sensitive operational and customer data into AI workflows, it became clear how vulnerable these systems can become without rigorous controls. Inadequate access management, insufficient encryption and lack of monitoring can create openings for potential ransomware attacks and unauthorized data exposure.
In one survey, 35% of respondents cited mistakes or errors with real-world consequences, and 34% pointed to not achieving expected value as top barriers. Both are often rooted in security vulnerabilities and governance shortcomings.
CEOs must elevate AI risk management to the same level as financial or operational risk. This includes rigorous model-risk frameworks, data-privacy impact assessments and alignment with evolving regulations such as the EU's AI Act.
To harness the full potential of AI, I recommend the following practical, accountable strategies that organizations can adopt to drive real, scalable impact:
• Establish cross-functional data governance. Form a governance council with IT, compliance and operations to ensure data ownership, accountability and consistent standards.
• Implement data quality controls. Deploy automated checks for outliers, schema validation and data freshness to improve input reliability and mitigate bias.
• Address LLM hallucinations with RAG. Combine retrieval-augmented generation (RAG), prompt chaining and fallback mechanisms to ground responses in verified data (see the sketch after this list).
• Align AI projects with business goals. Prioritize initiatives tied directly to KPIs such as safety and cost reduction, which improves adoption and leadership support.
• Pivot away from noncritical use cases. Reallocate resources from low-impact projects to high-impact workflows like downtime alerts for field engineers.
• Focus on responsible AI deployment. Emphasize transparency, accountability and strategic value delivery to build trust and ensure scalability.
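On the retrieval-augmented generation recommendation above, the basic pattern is straightforward. The sketch below uses placeholder retrieve and generate callables (assumed interfaces, not a particular vendor's API): ground the prompt in retrieved enterprise documents, and fall back to a human or a safe default when nothing relevant is found.

```python
from typing import Callable, Sequence

def answer_with_rag(
    question: str,
    retrieve: Callable[[str], Sequence[str]],   # placeholder: returns relevant text snippets
    generate: Callable[[str], str],             # placeholder: calls whichever LLM you use
    min_sources: int = 1,
) -> str:
    """Ground the model's answer in retrieved enterprise documents, with a safe fallback."""
    sources = retrieve(question)
    if len(sources) < min_sources:
        # Fallback mechanism: refuse to guess rather than risk a hallucinated answer.
        return "I could not find this in our records. Routing your question to a support agent."
    context = "\n\n".join(sources)
    prompt = (
        "Answer the question using ONLY the context below. "
        "If the context is insufficient, say you do not know.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    return generate(prompt)
```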
CEOs who view AI adoption as a multidimensional transformation rather than a plug-and-play technology will be the ones ready to move beyond the hype and truly harness AI's power. The future of competitive advantage lies not just in having AI, but in embedding it thoughtfully and responsibly into the fabric of the enterprise. This will help transform AI from a conceptual promise into a tangible asset that drives innovation and growth for their organizations.
Forbes Technology Council is an invitation-only community for world-class CIOs, CTOs and technology executives. Do I qualify?
