Three reasons why AI's momentum could stall in 2025
Since 2023, the dominant narrative has been that the AI revolution will drive productivity and economic growth, paving the way for extraordinary technological breakthroughs. PwC, for example, projects that AI will add nearly $16 trillion to global GDP by 2030, a 14 per cent increase. Meanwhile, a study by Erik Brynjolfsson, Danielle Li, and Lindsey R. Raymond estimates that generative AI could boost worker productivity by 14 per cent on average and by 34 per cent for new and low-skilled workers.
Recent announcements by Google and OpenAI seem to support this narrative, offering a glimpse into a future that not long ago was confined to science fiction. Google's Willow quantum chip, for example, reportedly completed a benchmark computation, a task that would take today's fastest supercomputers ten septillion years (ten followed by 24 zeros), in under five minutes. Likewise, OpenAI's new o3 model represents a major technological breakthrough, bringing AI closer to the point where it can outperform humans in any cognitive task, a milestone known as 'artificial general intelligence.'
But there are at least three reasons why the AI boom could lose steam in 2025. First, investors are increasingly questioning whether AI-related investments can deliver significant returns, as many companies are struggling to generate enough revenue to offset the skyrocketing costs of developing cutting-edge models. While training OpenAI's GPT-4 cost more than $100 million, training future models will likely cost more than $1 billion, raising concerns about the financial sustainability of these efforts.
To be sure, investors are eager to capitalise on the AI boom, with venture capital firms investing a record $97 billion in US-based AI startups in 2024. But it appears that even industry leaders like OpenAI are burning through cash too quickly to generate meaningful returns, leading investors to worry that much of their capital has been misallocated or wasted. A back-of-the-envelope calculation suggests that a $100 billion investment in AI would require at least $50 billion in revenue to produce an acceptable return on capital, accounting for taxes, capital expenditures, and operating expenses. But the entire sector's annual revenues, according to my sources, total just $12 billion, with OpenAI accounting for roughly $4 billion. In the absence of a 'killer app' for which customers are willing to pay substantial sums, a significant portion of VC investments could end up worthless, triggering a decline in investment and spending.
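The back-of-the-envelope reasoning above can be checked with a few lines of arithmetic. This is a minimal sketch using only the article's own figures ($100 billion invested, $50 billion of revenue required for an acceptable return, $12 billion of sector revenue); no additional data is assumed.

```python
# Back-of-the-envelope check of AI capital returns, using the
# article's estimates. All dollar figures come from the text.

investment = 100e9        # capital deployed in AI ($)
required_revenue = 50e9   # revenue needed for an acceptable return,
                          # after taxes, capex, and operating expenses
sector_revenue = 12e9     # the sector's estimated annual revenue ($)
openai_revenue = 4e9      # OpenAI's estimated share ($)

shortfall = required_revenue - sector_revenue
coverage = sector_revenue / required_revenue

print(f"Revenue shortfall: ${shortfall / 1e9:.0f} billion")
print(f"Share of required revenue actually earned: {coverage:.0%}")
# → Revenue shortfall: $38 billion
# → Share of required revenue actually earned: 24%
```

On these numbers, the sector is earning roughly a quarter of the revenue its capital base would need to justify, which is the gap the author's warning turns on.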
Second, the enormous amounts of energy required to operate and cool massive data centers could impede AI's rapid growth. By 2026, according to the International Energy Agency, AI data centers will consume 1,000 terawatt-hours of electricity annually, exceeding the United Kingdom's total electricity and gas consumption in 2023. The consultancy Gartner projects that by 2027, 40 per cent of existing data centers will be 'operationally constrained' by limited power availability.
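To put the IEA projection in perspective, a quick conversion shows the continuous power draw that 1,000 terawatt-hours a year implies. Only the 1,000 TWh figure comes from the article; the 8,760 hours per year and the one-gigawatt reference plant are standard illustrative assumptions.

```python
# Rough scale check on the IEA projection cited above.

annual_demand_twh = 1_000   # projected AI data-centre demand, TWh/year (article)
hours_per_year = 8_760      # 24 hours * 365 days (assumption: non-leap year)

# Average continuous power draw implied by that annual demand
# (convert TWh to GWh, then divide by hours in a year).
average_power_gw = annual_demand_twh * 1_000 / hours_per_year

# Equivalent number of large (~1 GW) power plants running continuously.
plants_needed = round(average_power_gw)

print(f"Average draw: {average_power_gw:.0f} GW "
      f"(~{plants_needed} one-gigawatt plants running around the clock)")
# → Average draw: 114 GW (~114 one-gigawatt plants running around the clock)
```

An average draw on the order of 114 gigawatts, sustained around the clock, illustrates why Gartner expects power availability, not chips, to become the binding constraint.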
Third, large language models appear to be approaching their limits as companies grapple with mounting challenges like data scarcity and recurring errors. LLMs are primarily trained on data scraped from sources such as news articles, published reports, social media posts, and academic papers. But with a finite supply of high-quality information, finding new datasets or creating synthetic alternatives has become increasingly difficult and costly. Moreover, these models are prone to generating incorrect or fabricated answers ('hallucinations'), and AI companies may soon run out of the fresh data needed to refine them.
Computing power is also approaching its physical limits. In 2021, IBM unveiled a two-nanometer chip, roughly the size of a fingernail, capable of fitting 50 billion transistors and improving performance by 45 per cent compared to its seven-nanometer predecessor. While undeniably impressive, this milestone also raises an important question: Has the industry reached the point of diminishing returns in its quest to make ever-smaller semiconductors?
If these trends persist, the current valuations of publicly traded AI companies may not be sustainable. Notably, private investment is already showing signs of decline. According to the research firm Preqin, VC firms raised $85 billion in the first three quarters of 2024, a sharp drop from the $136 billion raised during the same period in 2023.
The good news is that should today's AI giants start to falter, smaller competitors could seize the opportunity and challenge their dominance. From a market standpoint, such a scenario could foster increased competition and reduce concentration, preventing a repeat of the conditions that allowed the so-called 'Magnificent Seven' (Alphabet, Amazon, Apple, Meta, Microsoft, Nvidia, and Tesla) to dominate the US tech industry.
Dambisa Moyo, an international economist, is the author of four New York Times bestselling books, including 'Edge of Chaos: Why Democracy Is Failing to Deliver Economic Growth – and How to Fix It' (Basic Books, 2018). Copyright: Project Syndicate, 2025. www.project-syndicate.org
