AI: The Overinvestment Bubble Or The Fungible Opportunity?
Tom Traugott, SVP of Emerging Technologies at EdgeCore Digital Infrastructure.
In 1986, Warren Buffett famously outlined his goal "to be fearful when others are greedy and to be greedy only when others are fearful."
Several weeks after Nvidia's annual GTC conference, I can't help but think of this phrase when layering that week's excitement, growth and optimism around all things AI on top of the recent heightened economic uncertainty, market panic and fear at both the consumer and business levels.
At GTC, Nvidia's Jensen Huang captured this tension with a few key quotes of his own.
On the optimistic side: "The more you buy, the more you save. It's even better than that. Now, the more you buy, the more you make." On the disruptive side: "I'm the chief revenue destroyer," a nod to the major performance gains of the new Blackwell chips over the previous Hopper generation, whose value has plummeted as a result (perhaps not unlike the price of a 2024 car once the 2025 models are announced).
On January 27, 2025, markets reacted negatively to the wider availability of Chinese firm DeepSeek's low-cost R1 model, prompting pundits to amplify concerns about an "AI bubble" of overinvestment. Those concerns persist amid ongoing economic uncertainty, with revenue generation an increasing focal point versus the R&D-like investment in training large models. Yet the rapid evolution of every AI model, from ChatGPT and Gemini to Grok and Claude, not to mention the promise of an agentic service such as Manus AI, points to only greater innovation and capability to come.
What's a data center developer or cloud provider to do? Lessons from past turbulent times may be instructive, and for that, my mind goes back to 2008 and the Great Financial Crisis.
In tracking the sale of Lehman Brothers' two data centers, Rich Miller, founder and former editor-at-large of Data Center Frontier, pointed out that "the $330 million valuation for the two data centers is also higher than the $250 million valuation of Lehman's North American investment banking and trading unit." For me, this marked a turning point for the data center industry. Why?
When the GFC hit in 2008, enterprises accelerated the outsourcing of IT infrastructure to third parties, choosing to focus on core business competencies and to preserve capital for revenue generation and business expansion rather than for noncore facilities. Internet and tech companies, meanwhile, leased data center capacity from the outset, focusing on value creation up the stack.
I'm a believer in the continued value of technology assets. While the AI boom has put pressure on the adaptive reuse and value of existing facilities, it has also heightened the need to dramatically increase investment in the data center and energy infrastructure that will support what's next. But what is the sweet spot for investment that isn't misguided?
For that, a more recent answer emerges from an article in the MIT Technology Review describing the glut of data centers in China prompted by the launch of ChatGPT in late 2022. To prepare for the AI boom, China mobilized to spur significant investment, and from 2023 to 2024, over 500 new development projects were announced across the country, with 150 built by 2024. A critical insight from the article was that the surge of development came from 144 companies targeting LLM training, but that at the end of 2024, only 10% of those companies (roughly 14) were still focused on LLM development.
What's the lesson here? It's one that was very much present at GTC 2025 and that Jensen Huang highlighted repeatedly: AI's revenue-generating promise manifests itself in inference. This is what makes China's overbuild instructive; much of it happened in remote locations purpose-built for asynchronous training that lacked the proximity and low-latency connectivity inference demands.
A panel of top hyperscalers at GTC, discussing lessons learned from building 100,000-plus GPU clusters, generally reached consensus on the right answer: fungibility of infrastructure.
The panel discussed this emerging best practice, explaining that a large cluster of GPUs may need to serve multiple purposes and may end up doing so dynamically, shifting between training and inference within the same cluster. GPU buildouts should therefore include from the outset the higher memory and storage that inference requires, along with the associated power and cooling needed to expand their usability. Furthermore, the data center campus may continue to integrate with traditional cloud services, making fungibility a key consideration there as well. The sketch below illustrates the scheduling idea in miniature.
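To make fungibility concrete, here is a minimal, hypothetical Python sketch of a scheduler that shifts GPU nodes between training and inference pools as demand changes. The names (GpuNode, Cluster, rebalance) and the memory figures are illustrative assumptions, not any hyperscaler's actual system.

```python
# Hypothetical sketch: reallocating nodes of one GPU cluster between
# training and inference pools as demand shifts. Illustrative only.
from dataclasses import dataclass, field


@dataclass
class GpuNode:
    node_id: str
    memory_gb: int          # inference (KV caches, batch serving) favors high memory
    role: str = "training"  # "training" or "inference"


@dataclass
class Cluster:
    nodes: list[GpuNode] = field(default_factory=list)

    def pool(self, role: str) -> list[GpuNode]:
        return [n for n in self.nodes if n.role == role]

    def rebalance(self, inference_demand: float) -> None:
        """Resize the inference pool to match demand, a fraction from 0.0 to 1.0."""
        target = round(len(self.nodes) * inference_demand)
        current = len(self.pool("inference"))
        if current < target:
            # Promote high-memory training nodes to inference first.
            candidates = sorted(self.pool("training"),
                                key=lambda n: n.memory_gb, reverse=True)
            for node in candidates[: target - current]:
                node.role = "inference"
        elif current > target:
            # Return the lowest-memory surplus nodes to training.
            candidates = sorted(self.pool("inference"),
                                key=lambda n: n.memory_gb)
            for node in candidates[: current - target]:
                node.role = "training"


# Example: a six-node cluster starts all-training, then a daytime
# traffic spike pulls half the nodes over to serving inference.
cluster = Cluster([GpuNode(f"gpu-{i}", memory_gb=m)
                   for i, m in enumerate([80, 80, 141, 141, 192, 192])])
cluster.rebalance(inference_demand=0.5)
print([(n.node_id, n.role) for n in cluster.nodes])
```

The design choice mirrors the panel's point: nodes provisioned from day one with the memory, power and cooling headroom that inference needs can be repurposed on the fly, so the same capital can serve training runs one hour and revenue-generating inference the next.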
So, to be greedy in the current climate of fear, my recommendation is to follow Jensen's advice: Buy more to make more, but make data center investments in locations that provide access to scale, density and low-latency proximity to key population centers and cloud regions. This ensures a multipronged path to revenue, which is the pragmatic answer in times of economic uncertainty and affirms that investment in AI infrastructure isn't a bubble but a prudent decision.