Is The AI Boom Headed For Its ‘Dark Fiber' Moment?

Forbes | March 24, 2025
LAS VEGAS, USA - JANUARY 06: Nvidia CEO Jensen Huang addresses participants at the keynote of CES 2025 in Las Vegas, Nevada, on January 6, 2025. During the presentation, Huang unveiled a range of new chips, software, and services, reinforcing Nvidia's leadership in artificial intelligence computing and its continued innovation across industries. (Photo by Artur Widak/NurPhoto via Getty Images)
The late '90s internet gold rush wasn't just one giant bubble of flashy startups; it was an intertwined cluster of smaller bubbles that fed off each other.
The poster children of the dot-com bust were household names like Pets.com and eToys.com — early e-commerce pioneers. Beneath them were telecoms that burned through hundreds of billions to provide internet access.
But peel back another layer, and you'll find a little-known frenzy that fizzled a few years after the dot-com crash — fiber optic companies that promised to build the rails for those telecoms.
Fiber optic cables weren't a new invention in the 1990s. They've been around since the 1970s, but earlier generations could carry only a fraction of the bandwidth modern fiber can.
The breakthrough was a technology called Wavelength Division Multiplexing (WDM), which allowed multiple wavelengths — or data streams — to travel through a single fiber instead of just one. That dramatically increased the bandwidth and cost-efficiency of fiber optic infrastructure.
Ciena was the first company to commercially deploy WDM.
In 1996, it landed a massive $200 million contract with Sprint — then America's third-largest carrier — to upgrade its fiber network to carry 16 wavelengths instead of one. Kevin Kimberlin, co-founder of Ciena, called it 'the real dawn of the internet.'
The advances in fiber optics marked a turning point, convincing investors the internet was real — and that its applications could scale far faster than previously imagined.
After Ciena's IPO in 1997, the stock surged 10x in a few years on projections pointing to a massive explosion in bandwidth. At the time, Forbes reported, 'the total bandwidth of communications systems will triple every year for the next 25 years.'
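Compounded, that projection is staggering. A quick back-of-the-envelope calculation (a sketch assuming a clean 3x each year, as the quote states) shows the scale implied:

```python
# Compound growth implied by "bandwidth will triple every year for 25 years".
factor = 3
years = 25
implied_growth = factor ** years
print(f"Implied total growth: {implied_growth:,}x")
# Implied total growth: 847,288,609,443x
```

In other words, the projection implied roughly an 847-billion-fold increase in total bandwidth, the kind of curve that demand never came close to matching.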
Meanwhile, fiber optics conferences — once-niche engineering gatherings — became must-attend events for the tech world.
In the lead-up to the dot-com bust, attendance at the Optical Fiber Communications (OFC) conference — the world's largest optics event — grew more than fivefold in just a few years.
Excitement over WDM's potential helped usher in a massive infrastructure boom in the late '90s.
In the five years following the Telecommunications Act of 1996, telecoms poured over $500 billion — mostly financed with debt — into laying fiber optic cables, adding switches, and building wireless networks.
Not only was most of that spending debt-financed, it was also frontloaded, meaning demand hadn't caught up yet.
After the dot-com bust, dreams of internet bandwidth tripling every few months quickly unraveled, exposing massive overinvestment in fiber. As demand for fiber collapsed, so did the companies that built it.
Many fiber firms went bankrupt. Those that survived shed nearly all their pre-dot-com value. Corning — then the world's largest producer of optical fiber — saw its stock crash from nearly $100 in 2000 to around $1 by 2002.
Meanwhile, Ciena's revenue fell from $1.6 billion to $300 million almost overnight, and its stock plunged 98% from its peak.
Both companies are still around today. Although not a household name, Ciena's fiber optic hardware and software remain among the main building blocks powering the modern internet. But even two decades later, its valuation hasn't come close to its dot-com peak.
The crash also left behind a glut of unused fiber that was later dubbed 'dark fiber.' By various estimates, 85% to 95% of the fiber laid in the '90s sat unused after the bubble burst.
The dot-com bubble didn't burst, and drag down companies like Ciena, because the internet was a fad. The internet did change the world. It just didn't do so as fast as its early pioneers promised.
Fast forward 25 years, and another multi-hundred-billion-dollar infrastructure boom is underway. This time it's to power AI, a technology once again promised to change the world for good.
Over the past year, Big Tech has gone full throttle on capital expenditures (capex) — much of it aimed at building massive data centers packed with specialized AI chips known as 'AI accelerators.'
America's hyperscalers — Amazon, Google, Meta, and Microsoft — have pledged to spend a record $320 billion on capex this year alone, a massive 40% jump from last year's record-setting $230 billion.
For perspective, that's roughly two-thirds of all inflation-adjusted capital spent on telecom fiber optics during the entire 1990s — over a single year and driven by just four companies.
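The year-over-year jump can be sanity-checked with simple arithmetic, using the figures in billions cited above:

```python
# Hyperscaler capex figures cited above, in billions of dollars.
capex_2025 = 320  # pledged by Amazon, Google, Meta, and Microsoft
capex_2024 = 230  # last year's record
jump = capex_2025 / capex_2024 - 1
print(f"Year-over-year increase: {jump:.0%}")
# Year-over-year increase: 39%
```

The raw figure works out to about 39%, which rounds to the roughly 40% jump cited above.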
Meta CEO Mark Zuckerberg threw down his own marker this year. On the company's most recent earnings call, he unveiled plans for Meta's largest AI data center yet, one with a footprint approaching the size of Manhattan.
'I announced last week that we expect to bring online almost 1GW of capacity this year, and we're building a 2GW — and potentially bigger — AI data center so large it could cover a significant part of Manhattan,' he said.
But it's not just the hyperscalers racing to build AI factories.
The Stargate Project — a $500 billion initiative first proposed during the Trump administration and now backed by OpenAI, SoftBank, Oracle, and MGX — aims to develop a nationwide network of AI data centers and energy facilities.
Then there's the United Arab Emirates with a recently unveiled plan to invest $1.4 trillion in U.S. AI infrastructure in partnership with BlackRock, Nvidia, and Microsoft.
At the core of all this multi-trillion-dollar spending is one single goal: build as many AI-ready data centers as possible before the competition beats you to it.
As Nvidia CEO Jensen Huang said during his 2025 GTC keynote, hand-coded software running on general-purpose CPUs is giving way to machine learning models powered by data centers filled with GPUs and AI accelerators — or as Huang calls them, 'AI factories.'
This paradigm shift, according to Huang and many experts, will demand orders of magnitude more computing power.
But just like in the 1990s — when no one could accurately predict how fast the internet would grow — no one really knows how fast AI will be adopted. And much like the fiber buildout of that era, today's multi-trillion-dollar AI spending is FOMO-driven and frontloaded on the assumption that AI's exponential growth curve will hold.
Even Big Tech execs admit as much.
No line captures this 'blank check' AI spending better than Google CEO Sundar Pichai's remark during Alphabet's earnings call last August: 'When you go through a curve like this, the risk of underinvesting is dramatically greater than the risk of overinvesting.'
Nvidia isn't the only company developing AI-specific hardware and software, but it's by far the biggest — thanks to its near-impenetrable moat in GPUs and AI accelerators. By various estimates, including from Mizuho Securities, Nvidia holds between 70% and 95% of the AI chip market.
That dominance is reflected in Nvidia's market cap. Since the release of ChatGPT — when the world suddenly realized AI is real — Nvidia's value has ballooned from $400 billion to $2.8 trillion, making it the third-largest company in the world.
But this isn't just about hype. Nvidia's sales exploded from $27 billion in 2022 to $130 billion in 2024, while profits rose from $9.8 billion to $72.9 billion over the same period.
At the 2025 GTC event, Nvidia CEO Jensen Huang said Nvidia shipped 1.3 million Hopper GPUs during peak sales to top cloud providers — AWS, Microsoft, Google, and Oracle. For comparison, Nvidia's next-gen Blackwell chips already hit 3.6 million units in their first year.
And just like Hopper, Blackwell is already sold out until October, according to the company's Q3 2024 earnings.
Wall Street doesn't expect the sales to stop here. Nvidia's current valuation is pricing in 56.7% revenue growth this year, pushing projected sales to just over $200 billion. In 2026, Wall Street analysts expect another $50 billion on top of that.
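Those Wall Street figures are internally consistent, as a quick check shows: applying the priced-in growth rate to Nvidia's 2024 sales lands just over $200 billion.

```python
revenue_2024 = 130.0      # $B, Nvidia's 2024 sales cited above
growth_priced_in = 0.567  # 56.7% growth implied by the current valuation
projected = revenue_2024 * (1 + growth_priced_in)
print(f"Projected sales: ${projected:.0f}B")
# Projected sales: $204B
```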
That's a lot of chips to sell, but Huang says we haven't even hit peak data center spending yet.
According to Dell'Oro Group — a market research firm Huang cited at GTC 2025 — annual data center capex could top $1 trillion within five years, largely driven by U.S. hyperscalers.
'We project that data center infrastructure spending could surpass $1 trillion annually within five years. While AI spending has yet to meet desired returns and efficiency improvements, long-term growth remains assured, driven by hyperscalers' multi-year capex cycles and government initiatives such as the $500 billion Stargate Project,' said Baron Fung, Senior Research Director at Dell'Oro Group.
Why do we need so much computing power, considering AI — like any technology — should become more efficient over time? After all, 400 million people use ChatGPT, and it works just fine on legacy data centers.
It's because AI is getting smarter. We've entered what Huang calls the 'agentic AI' era — where models reason, plan, verify results, and solve problems step by step. That means burning through way more tokens.
To illustrate, Huang showed two models solving the same task: creating a wedding seating arrangement for 300 guests, factoring in tradition, photogenic angles, and family tensions.
A traditional LLM (Llama 3 in this case) solved it in under 500 tokens but missed key social dynamics. DeepSeek's R1, a Chinese 'budget' reasoning model, nailed the task after a multi-step analysis but used 8,859 tokens.
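The gap between the two approaches is easy to quantify from the token counts above:

```python
llama_tokens = 500  # traditional single-pass LLM answer (upper bound cited)
r1_tokens = 8_859   # DeepSeek R1's multi-step reasoning run
ratio = r1_tokens / llama_tokens
print(f"Reasoning used ~{ratio:.1f}x more tokens")
# Reasoning used ~17.7x more tokens
```

Nearly 18x more tokens for one seating chart, which is the compute multiplier Huang's argument rests on.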
'Now we have AIs that can reason step by step by step using a technology called chain of thought, best-of-n, consistency checking, a variety of different path planning, a variety of different techniques. We now have AIs that can reason,' Huang said.
Ironically, it was DeepSeek R1 that sowed fears AI might require less compute, but Huang believes those fears were overblown.
'Last year, this is where almost the entire world got it wrong,' Huang said, referring to the DeepSeek scare. He argued that AI now requires 100 times more compute than it did a year ago, largely due to reasoning models like DeepSeek R1.
'The computation requirement, the scaling law of AI is more resilient and, in fact, hyperaccelerated. The amount of computation we need at this point as a result of agentic AI, as a result of reasoning, is easily a hundred times more than we thought we needed this time last year.'
Some industry insiders go as far as saying AI is moving so fast it might break Moore's Law.
'The amount of compute AI needs is doubling every three months,' said Nick Harris, CEO of Lightmatter, a $4.4 billion chipmaker. 'That's far faster than Moore's Law. It's going to break companies and economies.'
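Annualizing the two cadences makes the gap concrete. The sketch below assumes Moore's Law's commonly cited two-year doubling period, a figure not stated in the article:

```python
# Annualized growth implied by a given doubling period.
def annual_growth(doubling_months: float) -> float:
    return 2 ** (12 / doubling_months)

ai_demand = annual_growth(3)  # doubling every 3 months
moore = annual_growth(24)     # assumed ~2-year Moore's Law cadence
print(f"AI compute demand: {ai_demand:.0f}x/yr vs Moore's Law ~{moore:.2f}x/yr")
# AI compute demand: 16x/yr vs Moore's Law ~1.41x/yr
```

A three-month doubling means 16x growth per year, versus roughly 1.4x per year under a two-year cadence.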
Ross Sandler, who leads internet sector research at Barclays, thinks current capex spending may fall short of what compute-hungry models will need.
'Consensus estimates currently call for a little over $300 billion in total capex for these hyperscalers, about $100 billion of which is going into the AI semiconductor portion of that capex… that's probably not enough to satisfy the overall compute required.'
Sandler's team projects AI workloads could quadruple between 2025 and 2028. By then, inference — or AI reasoning — might make up more than half of all industry compute.
There's just one catch: that projection assumes AI models will continue burning through tokens like they do today.
On January 20, a then-obscure Chinese AI company released DeepSeek R1, an open-source AI model that could reason on par with OpenAI's models, a feat previously considered out of reach.
National news spun it as a Sputnik moment for America, with a rival superpower beating the U.S. in the next frontier of technology, but Wall Street got spooked by something else.
In the released paper, DeepSeek showed how it trained models at a fraction of OpenAI's costs — and using old-generation chips at that.
That was a wake-up call that the economics of AI training might be overblown. What followed was what Wall Street dubbed AI's 'Black Monday': a $600 billion wipeout in Nvidia's market value, the biggest single-day loss for a U.S. company in history.
At the core of the DeepSeek scare are reasoning AI models — the kind Nvidia spotlighted at the last GTC conference. Although they use more tokens, as Huang noted, these models are often much easier to train.
What DeepSeek showed is that AI models like R1 don't rely on brute-force scale. Instead, they learn how to think and know where to look for data rather than memorizing everything. That can drastically reduce the cost of AI training.
'The idea of spending $10 billion on a pre-training run on the next base model, to achieve very little incremental performance, would likely change,' Sandler's Barclays analysts wrote in a recent note.
Not only that, new AI models are becoming more efficient. Gil Luria, D.A. Davidson's head of technology research, told me that DeepSeek, while serving as much as one-third of OpenAI's query volume, is 10x more efficient while using old-generation Nvidia chips.
'On the final day of releases, DeepSeek disclosed that its inference stack, running on just 2,224 H800s, is serving 608B tokens per day, ~1/3 of what OpenAI is doing, but on a small fraction of the infrastructure. The number of tokens they are able to squeeze out of each H800 is roughly an order-of-magnitude improvement over leading open-source models running on H100s, and it highlights this emerging reality that algorithmic/architecture-level optimizations are now an equally critical frontier of development,' his analysts wrote in a note.
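The per-GPU throughput implied by those disclosed numbers is straightforward to compute:

```python
tokens_per_day = 608_000_000_000  # 608B tokens/day disclosed by DeepSeek
gpus = 2_224                      # H800s in the inference stack
per_gpu = tokens_per_day / gpus
print(f"~{per_gpu / 1e6:.0f}M tokens per H800 per day")
# ~273M tokens per H800 per day
```

Roughly 273 million tokens per chip per day, which is the order-of-magnitude gap over leading open-source stacks that the analysts highlight.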
These efficiencies, while good for the advancement and adoption of AI technology, aren't what chipmakers like Nvidia want to see, because much of Nvidia's growth banks on continued capex investment by major tech players.
UBS estimates that nearly all of Nvidia's data center revenue growth this year and next is tied to hyperscaler capex spending.
Then there's the question of ROI because the economics of AI still don't quite add up. Even OpenAI isn't breaking even on compute costs, according to CEO Sam Altman, despite charging $200 a month for its pro-tier plan.
By Luria's estimates, America's largest software companies — including OpenAI, Salesforce and Adobe — have only generated a total of $10–20 billion in revenue from their AI products.
'The reality is that the revenue from selling actual AI products to a customer is still minuscule compared to the magnitude of the capex invested… We have invested almost half a trillion dollars and are still only getting around $10–20 billion of actual product sales off of that investment,' Luria told me.
Meanwhile, MIT Institute Professor and Nobel Laureate Daron Acemoglu predicts that AI will add just 1% to GDP growth and affect only 5% of jobs in the U.S. over the next 10 years.
He's not saying the technology lacks potential. In fact, he thinks it can replace 20% of jobs in the U.S. workforce — assuming replacement costs are excluded. But once you factor in those costs, only 5% of tasks can be profitably automated in the next decade.
Not only that, he notes that those profitably replaceable tasks are mostly concentrated in small and mid-sized businesses — not in Big Tech plowing hundreds of billions into AI.
Nobody doubts AI will change the world. The real question isn't whether it's a fad, but whether it can grow fast enough to close the gigantic mismatch between capex spending and expected future revenue.
After all, the poster children of the dot-com crash weren't wrong. We do buy pet food and toys online — and much more. They were just too early.
Chipmakers powering AI aren't fiber optic companies. And the AI revolution isn't the internet revolution. As the saying attributed to Mark Twain goes, history doesn't repeat itself, but it rhymes. And the narratives that have driven both spending sprees bear a striking resemblance.