
Latest news with #ColetteKress

Zacks Investment Ideas feature highlights: NVIDIA, Meta Platforms and Tesla

Globe and Mail

a day ago


Zacks Investment Ideas feature highlights: NVIDIA, Meta Platforms and Tesla

For Immediate Release

Chicago, IL – May 30, 2025 – Today, Zacks Investment Ideas feature highlights NVIDIA NVDA, Meta Platforms META and Tesla TSLA.

NVIDIA Earnings: 3 Giga-Takeaways

As always, NVIDIA earnings conference calls are almost as exciting as their technology conferences. We look for certain surprises going in, and we always get more than we bargained for. My focus here is on three big areas: GPU Demand, AI Factories, and the China Predicament.

Headed into yesterday's reveal, I was looking for two items in particular. First, would additional clarity around company write-downs for lost H20 GPU sales to China be as well-received as it had been when first announced, after which NVDA shares rallied 40% from mid-April? The answer was a resounding "yes" because CEO Jensen Huang and CFO Colette Kress had already explained last month that the hit would be substantial at $15 billion for the first half of the year. All they had to do was confirm that guidance and explain that Q1 took a larger hit of $7 billion (including inventory) and that Q2 would bear $8 billion. Jensen also spent considerable time on the call putting the China predicament into perspective because it is a potential addressable market for NVIDIA of $50 billion. More on that coming up.

Second, would we hear of a significant ramp in deliveries of the new flagship GB200 NVL72 rack systems that enterprises are eager to deploy -- since their average selling price is over $3 million? I had given some very specific commentary to TechCrunch+ senior writer Becca Szkutak on this topic last week... Why export restrictions aren't the only thing to pay attention to in Nvidia's earnings

Since word was that NVIDIA delivered 1,500 GB200 architectures in April (the last month of Q1 FY2026), we should expect no less than 5,000 units projected for Q2. And if we hear something over 10,000 units, investors should be very pleased and bullish. The resoundingly bullish answer here came from Colette in her opening remarks...

On average, major hyperscalers are each deploying nearly 1,000 NVL 72 racks -- or 72,000 Blackwell GPUs per week -- and are on track to further ramp output this quarter. Microsoft, for example, has already deployed tens of thousands of Blackwell GPUs and is expected to ramp to hundreds of thousands of GB200s with OpenAI as one of its key customers. Key learnings from the GB200 ramp will allow for a smooth transition to the next phase of our product road map, Blackwell Ultra.

1,000 GB200s per week and growing sounds like a very strong quarter indeed. With a solid 13 weeks in Q2, we could potentially see over 15,000 units shipped! That's very bullish. NVDA shares popped up to $144 in after-hours on that tidbit and look poised to open there this morning. I see new highs above $150 soon because the company's guide of only $45 billion this quarter is very conservative. By my math, they could do that in GB200 NVL72 alone: 15,000 x $3 million = $45 billion.

Behold the AI Factory

Last week, after Jensen's keynote at Computex in Taiwan, I felt compelled to write an article explaining why he is using the phrase "AI factories" so much this year... NVIDIA AI Factories: More Than Clever Marketing?

In Colette's opening remarks, she confirmed what I was seeing. Let me set the context first with this utterance: "Our customers' commitments are firm." She said this while talking about Datacenter revenue growth of 73% and "AI factory build outs... driving significant revenue."
The reason I think this is important is because we are talking about significant capital expenditure for these companies, given the $3 million sticker. So it's natural for investors and analysts to wonder how sustainable this demand trend is. The reality is that the "hyperscalers" including Cloud Service Providers (CSPs) Microsoft Azure, Amazon AWS, Google Cloud, and Oracle Cloud -- plus Meta Platforms, Tesla, and OpenAI -- have insatiable demand for NVIDIA GPUs and advanced rack systems like the GB200 NVL72. They are all in a massive build-out phase that will last for two to three years. Just think how few companies are in the position of NVIDIA to say "our customers' commitments are firm" for such capex over multi-quarter periods.

Even a single company, Tesla, can make the argument for AI Factories because they need accelerated, hyperscale compute to train and operate FSD (full self-driving) cars, Grok AI and other xAI initiatives, and Optimus humanoid robots. Jensen believes robotics will be a multi-trillion-dollar industry. He also sums up the ramping demand in these few words: "Reasoning models are driving a step function surge in inference demand." For NVIDIA, this insatiable demand is like having government contracts galore! And we haven't even talked about sovereign AI adoption yet.

Like the iPhone Cycle, But Better

The next upgrade of Blackwell is going to start shipping this quarter too. Again from Colette... "Sampling of GB300 systems began earlier this month at the major CSPs, and we expect production shipments to commence later this quarter." GB300 will leverage the same architecture, same physical footprint, and the same electrical and mechanical specifications as GB200. More importantly, the GB300 drop-in design will allow CSPs to seamlessly transition their systems and manufacturing used for GB200 while maintaining high yields in performance and memory.

This has been a key element of the NVIDIA product roadmap and annual cadence: everything works together and nothing becomes obsolete. This is why I say that "it's like the iPhone cycle and Apple ecosystem -- but better." NVIDIA can sell better, faster, and more expensive GPUs to their customers every year because everything is seamless and has what I call "multiplicative integration." In other words, systems get better because everything can be upgraded with constantly improving and expanding CUDA software libraries. I first noticed this with the transition from Grace Hopper (GH) systems to Grace Blackwell (GB) last year. Colette: "For example, we increased the inference performance of Hopper by four times over two years. This is the benefit of NVIDIA's programmable CUDA architecture and rich ecosystem."

AI Factories Aren't Just for ChatGPT

In my AI factories article, I left out one customer that Jensen often talks about: the nation-state. He believes that every country will learn they need to control their own data, and not just for security reasons. In the AI economy, where there is knowledge and intelligence, there is potential wealth. So every country should be seeking to harness their data -- about their land and resources, their people, their economy, and their potential -- and be able to "mine and model" it for maximum value. Robotics companies and enterprises deploying agentic reasoning models already know the power of simulation and synthetic data training. Ironically, it was a car company, BMW, that first deployed NVIDIA Omniverse to help them design new factory operations using "digital twins."
Now nation-states will begin experimenting with their data to solve their problems across land, resources, urban planning, agriculture, education, medicine, science, transportation, materials, and supply chains -- all to make the lives of their people better and lift more out of poverty. From Colette's remarks...

And more AI factory projects are starting across industries and geographies. NVIDIA's full stack architecture is underpinning AI factory deployments by industry leaders like AT&T, BYD, Capital One, Foxconn, MediaTek, and Telenor, as well as strategically vital sovereign clouds like those recently announced in Saudi Arabia, Taiwan, and the UAE. We have a line of sight to projects requiring tens of gigawatts of NVIDIA AI infrastructure in the not too distant future.

She also noted the pace and scale of AI factory deployments, with nearly 100 NVIDIA-powered AI factories taking off this quarter, a two-fold increase vs. last year, with the average number of GPUs powering each factory also doubling in the same period.

Jensen: Why We Need to Sell to China

About two-thirds of Jensen's opening remarks were about export controls and China AI development. "On export control, China is one of the world's largest AI markets and a springboard to global success. With half of the world's AI researchers based there, the platform that wins China is positioned to lead globally." With the current White House policy, the $50 billion China market is effectively closed to US industry.

I think Jensen's views are very important here, and it's a nuance you don't hear most investors and analysts discuss. Here's my summary: the US weakens its position with China by withholding technology because it forces them to develop all their own -- which they will -- instead of allowing them to become dependent on our infrastructure standards and leadership. Now here are four key paragraphs from Jensen in his own words...

China's AI moves on with or without US chips. It has the compute to train and deploy advanced models. The question is not whether China will have AI. It already does. The question is whether one of the world's largest AI markets will run on American platforms. Shielding Chinese chipmakers from US competition only strengthens them abroad and weakens America's position. Export restrictions have spurred China's innovation and scale.

The AI race is not just about chips. It's about which stack the world runs on. As that stack grows to include 6G and quantum, US global infrastructure leadership is at stake.

The US has based its policy on the assumption that China cannot make AI chips. That assumption was always questionable, and now it's clearly wrong. China has enormous manufacturing capability. In the end, the platform that wins the AI developers wins AI. Export controls should strengthen US platforms, not drive half of the world's AI talent to rivals.

US platforms must remain the preferred platform for open source AI. That means supporting collaboration with top developers globally, including in China. America wins when models like DeepSeek and Qwen run best on American infrastructure. When popular models are trained and optimized on US platforms, it drives usage, feedback, and continuous improvement, reinforcing American leadership across the stack.

(end of excerpts from Jensen's opening remarks)

With NVDA shares looking for a brighter open this morning above $140, I would look for any initial selling as a buying opportunity.
It's clear that the demand trends will push the company into many successive quarters of $50 billion in sales, starting with this one. As I projected at the end of last year, NVIDIA is on a path to $500 billion in annual revenues over the next five years. Now that you understand the "AI factory" infrastructure concept, this is not a stretch. It's actually only 38% compound annual growth for 5 years. That's very doable in the AI Economy, where NVIDIA is the premier provider of its picks and shovels (GPU innovations and networking hardware) and its brains and dreams (CUDA, Omniverse, and Cosmos). And thus my longer-term forecast for NVIDIA to become one of the few companies with $1 trillion in sales sometime in the 2030s. That will probably make it the first $5 trillion company by market cap sometime before that decade begins.

Media Contact: Zacks Investment Research, 800-767-3771 ext. 9339, support@

Past performance is no guarantee of future results. Inherent in any investment is the potential for loss. This material is being provided for informational purposes only and nothing herein constitutes investment, legal, accounting or tax advice, or a recommendation to buy, sell or hold a security. No recommendation or advice is being given as to whether any investment is suitable for a particular investor. It should not be assumed that any investments in securities, companies, sectors or markets identified and described were or will be profitable. All information is current as of the date hereof and is subject to change without notice. Any views or opinions expressed may not reflect those of the firm as a whole. Zacks Investment Research does not engage in investment banking, market making or asset management activities of any securities. These returns are from hypothetical portfolios consisting of stocks with Zacks Rank = 1 that were rebalanced monthly with zero transaction costs. These are not the returns of actual portfolios of stocks. The S&P 500 is an unmanaged index. Visit for information about the performance numbers displayed in this press release.
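For readers who want to check the article's back-of-envelope GB200 math, here is a minimal sketch. The weekly rack rate comes from the Kress quote above; the $3 million average selling price and the 15,000-unit Q2 figure are the author's assumptions, not reported NVIDIA numbers.

```python
# Back-of-envelope check on the GB200 NVL72 figures cited in the article (illustrative only;
# the ASP and unit estimate are the author's assumptions, not company guidance).

WEEKS_IN_Q2 = 13
RACKS_PER_WEEK = 1_000          # Kress: hyperscalers "each deploying nearly 1,000 NVL72 racks" per week
ASP_PER_RACK = 3_000_000        # author's assumed average selling price of over $3 million per rack
AUTHOR_UNIT_ESTIMATE = 15_000   # author's bullish Q2 shipment estimate

steady_state_units = WEEKS_IN_Q2 * RACKS_PER_WEEK        # 13,000 racks at a flat run rate
implied_revenue = AUTHOR_UNIT_ESTIMATE * ASP_PER_RACK    # $45 billion, matching the Q2 revenue guide

print(f"Flat run rate over Q2: {steady_state_units:,} racks")
print(f"Author's estimate: {AUTHOR_UNIT_ESTIMATE:,} racks -> ${implied_revenue / 1e9:.0f}B")
```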

Nvidia's most important number doesn't have a dollar sign in front of it: Token growth

Business Insider

a day ago


Nvidia's most important number doesn't have a dollar sign in front of it: Token growth

Nvidia's revenue climbed to $44.1 billion last quarter, yet one of the chip giant's most important metrics doesn't have a dollar sign in front of it. "OpenAI, Microsoft, and Google are seeing a step function leap in token generation," said Nvidia CFO Colette Kress on the company's Wednesday earnings call. "Microsoft processed over 100 trillion tokens in Q1, a fivefold increase on a year-over-year basis," she continued.

Throughout May, top tech executives — many of whom are also some of Nvidia's largest customers — have been boasting about their token growth. Though somewhat hard to track from outside the AI cloud or foundation model companies themselves, tokens are the base unit used to measure AI inputs and outputs. They can represent pixels, word segments, or audio. But no matter the content, all AI breaks down into tokens.

As AI tools mature, the number of tokens generated for AI outputs, or inference, is growing faster than many expected. When Google CEO Sundar Pichai said monthly tokens produced across Google's products had increased by a factor of 50 in the last year, the Google I/O audience gasped. "Explosive token growth is what really matters, in the longer term," wrote Morgan Stanley analysts ahead of Nvidia's latest earnings call, after which Nvidia's share price climbed to $134.

Why tokens matter

Forrester analyst Alvin Nguyen explained that it's not a perfect metric, since tokens can vary in size based on the form of content they represent. "It isn't a clear way to make apples-to-apples comparisons, but it is the closest to a standard that we have without needing more data and analytics," Nguyen said.

Nvidia CEO Jensen Huang sees the rise of the "token" in the conversations of top tech executives as a sign that AI tools are providing value. "Where companies are starting to talk about how many tokens they produced last quarter and how many tokens they produced last month. Very soon we'll be talking about how many tokens we produce every hour, just as every single factory does," Huang said at Computex in Taiwan last week.

With its dominant market share, Nvidia directly benefits from almost all token growth. Some analysts say the demand is growing faster than the current data center stock can handle. "Every hyperscaler has reported unanticipated strong token growth," the Morgan Stanley analysts wrote, adding that plentiful anecdotal evidence of more inference demand than existing infrastructure can support strengthened their conviction.

The problem with tokens is that there's no easy way to gauge their growth unless companies release numbers. But it's safe to say that when executives share any indication of demand for tokens, investors will be paying close attention for the foreseeable future.
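As a concrete illustration of the "base unit" described above, here is a minimal sketch that counts text tokens, assuming the open-source tiktoken package (an OpenAI-style BPE tokenizer). Other providers' tokenizers split content differently, which is exactly the apples-to-apples caveat Nguyen raises.

```python
# Counting text tokens with an OpenAI-style BPE tokenizer (illustrative; other vendors'
# tokenizers produce different counts, so tokens are not a perfectly comparable metric).
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # encoding used by several recent OpenAI models
text = "Nvidia's most important number doesn't have a dollar sign in front of it."
token_ids = enc.encode(text)
print(f"{len(token_ids)} tokens for {len(text)} characters")

# The growth figure quoted above, made explicit:
msft_q1_tokens = 100e12  # "over 100 trillion tokens in Q1"
print(f"Implied year-ago quarter for Microsoft: ~{msft_q1_tokens / 5:.1e} tokens (a fivefold increase)")
```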

Nvidia earnings top expectations despite United States export curbs

Qatar Tribune

2 days ago


Nvidia earnings top expectations despite United States export curbs

Agencies

Chip giant Nvidia on Wednesday reported earnings that surpassed market expectations, with a $4.5 billion hit from U.S. export controls being less than the company had feared. However, Nvidia CFO Colette Kress warned in an earnings call that export constraints are expected to cost the AI chip titan about $8 billion in the current quarter.

In April, Nvidia notified regulators that it expected a $5.5 billion hit in the recently ended quarter due to a new U.S. licensing requirement on the primary chip it can legally sell in China. U.S. officials had told Nvidia that it must obtain licenses to export its H20 chips to China because of concerns they may be used in supercomputers there, the company said in a Securities and Exchange Commission (SEC) filing. The new licensing rule applies to Nvidia graphics processing units, or GPUs, with bandwidth similar to that of the H20.

'China is one of the world's largest AI markets and a springboard to global success,' Nvidia chief executive Jensen Huang said in an earnings call. 'The platform that wins China is positioned to lead globally; however, the $50 billion China market is effectively closed to us.' Nvidia cannot dial back the capabilities of its H20 chips any further to comply with U.S. export constraints, and has been forced to write off billions of dollars of inventory that can't be sold or repurposed, according to Huang. 'The U.S. has based its policy on the assumption that China cannot make AI chips,' Huang said. 'That assumption was always questionable and now it's clearly wrong.' China's AI is moving on without Nvidia technology, while that country's chip-makers innovate products and ramp up operations, according to Huang. 'The question is not whether China will have AI; it already does,' he said. 'The question is whether one of the world's largest markets will run on American platforms.'

The new requirements resulted in Nvidia incurring a charge of $4.5 billion in the quarter, associated with H20 excess inventory and purchase obligations, 'as demand for H20 diminished,' the chipmaker said in an earnings report. U.S. export constraints stopped Nvidia from bringing in an additional $2.5 billion worth of H20 revenue in the quarter, according to the company. Nvidia said it made a profit of $18.8 billion on revenue of $44.1 billion, causing shares to rise more than 4% in after-market trades.

Hot demand

Huang said demand for the company's AI-powering technology remains strong, and a new Blackwell NVL72 AI supercomputer referred to as a 'thinking machine' is in full-scale production. 'Countries around the world are recognizing AI as essential infrastructure – just like electricity and the internet – and Nvidia stands at the center of this profound transformation,' Huang said. Nvidia high-end GPUs are in hot demand from tech giants building data centers to power artificial intelligence. The company said its data center division revenue in the quarter was $39.1 billion, up 10% from the previous quarter, though the market had expected more from the unit.

'Nvidia beat expectations again, but in a market where maintaining this dominance is becoming more challenging,' said Emarketer analyst Jacob Bourne. 'The China export restrictions underscore the immediate pressure from geopolitical headwinds, but Nvidia also faces mounting competitive pressure as rivals like AMD gain ground,' he added. Revenue in Nvidia's gaming chip business hit a record high of $3.8 billion, leaping 48% and eclipsing forecasts.
The AI boom has propelled Nvidia's stock price, which has regained much of the ground lost in a steep sell-off in January triggered by the sudden success of DeepSeek. China's DeepSeek had unveiled its R1 chatbot, which it claims can match the capabilities of top U.S. AI products at a fraction of the cost. 'The broader concern is that trade tensions and potential tariff impacts on data center expansion could create headwinds for AI chip demand in upcoming quarters,' analyst Bourne said of Nvidia.
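Putting the article's H20 figures side by side, here is a minimal sketch; every input is a dollar amount quoted above, and the first-half total is simply their sum (consistent with the roughly $15 billion first-half figure cited elsewhere on this page).

```python
# Tallying the H20-related impacts reported in this article (all figures as stated above).
q1_inventory_charge = 4.5e9   # charge for excess H20 inventory and purchase obligations
q1_lost_revenue = 2.5e9       # additional H20 revenue Nvidia could not ship in the quarter
q2_expected_hit = 8.0e9       # Kress's estimate for the current quarter
april_estimate = 5.5e9        # the hit Nvidia flagged to regulators in April

q1_total = q1_inventory_charge + q1_lost_revenue
print(f"Q1 total impact: ${q1_total / 1e9:.1f}B (vs. ${april_estimate / 1e9:.1f}B flagged in April)")
print(f"Implied first-half impact: ${(q1_total + q2_expected_hit) / 1e9:.1f}B")
```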

ETFs to Buy After NVIDIA's Q1 Earnings Miss, Record Revenues

Yahoo

2 days ago


ETFs to Buy After NVIDIA's Q1 Earnings Miss, Record Revenues

NVIDIA NVDA reported mixed first-quarter fiscal 2026 results. Though the AI darling lagged earnings estimates, it reported record-breaking revenues, which topped estimates. NVIDIA shares jumped as much as 6% in after-hours trading. Investors seeking to tap the company's growth could invest in ETFs with the largest allocation to the AI chipmaker. Strive U.S. Semiconductor ETF SHOC, VanEck Vectors Semiconductor ETF SMH, VanEck Fabless Semiconductor ETF SMHX, YieldMax Target 12 Semiconductor Option Income ETF SOXY and Columbia Select Technology ETF SEMI could be compelling options.

The company's earnings per share were 81 cents for the first quarter, missing the Zacks Consensus Estimate by 4 cents and up from 61 cents reported in the year-ago quarter. This marked an end to nine straight quarters of earnings beats. Revenues surged 69% year over year to a record $44.1 billion and beat the consensus mark of $42.70 billion. The impressive performance was largely driven by a booming data center business. The blockbuster results were driven by incredible demand for NVIDIA's latest AI chips. Data Center revenues, which account for much of NVIDIA's revenues, jumped 73% year over year to $39.1 billion (read: NVIDIA Reclaims $3 Trillion: ETFs to Bet On).

The gaming division also performed strongly, with revenues climbing 42% year over year to $3.8 billion. This growth was bolstered by the launch of the Nintendo Switch 2, which features NVIDIA's chips and AI-powered DLSS technology supporting up to 4K gaming. NVIDIA's graphics processing capabilities, historically focused on gaming, are now increasingly used in AI applications, highlighting the broadening utility of its technology. The automotive and robotics segment saw a 72% revenue increase, reaching $567 million. Growth in this area was driven by rising demand for self-driving car chips and robotics software, including a significant advance in humanoid robotics. The company introduced Isaac GR00T N1 — the world's first open humanoid robot foundation model — and outlined plans to deepen its involvement in robotics.

Demand for NVIDIA's artificial intelligence (AI) chips, especially for large cloud providers and AI supercomputing, continues to surge. NVIDIA is building factories in the United States and working with its partners to produce AI supercomputers. NVIDIA CEO Jensen Huang said, "Countries around the world are recognizing AI as essential infrastructure – just like electricity and the internet – and NVIDIA stands at the center of this profound transformation." Its chief financial officer, Colette Kress, said that Microsoft has 'deployed tens of thousands of Blackwell GPUs and is expected to ramp to hundreds of thousands' of the company's GB200 product, due largely to its partnership with OpenAI.

NVIDIA is also accelerating its global expansion. It recently announced plans to build AI factories in the United States and Saudi Arabia and launched the Stargate UAE AI infrastructure cluster in Abu Dhabi. Furthermore, NVIDIA has expanded collaborations with major cloud providers, including Oracle, Google, and Microsoft. Its Blackwell-based cloud instances are now available on AWS, Google Cloud, Microsoft Azure and Oracle Cloud Infrastructure (read: Stocks & ETFs to Benefit From Trump's Stargate Project).

Looking ahead to the second quarter of fiscal 2026, the graphics chipmaker expects revenues of $45 billion, plus or minus 2%. The Zacks Consensus Estimate is pegged at $45.1 billion.
This guidance includes an estimated $8 billion hit from H20 export restrictions, largely impacting sales to China. The AI darling has lost billions in revenues from Trump's ban on its chip exports to China. Despite this, NVIDIA remains confident in the ongoing global demand for its AI infrastructure.

Strive U.S. Semiconductor ETF (SHOC)

Strive U.S. Semiconductor ETF seeks broad market exposure to the U.S. semiconductor sector. It follows the Bloomberg US Listed Semiconductors Select Total Return Index and holds 32 stocks in its basket, with NVIDIA accounting for the top firm at 22.9%. Strive U.S. Semiconductor ETF has an AUM of $81.8 million and charges 40 bps in annual fees. It trades in a volume of 10,000 shares per day on average and sports a Zacks ETF Rank #1 (Strong Buy).

VanEck Vectors Semiconductor ETF (SMH)

VanEck Vectors Semiconductor ETF offers exposure to companies involved in semiconductor production and equipment. It follows the MVIS US Listed Semiconductor 25 Index, which tracks the most liquid companies in the industry based on market capitalization and trading volume. VanEck Vectors Semiconductor ETF holds 26 stocks in its basket, with NVIDIA occupying the top position at 21.1%. It has managed assets worth $22 billion and charges 35 bps in annual fees and expenses. VanEck Vectors Semiconductor ETF trades in an average daily volume of 6 million shares and flaunts a Zacks ETF Rank #.

VanEck Fabless Semiconductor ETF (SMHX)

VanEck Fabless Semiconductor ETF offers exposure to companies involved in semiconductor production that are classified as fabless. It follows the MarketVector US Listed Fabless Semiconductor Index and holds 23 stocks in its basket. NVIDIA takes the top spot at a 20.6% share. SMHX, which debuted in the space in late August, has accumulated $47.8 million in its asset base. VanEck Fabless Semiconductor ETF charges 35 bps in annual fees and trades in a volume of 40,000 shares. It flaunts a Zacks ETF Rank #.

YieldMax Target 12 Semiconductor Option Income ETF (SOXY)

YieldMax Target 12 Semiconductor Option Income ETF is an actively managed ETF that seeks a target annual income level of 12% and capital appreciation via direct investments in a select portfolio of semiconductor companies. NVIDIA occupies the top position in the portfolio with a 19.5% share. YieldMax Target 12 Semiconductor Option Income ETF debuted in December and has gathered $6 million in its asset base. It charges 99 bps in annual fees and trades in an average daily volume of 3,000 shares.

Columbia Select Technology ETF (SEMI)

Columbia Select Technology ETF is an actively managed ETF that focuses on semiconductor and semiconductor-related businesses that may be poised to benefit from technology innovation and disruption. It follows the S&P Global 1200 Information Technology Index and holds 35 stocks in its basket, with NVIDIA occupying the second position at 17.2%. Columbia Select Technology ETF has amassed $37.3 million in its asset base and trades in an average daily volume of 4,000 shares. It charges 75 bps in fees per year.
This article originally published on Zacks Investment Research.
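To make the guidance arithmetic above explicit, here is a minimal sketch using only the figures quoted in this article; the consensus EPS is implied from the reported 4-cent miss rather than stated directly.

```python
# Quick arithmetic on the quarter and the Q2 guide, using only figures quoted in the article.
eps_actual = 0.81        # reported first-quarter EPS
eps_miss = 0.04          # missed the Zacks Consensus Estimate by 4 cents
eps_year_ago = 0.61      # year-ago quarter EPS

q2_guide_mid = 45.0e9    # guidance midpoint: $45 billion, plus or minus 2%
q2_guide_low, q2_guide_high = q2_guide_mid * 0.98, q2_guide_mid * 1.02
consensus_revenue = 45.1e9

print(f"Implied EPS consensus: ${eps_actual + eps_miss:.2f}; YoY EPS growth: {eps_actual / eps_year_ago - 1:.0%}")
print(f"Q2 guidance range: ${q2_guide_low / 1e9:.1f}B to ${q2_guide_high / 1e9:.1f}B "
      f"(consensus of ${consensus_revenue / 1e9:.1f}B sits inside the range)")
```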

NVIDIA Earnings: 3 Giga Takeaways

Globe and Mail

2 days ago


NVIDIA Earnings: 3 Giga Takeaways

As always, NVIDIA (NVDA) earnings conference calls are almost as exciting as their technology conferences. We look for certain surprises going in, and we always get more than we bargained for. My focus here is on three big areas: GPU Demand, AI Factories, and the China Predicament.

Headed into yesterday's reveal, I was looking for two items in particular. First, would additional clarity around company write-downs for lost H20 GPU sales to China be as well-received as it had been when first announced, after which NVDA shares rallied 40% from mid-April? The answer was a resounding "yes" because CEO Jensen Huang and CFO Colette Kress had already explained last month that the hit would be substantial at $15 billion for the first half of the year. All they had to do was confirm that guidance and explain that Q1 took a larger hit of $7 billion (including inventory) and that Q2 would bear $8 billion. Jensen also spent considerable time on the call putting the China predicament into perspective because it is a potential addressable market for NVIDIA of $50 billion. More on that coming up.

Second, would we hear of a significant ramp in deliveries of the new flagship GB200 NVL72 rack systems that enterprises are eager to deploy -- since their average selling price is over $3 million? I had given some very specific commentary to TechCrunch+ senior writer Becca Szkutak on this topic last week... Why export restrictions aren't the only thing to pay attention to in Nvidia's earnings

Since word was that NVIDIA delivered 1,500 GB200 architectures in April (the last month of Q1 FY2026), we should expect no less than 5,000 units projected for Q2. And if we hear something over 10,000 units, investors should be very pleased and bullish. The resoundingly bullish answer here came from Colette in her opening remarks...

On average, major hyperscalers are each deploying nearly 1,000 NVL 72 racks -- or 72,000 Blackwell GPUs per week -- and are on track to further ramp output this quarter. Microsoft, for example, has already deployed tens of thousands of Blackwell GPUs and is expected to ramp to hundreds of thousands of GB200s with OpenAI as one of its key customers. Key learnings from the GB200 ramp will allow for a smooth transition to the next phase of our product road map, Blackwell Ultra.

1,000 GB200s per week and growing sounds like a very strong quarter indeed. With a solid 13 weeks in Q2, we could potentially see over 15,000 units shipped! That's very bullish. NVDA shares popped up to $144 in after-hours on that tidbit and look poised to open there this morning. I see new highs above $150 soon because the company's guide of only $45 billion this quarter is very conservative. By my math, they could do that in GB200 NVL72 alone: 15,000 x $3 million = $45 billion.

Behold the AI Factory

Last week, after Jensen's keynote at Computex in Taiwan, I felt compelled to write an article explaining why he is using the phrase "AI factories" so much this year... NVIDIA AI Factories: More Than Clever Marketing?

In Colette's opening remarks, she confirmed what I was seeing. Let me set the context first with this utterance: "Our customers' commitments are firm." She said this while talking about Datacenter revenue growth of 73% and "AI factory build outs... driving significant revenue." The reason I think this is important is because we are talking about significant capital expenditure for these companies, given the $3 million sticker. So it's natural for investors and analysts to wonder how sustainable this demand trend is.
The reality is that the "hyperscalers" including Cloud Service Providers (CSPs) Microsoft Azure, Amazon AWS, Google Cloud, and Oracle Cloud -- plus Meta Platforms (META), Tesla, and OpenAI -- have insatiable demand for NVIDIA GPUs and advanced rack systems like the GB200 NVL72. They are all in a massive build-out phase that will last for two to three years. Just think how few companies are in the position of NVIDIA to say "our customers' commitments are firm" for such capex over multi-quarter periods.

Even a single company, Tesla (TSLA), can make the argument for AI Factories because they need accelerated, hyperscale compute to train and operate FSD (full self-driving) cars, Grok AI and other xAI initiatives, and Optimus humanoid robots. Jensen believes robotics will be a multi-trillion-dollar industry. He also sums up the ramping demand in these few words: "Reasoning models are driving a step function surge in inference demand." For NVIDIA, this insatiable demand is like having government contracts galore! And we haven't even talked about sovereign AI adoption yet.

Like the iPhone Cycle, But Better

The next upgrade of Blackwell is going to start shipping this quarter too. Again from Colette... "Sampling of GB300 systems began earlier this month at the major CSPs, and we expect production shipments to commence later this quarter." GB300 will leverage the same architecture, same physical footprint, and the same electrical and mechanical specifications as GB200. More importantly, the GB300 drop-in design will allow CSPs to seamlessly transition their systems and manufacturing used for GB200 while maintaining high yields in performance and memory.

This has been a key element of the NVIDIA product roadmap and annual cadence: everything works together and nothing becomes obsolete. This is why I say that "it's like the iPhone cycle and Apple ecosystem -- but better." NVIDIA can sell better, faster, and more expensive GPUs to their customers every year because everything is seamless and has what I call "multiplicative integration." In other words, systems get better because everything can be upgraded with constantly improving and expanding CUDA software libraries. I first noticed this with the transition from Grace Hopper (GH) systems to Grace Blackwell (GB) last year. Colette: "For example, we increased the inference performance of Hopper by four times over two years. This is the benefit of NVIDIA's programmable CUDA architecture and rich ecosystem."

AI Factories Aren't Just for ChatGPT

In my AI factories article, I left out one customer that Jensen often talks about: the nation-state. He believes that every country will learn they need to control their own data, and not just for security reasons. In the AI economy, where there is knowledge and intelligence, there is potential wealth. So every country should be seeking to harness their data -- about their land and resources, their people, their economy, and their potential -- and be able to "mine and model" it for maximum value. Robotics companies and enterprises deploying agentic reasoning models already know the power of simulation and synthetic data training. Ironically, it was a car company, BMW, that first deployed NVIDIA Omniverse to help them design new factory operations using "digital twins."
Now nation-states will begin experimenting with their data to solve their problems across land, resources, urban planning, agriculture, education, medicine, science, transportation, materials, and supply chains -- all to make the lives of their people better and lift more out of poverty. From Colette's remarks...

And more AI factory projects are starting across industries and geographies. NVIDIA's full stack architecture is underpinning AI factory deployments by industry leaders like AT&T, BYD, Capital One, Foxconn, MediaTek, and Telenor, as well as strategically vital sovereign clouds like those recently announced in Saudi Arabia, Taiwan, and the UAE. We have a line of sight to projects requiring tens of gigawatts of NVIDIA AI infrastructure in the not too distant future.

She also noted the pace and scale of AI factory deployments, with nearly 100 NVIDIA-powered AI factories taking off this quarter, a two-fold increase vs. last year, with the average number of GPUs powering each factory also doubling in the same period.

Jensen: Why We Need to Sell to China

About two-thirds of Jensen's opening remarks were about export controls and China AI development. "On export control, China is one of the world's largest AI markets and a springboard to global success. With half of the world's AI researchers based there, the platform that wins China is positioned to lead globally." With the current White House policy, the $50 billion China market is effectively closed to US industry.

I think Jensen's views are very important here, and it's a nuance you don't hear most investors and analysts discuss. Here's my summary: the US weakens its position with China by withholding technology because it forces them to develop all their own -- which they will -- instead of allowing them to become dependent on our infrastructure standards and leadership. Now here are four key paragraphs from Jensen in his own words...

China's AI moves on with or without US chips. It has the compute to train and deploy advanced models. The question is not whether China will have AI. It already does. The question is whether one of the world's largest AI markets will run on American platforms. Shielding Chinese chipmakers from US competition only strengthens them abroad and weakens America's position. Export restrictions have spurred China's innovation and scale.

The AI race is not just about chips. It's about which stack the world runs on. As that stack grows to include 6G and quantum, US global infrastructure leadership is at stake.

The US has based its policy on the assumption that China cannot make AI chips. That assumption was always questionable, and now it's clearly wrong. China has enormous manufacturing capability. In the end, the platform that wins the AI developers wins AI. Export controls should strengthen US platforms, not drive half of the world's AI talent to rivals.

US platforms must remain the preferred platform for open source AI. That means supporting collaboration with top developers globally, including in China. America wins when models like DeepSeek and Qwen run best on American infrastructure. When popular models are trained and optimized on US platforms, it drives usage, feedback, and continuous improvement, reinforcing American leadership across the stack.

(end of excerpts from Jensen's opening remarks)

With NVDA shares looking for a brighter open this morning above $140, I would look for any initial selling as a buying opportunity.
It's clear that the demand trends will push the company into many successive quarters of $50 billion in sales, starting with this one. As I projected at the end of last year, NVIDIA is on a path to $500 billion in annual revenues over the next five years. Now that you understand the "AI factory" infrastructure concept, this is not a stretch. It's actually only 38% compound annual growth for 5 years. That's very doable in the AI Economy, where NVIDIA is the premier provider of its picks and shovels (GPU innovations and networking hardware) and its brains and dreams (CUDA, Omniverse, and Cosmos). And thus my longer-term forecast for NVIDIA to become one of the few companies with $1 trillion in sales sometime in the 2030s. That will probably make it the first $5 trillion company by market cap sometime before that decade begins.
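To see where the author's "38% compound annual growth" figure comes from, here is a minimal sketch. The $500 billion five-year target is the article's; the starting revenue bases are illustrative assumptions, and a base of roughly $100 billion is the one that reproduces the 38% figure.

```python
# Making the compound-growth claim explicit. The target is the article's; the starting
# bases are hypothetical choices for illustration, not figures stated in the piece.

def required_cagr(start: float, target: float, years: int) -> float:
    """Constant annual growth rate needed to go from `start` to `target` in `years` years."""
    return (target / start) ** (1 / years) - 1

target_revenue = 500e9
for base in (100e9, 130e9, 4 * 44.1e9):  # assumed starting annual revenue bases, in dollars
    print(f"From ${base / 1e9:.0f}B: {required_cagr(base, target_revenue, 5):.0%} CAGR over 5 years")
```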
