
Nvidia chips make gains in training largest AI systems, new data shows
Nvidia's newest chips have made gains in training large artificial intelligence systems, new data released on Wednesday showed, with the number of chips required to train large language models dropping dramatically.
MLCommons, a nonprofit group that publishes benchmark performance results for AI systems, released new data about chips from Nvidia and Advanced Micro Devices, among others, for training, the phase in which AI systems are fed large amounts of data to learn from. While much of the stock market's attention has shifted to the larger market for AI inference, in which AI systems handle questions from users, the number of chips needed to train the systems is still a key competitive concern. China's DeepSeek claims to have created a competitive chatbot using far fewer chips than its U.S. rivals.
The results were the first MLCommons has released on how chips fare at training AI systems such as Llama 3.1 405B, an open-source AI model released by Meta Platforms. That model has a large enough number of what are known as "parameters" to indicate how the chips would perform at some of the most complex training tasks in the world, which can involve trillions of parameters.
Nvidia and its partners were the only entrants that submitted data about training that large model, and the data showed that Nvidia's new Blackwell chips are, on a per-chip basis, more than twice as fast as the previous generation of Hopper chips.
In the fastest results for Nvidia's new chips, 2,496 Blackwell chips completed the training test in 27 minutes. It took more than three times that many of Nvidia's previous generation of chips to get a faster time, according to the data.
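As a rough sanity check on the "more than twice as fast per chip" claim, the throughput ratio implied by the reported figures can be sketched. Only the Blackwell numbers (2,496 chips, 27 minutes) come from the article; the Hopper run below is a hypothetical stand-in for "more than three times that many chips" finishing in "a faster time":

```python
# Rough per-chip speed comparison implied by the MLCommons figures.
# Blackwell numbers are from the article; the Hopper run is hypothetical
# (three times as many chips, finishing slightly faster at 25 minutes).

def per_chip_rate(chips: int, minutes: float) -> float:
    """Training throughput per chip, in relative units (runs per chip-minute)."""
    return 1.0 / (chips * minutes)

blackwell = per_chip_rate(chips=2496, minutes=27)   # reported result
hopper = per_chip_rate(chips=3 * 2496, minutes=25)  # hypothetical Hopper run

print(f"Blackwell per-chip speedup: {blackwell / hopper:.2f}x")
# prints: Blackwell per-chip speedup: 2.78x
```

Under these assumed numbers the per-chip advantage works out to roughly 2.8x, consistent with the article's "more than twice as fast" characterization.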
In a press conference, Chetan Kapoor, chief product officer for CoreWeave, which collaborated with Nvidia to produce some of the results, said there has been a trend in the AI industry toward stringing together smaller groups of chips into subsystems for separate AI training tasks, rather than creating homogeneous groups of 100,000 chips or more.
"Using a methodology like that, they're able to continue to accelerate or reduce the time to train some of these crazy, multi-trillion parameter model sizes," Kapoor said.
Related Articles


Mint, an hour ago
Broadcom, the $1 trillion stock you shouldn't ignore, makes AI investment case
Broadcom shares traded lower Friday following a mixed set of fiscal second-quarter earnings that should still underscore the semiconductor and software company's compelling position in the artificial-intelligence investment race.

Broadcom, a tech group valued at just $125 billion five years ago, is now one of three key AI suppliers, with a market cap of more than $1.2 trillion. It trails Nvidia, whose market cap now stands at $3.42 trillion, but is almost seven times larger than Advanced Micro Devices. That's in large part related to the company's standout offering of application-specific integrated circuits, also known as ASIC chips. The specialized semiconductors help hyperscalers like Apple, Alphabet's Google, and Meta Platforms build out their massive AI models. China-based ByteDance, the owner of short-form video app TikTok, also is a Broadcom client.

ASIC chips allow for the movement of information through these model networks, which Broadcom also helps construct, by easing traffic congestion and boosting speed and reliability. That allowed Broadcom to grow its AI-related revenue by around 46% from last year, with a second-quarter tally of $4.4 billion. The AI growth rate, in fact, was more than double the 20% advance for overall revenue, which came in just ahead of Wall Street forecasts of $15 billion.

Broadcom CEO Hock Tan sees more gains ahead, especially as hyperscalers transition from building their massive AI networks to training them to make decisions and perform tasks for their end customers. By 2027, Broadcom has said, the serviceable addressable market, or SAM, for AI processors and networking chips likely will rise to between $60 billion and $90 billion. "Our partners are still unwavering in their plan to invest despite the uncertain economic environment," Tan told analysts on a conference call late Thursday. "In fact, what we've seen recently is that they are doubling down on inference to monetize their platforms."

Tan sees current-quarter revenue in the region of $15.8 billion, a 21% advance from last year, a tally that was only modestly firmer than Wall Street forecasts. AI revenue is predicted to rise 60% to $5.1 billion. "The rate of growth we are seeing in 2025 will sustain into 2026, based on improved visibility and the fact that we're seeing inference coming in on top of the demand for training as the clusters get built up again," Tan said.

Shares in Broadcom, which have soared 85% over the past 12 months and have risen 12% on the year, compared with gains of 16% and 4.2%, respectively, for larger rival Nvidia, were down 3% in premarket trading Friday at $252.16. That could suggest some profit-taking from the second-quarter update, which only just matched Wall Street's lofty growth forecasts.

Analysts, however, have started to lift their long-term price targets on the back of Broadcom's compelling position in the broader AI investment narrative. Morgan Stanley analyst Joseph Moore lifted his price target by $10, taking it to $270 a share, while BofA Securities analyst Vivek Arya lifted his by $60 to $300. Benchmark analyst Cody Acree raised his Broadcom price target by $60 to $315 a share. "With the company's expected continued growth in its AI businesses, we believe Broadcom is extremely well-positioned to capitalize on what we expect to be improving industry fundamentals over both the short and long-term, with the company uniquely situated to reap the benefits of the macro industry trend toward growing usage of custom XPU silicon to more efficiently drive customer-specific workloads, with accelerating growth in inferencing applications," Acree wrote in a research note.


Time of India, 2 hours ago
Andhra Pradesh signs MoU with NVIDIA to establish AI university in Amaravati
Vijayawada: The Andhra Pradesh govt on Friday signed an MoU with the world's largest AI chip maker, NVIDIA, for the establishment of an Artificial Intelligence (AI) University in Amaravati. The MoU aims at infrastructure development, startup acceleration, skilling, and encouraging research.

As part of the agreement, NVIDIA will provide skill training to 10,000 students across the state over the next two years. The chip maker will also offer curriculum guidance and technical training resources to support AI education and capacity building in engineering colleges across the state. In addition to workforce development, NVIDIA will support the identification and establishment of AI research centres that will address pressing technological challenges and develop transformative solutions across sectors.

"This partnership with NVIDIA marks a decisive step in our vision to position Andhra Pradesh as a national leader in artificial intelligence. By equipping 10,000 students with cutting-edge AI skills and supporting our startup ecosystem, we are laying the foundation for a future-ready economy driven by innovation, research, and entrepreneurship," said Nara Lokesh, minister for ITE&C and HRD.

The collaboration will further extend to the development of the cutting-edge computational infrastructure required for the proposed AI University. NVIDIA will assist in identifying the necessary tools, software platforms, and hardware capabilities to ensure the university is equipped to deliver world-class education and research outcomes.

"We are proud to collaborate with the govt of Andhra Pradesh in building a strong and inclusive AI ecosystem. This initiative reflects our commitment to democratising access to AI education, accelerating research, and enabling startups to innovate at scale," said Vishal Dhupar, Managing Director, Asia South, NVIDIA.

Another key aspect of the MoU is the sharing of experience and best practices in establishing next-generation AI Factories. NVIDIA will provide insights from its global expertise in operationalising AI Factories that serve as hubs for innovation, industry collaboration, and talent incubation aimed at the democratisation of AI. NVIDIA will also facilitate up to 500 AI startups from the state in its Inception program during the MoU period, subject to fulfilling the eligibility criteria. The initiative is expected to give a significant boost to the startup ecosystem.


Economic Times, 5 hours ago
Nvidia sounds the alarm: Chinese AI talent defecting to Huawei as U.S. chip curbs push them out the door
Nvidia is sounding the alarm about the unintended impact of US export restrictions on shipping chips to China. The company's senior VP of research and chief scientist, Bill Dally, said the chipmaker is now seeing a growing number of former Nvidia AI researchers join Huawei, a move prompted primarily by the tightening export controls, as per a PC Gamer report.

According to Dally's calculation, the number of AI researchers working in China has grown from a third of the world's total in 2019 to nearly half at present, reported PC Gamer, which cited a translation of a Taiwan Economic Daily report. The AI chipmaker's rationale is that without US restrictions, Huawei wouldn't be forced to focus so heavily on domestic AI solutions, but now it must do so to keep up, according to the PC Gamer report.

This is not the first time Nvidia has pointed out that US export restrictions on China are harming the AI industry in America. At Computex last month, Nvidia CEO Jensen Huang said, "AI researchers are still doing AI research in China and if they don't have enough Nvidia, they will use their own [chips]." He also spoke about Huawei specifically, saying the company has become "quite formidable", reported PC Gamer.

It is not just US national interest that has urged Nvidia to highlight the possible negatives of export controls; the restrictions have cost, and will continue to cost, the chipmaker large sums of money. Nvidia revealed that after losing billions of dollars in Q1 to the restrictions on its H20 chips for China, it expects another $8 billion in losses for the same reason in Q2, reported PC Gamer. According to the report, Huawei's latest Ascend 910 and 920 chips, produced with the help of China's SMIC (Semiconductor Manufacturing International Corporation), could become a better option for Chinese AI companies than trying to get their hands on Nvidia chips.
Why is Nvidia concerned about its AI researchers joining Huawei?
Because it signals that export restrictions might be pushing top talent and innovation into China, instead of slowing its progress.

How much money has Nvidia lost from these restrictions?
Nvidia says it lost billions in Q1 and expects another $8 billion in losses in Q2 due to blocked chip sales to China.