
Latest news with #ChetanKapoor

Nvidia Stock (NVDA) Takes Battle to DeepSeek as AI Chips Go Full Speed Ahead

Business Insider

3 days ago



Semiconductor giant Nvidia (NVDA) is battling back against its Chinese rivals by becoming more efficient at training AI models, according to new data.

Chips Ahead

MLCommons, a nonprofit group, has released new data showing that Nvidia's newest chips have made gains in AI training, with the number of chips required to do the job dropping dramatically. The data shows how chips fared at training AI systems such as Llama 3.1 405B, an open-source AI model released by Meta Platforms (META) with a large enough number of what are known as 'parameters' to indicate how the chips would perform at some of the most complex training tasks in the world. The data showed that Nvidia's new Blackwell chips are, on a per-chip basis, more than twice as fast as the previous generation of Hopper chips. In the fastest results for Nvidia's new chips, 2,496 Blackwell chips completed the training test in 27 minutes. It took more than three times that many of Nvidia's previous-generation chips to achieve a faster time.

China Challenge

This puts Nvidia in a prime position to challenge rivals such as China's DeepSeek, which claims to use far fewer chips than its U.S. rivals. Chetan Kapoor, chief product officer for CoreWeave (CRWV), which worked with Nvidia to produce some of the results, said there has been a trend in the AI industry toward stringing together smaller groups of chips into subsystems for separate AI training tasks, rather than creating homogeneous groups of 100,000 chips or more. 'Using a methodology like that, they're able to continue to accelerate or reduce the time to train some of these crazy, multi-trillion parameter model sizes,' Kapoor said.

Nvidia chips make gains in training largest AI systems, new data shows

Time of India

3 days ago



Nvidia's newest chips have made gains in training large artificial intelligence systems, new data released on Wednesday showed, with the number of chips required to train large language models dropping dramatically. MLCommons, a nonprofit group that publishes benchmark performance results for AI systems, released new data about chips from Nvidia and Advanced Micro Devices, among others, for training, in which AI systems are fed large amounts of data to learn from. While much of the stock market's attention has shifted to the larger market for AI inference, in which AI systems handle questions from users, the number of chips needed to train the systems is still a key competitive concern. China's DeepSeek claims to create a competitive chatbot using far fewer chips than U.S. rivals. The results were the first that MLCommons has released about how chips fared at training AI systems such as Llama 3.1 405B, an open-source AI model released by Meta Platforms that has a large enough number of what are known as "parameters" to give an indication of how the chips would perform at some of the most complex training tasks in the world, which can involve trillions of parameters. Nvidia and its partners were the only entrants that submitted data about training that large model, and the data showed that Nvidia's new Blackwell chips are, on a per-chip basis, more than twice as fast as the previous generation of Hopper chips. In the fastest results for Nvidia's new chips, 2,496 Blackwell chips completed the training test in 27 minutes. It took more than three times that many of Nvidia's previous generation of chips to get a faster time, according to the data.
In a press conference, Chetan Kapoor, chief product officer for CoreWeave, which collaborated with Nvidia to produce some of the results, said there has been a trend in the AI industry toward stringing together smaller groups of chips into subsystems for separate AI training tasks, rather than creating homogeneous groups of 100,000 chips or more. "Using a methodology like that, they're able to continue to accelerate or reduce the time to train some of these crazy, multi-trillion parameter model sizes," Kapoor said.
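The article's "more than twice as fast per chip" claim can be sanity-checked with some quick arithmetic. The Blackwell figures (2,496 chips, 27 minutes) come from the article; the Hopper chip count and time below are illustrative assumptions consistent with "more than three times as many chips" achieving "a faster time", not figures reported by MLCommons:

```python
# Rough sanity check of the per-chip speedup claim.
# Blackwell figures are from the article; the Hopper figures are
# ASSUMED for illustration only (the article gives neither exactly).
blackwell_chips, blackwell_minutes = 2496, 27.0
hopper_chips, hopper_minutes = 8192, 25.0  # assumed: >3x chips, slightly faster time

# Both runs complete the same benchmark workload, so per-chip throughput
# is proportional to 1 / (chip count * wall-clock time). The ratio of
# Blackwell to Hopper per-chip throughput is therefore:
per_chip_speedup = (hopper_chips * hopper_minutes) / (blackwell_chips * blackwell_minutes)
print(f"Blackwell per-chip speedup vs. Hopper: ~{per_chip_speedup:.1f}x")
```

Under these assumed numbers the ratio comes out above 2, in line with the article's "more than twice as fast" characterization; the exact figure depends on the real Hopper run size and time.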



