
Latest news with #NvidiaH100GPUs

Government gets bids offering 18,000 GPUs in round-2 of IndiaAI Mission tender

Time of India

13-05-2025



The Centre has received bids proposing to offer 18,000 graphics processing units (GPUs) in the second round of the IndiaAI GPU tender, and expects 15,000 GPUs to finally be offered after technical qualification and commercial bid-opening, IndiaAI CEO Abhishek Singh said. The first round of the GPU tender has already concluded; in that round, New Delhi is offering 15,000 GPUs at subsidised rates to the country's startups, academics and research organisations.

Sunil Gupta, cofounder and chief executive of Yotta Data Services, told ET that while the company had already offered 8,192 Nvidia H100 GPUs and 1,024 Nvidia L40S GPUs in the first round of empanelment, it has now proposed Nvidia Blackwell B200s in the second round. IndiaAI Mission has shortlisted seven companies, including partners of Amazon Web Services (AWS), Oracle and Google Cloud, for technical evaluation under the second round of the GPU tender, ET had reported on May 7. The Mission has invited the firms, namely Netmagic IT Services (now known as NTT Global Data Centers & Cloud Infrastructure India, or NTT GDC India), Cyfuture India, Sify Digital Services, Vensysco Technologies, Locuz Enterprise Solutions, Yotta Data Services, and Ishan Infotech, for technical presentations of their proposals on May 14.

There are already more than 230 applications to IndiaAI for building India-specific large language models (LLMs) and small language models (SLMs), and more than 30,000 GPU requests are in the pipeline.

"The bid submission is now open for third continuous empanelment," Singh said, indicating that the third round of the GPU tender is now accepting bids. Some companies are planning to bid newer generations of GPUs in the third round. Piyush Somani, founding chief executive of cloud company ESDS Software Solution, told ET: "We are in the process of acquiring the Nvidia Blackwell series of GPUs -- B200 and B300. Once those are deployed in our data centres, we will be submitting the bid for the third round of the IndiaAI Mission's GPU tender, which is now open." "We won't be participating with the older-generation GPUs," added Somani, who is also president of the Cloud Computing Innovation Council of India.

Mumbai-headquartered Cloudstrats Technologies, which had participated in the pre-bid meeting, was interested in bidding for the second round of the tender and had requested IndiaAI for an extension. The tender-inviting authority did not grant the extension. Cloudstrats is, however, going to participate in future empanelments, it said. The company told ET: "With established partnerships with hyperscalers like AWS and Microsoft Azure, we are well-positioned and experienced to deliver scalable AI infrastructure covering high-performance compute, secure storage, and cloud-native services."

India had formally launched its Rs 10,000-crore IndiaAI Mission in January, under which, against the target of 10,000 GPUs outlined in the IndiaAI compute pillar, empanelled bidders have offered 14,517 GPUs at L1 rates. As part of the Mission, the government is also incentivising the development of local language models built by academia and industry with investment capital and other support. The move is aimed at building up India's AI prowess.

Nvidia Benchmark Recipes Bring Deep Insights In AI Performance

Forbes

20-03-2025



As AI workloads and accelerated applications grow in sophistication and complexity, businesses and developers need better tools to assess their infrastructure's ability to handle the demands of both training and inference efficiently. To that end, Nvidia has been working on a set of performance-testing tools, called DGX Cloud Benchmarking Recipes, designed to help organizations evaluate how their hardware and cloud infrastructure perform when running the most advanced AI models available today. Our team at HotTech had a chance to kick the tires on a few of these recipes recently, and found the data they can capture to be extremely insightful.

Nvidia's toolkit also offers a database and calculator of performance results for GPU-compute workloads on various configurations, including different numbers of Nvidia H100 GPUs and different cloud service providers, while the recipes allow businesses to run realistic performance evaluations on their own infrastructure. The results can help guide decisions on whether to invest in more powerful hardware or higher cloud-provider service levels, or to tweak configurations to better meet machine learning demands. These tools also take a holistic approach that incorporates network technologies for optimal throughput.

Nvidia DGX Cloud Benchmarking Recipes are a set of pre-configured containers and scripts that users can download and run on their own infrastructure. These containers are optimized for testing the performance of various AI models under different configurations, making them very valuable for companies looking to benchmark systems, whether on premises or in the cloud, before committing to larger-scale AI workloads or infrastructure deployments.
In addition to offering static performance data, with time-to-train and efficiency figures calculated from its database, Nvidia has recipes readily available for download that let businesses run real-world tests on their own hardware or cloud infrastructure, helping them understand the performance impact of different configurations. The recipes include benchmarks for training models like Meta's Llama 3.1 and Nvidia's own Llama 3.1 branch, called Nemotron, across several cloud providers (AWS, Google Cloud, and Azure), with options for adjusting factors like model size, GPU usage, and precision. The database is broad enough to cover popular AI models, but it is primarily designed for testing large-scale pre-training tasks rather than inference on smaller models.

The benchmarking process also allows for flexibility. Users can tailor the tests to their specific infrastructure by adjusting parameters such as the number of GPUs and the size of the model being trained. The default hardware configuration in Nvidia's database of results uses the company's high-end H100 80GB GPUs, but it is designed to be adaptable. Although it does not currently include consumer or prosumer-grade GPUs (e.g., RTX A4000 or RTX 50) or the company's latest Blackwell GPU family, these options could be added in the future.

Running the DGX Cloud Benchmarking Recipes is straightforward, assuming a few prerequisites are met. The process is well-documented, with clear instructions on setting up, running the benchmarks, and interpreting the results. Once a benchmark is completed, users can review the performance data, which includes key metrics like training time, GPU usage, and throughput. This allows businesses to make data-driven decisions about which configurations deliver the best performance and efficiency for their AI workloads.
This could also go a long way in helping companies meet green-initiative goals for power consumption and efficiency. While the DGX Cloud Benchmarking Recipes offer valuable insights, there are a few areas where Nvidia's tools could be expanded. First, the benchmarking recipes are currently focused primarily on pre-training large models, not on real-time inference performance. Inference tasks, such as token generation or running smaller AI models, are equally important in many business applications. Expanding the toolset to include more detailed inference benchmarks would provide a fuller picture of how different hardware configurations handle these real-time demands. Additionally, by expanding the recipe selection to include lower-end or even higher-end GPUs (like Blackwell, or even competitive offerings), Nvidia could cater to a broader audience, particularly businesses that don't require the massive compute power of a Hopper H100 80GB cluster for every workload.

Regardless, Nvidia's new DGX Cloud Benchmarking Recipes look like a very helpful resource for evaluating the performance of AI compute infrastructure before making major investment decisions. They offer a practical way to understand how different configurations, whether cloud-based or on-premises, handle complex AI workloads. This is especially valuable for organizations exploring which cloud provider best meets their needs, or looking for new ways to optimize existing infrastructure. As AI's role in business and our everyday lives continues to grow, tools like this will become essential for guiding infrastructure decisions, balancing performance against cost and power consumption, and optimizing AI applications to meet real-world demands.
As Nvidia expands these recipes to include more inference-focused benchmarks and potentially expands its reference data with a wider range of GPU options, these tools could become even more indispensable to businesses and developers of all sizes.
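The metrics the recipes report, training time, GPU count, and throughput, lend themselves to simple cross-configuration comparisons. As a hedged sketch of how such results might be interpreted (the numbers and data layout below are hypothetical illustrations, not Nvidia's actual output format), per-GPU throughput and scaling efficiency could be derived like this:

```python
# Hypothetical benchmark results as (gpu_count, aggregate tokens/sec) pairs.
# These figures are illustrative only, not from Nvidia's results database.
results = [
    (8, 28_000),
    (64, 210_000),
    (512, 1_500_000),
]

baseline_gpus, baseline_tps = results[0]
baseline_per_gpu = baseline_tps / baseline_gpus  # 3,500 tok/s per GPU

for gpus, tps in results:
    per_gpu = tps / gpus
    # Scaling efficiency: fraction of the small-cluster per-GPU throughput
    # retained at this scale (1.0 would be perfect linear scaling).
    efficiency = per_gpu / baseline_per_gpu
    print(f"{gpus:4d} GPUs: {per_gpu:8.1f} tok/s/GPU, efficiency {efficiency:.2f}")
```

Comparisons along these lines, across GPU counts, cloud providers, or precisions, are exactly the kind of data-driven decision the recipes are meant to support.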

Nvidia Stock Investors Just Got Bad News From DeepSeek, but Certain Wall Street Analysts See a Silver Lining

Yahoo

29-01-2025



Nvidia (NASDAQ: NVDA) made stock market history on Monday, Jan. 27, but not the good kind. The chipmaker saw its share price decline 17% due to concerns about an artificial intelligence (AI) model from Chinese start-up DeepSeek. That nosedive erased $589 billion of its market value, the largest single-day loss for any company on record.

What triggered the meltdown? Despite U.S. government regulations that prohibit Nvidia from exporting its most advanced AI chips to China, DeepSeek reportedly created a large language model that rivals the performance of the more sophisticated models created in the U.S. The company also claims it trained the model while spending much less money and without the most advanced Nvidia chips. That news has been disastrous for Nvidia shareholders, given how sharply the stock crashed. But many Wall Street analysts see the sell-off as an overreaction that creates a long-awaited buying opportunity for investors.

DeepSeek published a research paper last week claiming its R1 reasoning model rivals the performance of OpenAI's o1 problem-solving model on certain benchmarks. The Chinese start-up also claims it spent less than $6 million training the large language model and says it completed the training with only 2,048 Nvidia H800 graphics processing units (GPUs). Importantly, the H800 GPU was designed specifically to comply with export restrictions. Comparatively, OpenAI spent more than $100 million training its GPT-4 model and used the more powerful Nvidia H100 GPUs. The company hasn't disclosed the precise number, but analysts estimate OpenAI used over 10,000 processors to train GPT-4. That estimate is plausible, given that Meta Platforms used 16,000 Nvidia H100 GPUs to train its Llama 3 model, spending an estimated $60 million.

The implications are alarming for Nvidia. If DeepSeek trained R1 using fewer, less-powerful chips, then U.S. companies could theoretically reduce spending by mimicking the training techniques employed by the Chinese AI start-up. In turn, hyperscale companies like Amazon, Alphabet, Meta Platforms, and Microsoft could spend less than previously anticipated on Nvidia GPUs in the coming years.

While DeepSeek trained its R1 model with impressive efficiency, many analysts view that as a positive development. They think it will accelerate the pace at which artificial intelligence is adopted, driving greater demand for Nvidia GPUs. Also, some industry experts question the validity of DeepSeek's claims concerning costs and infrastructure. Alexandr Wang, CEO of Scale AI, believes DeepSeek has 50,000 Nvidia H100 GPUs but didn't discuss the chips in its research paper because they violate U.S. export controls. The H100 is the second-most-powerful Nvidia GPU widely available today and is very expensive. If DeepSeek does have that many H100s, it likely understated the model-training costs.

Brian Colello at Morningstar maintained his target price of $130 per share on Nvidia stock following the DeepSeek news. "We doubt the leading cloud vendors and AI builders will pause their plans," he wrote in a note to clients. "We still think tech firms will continue to buy all the GPUs they can as part of this AI gold rush."

Dan Ives at Wedbush Securities says the stock market was "way wrong" in its reaction to the news. Based on discussions with about 25 technology companies, Ives thinks AI spending in the U.S. will be unaffected by the recent report from DeepSeek. He argues that cheaper training will accelerate AI adoption and sees the sell-off as one of the best buying opportunities in the last decade. Wedbush kept its target price on Nvidia at $175 per share.

Patrick Moorhead at Moor Insights & Strategy believes demand for Nvidia GPUs will keep increasing until companies achieve artificial general intelligence (AGI). He also thinks the novel training methods used by DeepSeek will accelerate AI adoption by reducing costs. Moorhead sees that as positive because it will create new use cases in less time.

Stacy Rasgon at Bernstein thinks DeepSeek understated the training costs associated with its R1 model. However, he sees its efficient training methods as a good thing. During a CNBC interview, Rasgon said the situation is an example of the Jevons paradox, an economic principle in which declining costs for a new technology are more than offset by the subsequent increase in demand. Bernstein maintained its target price on Nvidia of $175 per share.

Tom Lee at Fundstrat Global Advisors called the stock market drawdown an overreaction and compared the situation to the sell-off that followed the onset of COVID-19. "This pullback in Nvidia will prove to be a buying opportunity," he told CNBC.

Investors should remember that the DeepSeek situation is evolving rapidly and analysts may change their opinions. Meta Platforms and Microsoft are scheduled to report financial results on Jan. 29, followed by Alphabet and Amazon on Feb. 4. The market may get more clarity when the management teams at those companies host their quarterly earnings calls. However, several analysts currently believe DeepSeek's breakthrough will have little impact on long-term demand for Nvidia GPUs. In that sense, while the news has been disastrous for Nvidia shareholders, it has a silver lining: Nvidia stock is much cheaper today than it was last week, which creates a compelling buying opportunity for prospective investors. Indeed, among the 39 analysts who follow the company, eight of whom updated their forecasts in the last two days, Nvidia has an average target price of $177 per share. That implies 50% upside from its current share price of $118.
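The closing arithmetic can be checked directly from the figures the article cites (a $177 average analyst target against a $118 share price):

```python
# Figures cited in the article: $177 average analyst target price,
# $118 share price at the time of writing.
target_price = 177.0
current_price = 118.0

# Implied upside is the percentage gain needed to move from the current
# price to the target price.
implied_upside = (target_price - current_price) / current_price * 100
print(f"Implied upside: {implied_upside:.0f}%")  # prints "Implied upside: 50%"
```

The 50% figure is exact here: $59 of headroom on a $118 base is precisely half.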

Scale AI CEO Warns China's AI Advancements Bolstered by Nvidia GPUs

Yahoo

27-01-2025



Alexandr Wang, CEO of Scale AI, said on Thursday in a CNBC interview that China's rapid advancements in artificial intelligence are significantly supported by its substantial holdings of Nvidia's (NVDA, Financials) H100 GPUs, intensifying the competition with the United States.

Speaking at the World Economic Forum in Davos, Switzerland, Wang noted that DeepSeek, a top Chinese AI lab, published a novel model on Christmas Day and then DeepSeek-R1, a reasoning-oriented AI model that directly rivals OpenAI's newly published o1 model. Emphasizing that China holds a somewhat larger number of Nvidia H100 GPUs, which are essential for building sophisticated AI models, Wang described the U.S.-China competition in artificial intelligence as an "AI war". He noted that China has had strong access to these powerful AI processors in spite of U.S. export restrictions, enabling notable progress in its AI industry. Thanks to this access, Wang claims, Chinese AI models have been able to reach performance standards on par with or above the top American models.

Moreover, Wang anticipated that, in line with expectations for the generative AI market, the AI sector would be worth $1 trillion within the next decade. He also voiced his conviction that two to four years, a timeframe still hotly contested among AI professionals, is enough to reach artificial general intelligence. Wang underlined the need for the United States to expand its computing capacity and infrastructure if it is to maintain its leadership on the global stage. Emphasizing the need for major investment and development to sustain ongoing growth and innovation in the American AI industry, he said, "We need to unleash U.S. energy to enable this AI boom."

This article first appeared on GuruFocus.
