Latest news with #DeepSeek-R1 671B

Business Insider
7 days ago
- Business
- Business Insider
China's military tried to buy Nvidia chips for AI servers and a robot dog, documents show
The Chinese military wants to use Nvidia AI chips in a wide range of projects, from servers running Chinese startup DeepSeek's most powerful model to a 33-pound "robot dog" with a high-definition camera, according to documents reviewed by Business Insider.

BI reviewed records on the official procurement portal of the Chinese People's Liberation Army (PLA), where military units post thousands of requests for equipment that local contractors can bid on. Multiple requests over the past year reference both banned and permitted Nvidia hardware, including the H20 chip, on which the US is set to loosen export controls, a move that has sent Nvidia's stock soaring.

An Nvidia spokesperson told BI that China has "more than enough" domestic chips for all its military applications. "Buying a handful of older products to test the US competition is not a national security concern," the spokesperson said. "Using restricted products for military applications would be a nonstarter, without support, software, or maintenance."

While the Chinese military tries to buy Nvidia's hardware, the chipmaker faces pressure from other parts of the Chinese government, which has long supported efforts to build a domestic alternative. On Thursday, China's internet regulator said it had summoned the company to explain alleged backdoor security risks in its H20 chips, claims that Nvidia has denied, Reuters reported.

How the Chinese army wants to use Nvidia chips

The military projects reviewed by BI are typically for servers to run AI models, like those built by DeepSeek, for unspecified purposes. Three requests published this April call for banned Nvidia chips to power such servers. One requests at least eight H20 cards for a system capable of running one of DeepSeek's largest and most powerful models, DeepSeek-R1 671B, at full capacity. Another, tied to an "intelligent decision-making" support system that can also run DeepSeek, calls for four RTX 6000 graphics cards, which are also banned.
Although the contract hasn't been finalized, a supplier was provisionally selected in June. DeepSeek didn't respond to requests for comment.

Another server request from the same month asks for an H100 graphics card, banned from export to China since 2022, which can cost tens of thousands of dollars apiece. The listing specifies that the cards must be delivered in their original packaging and installed on-site. Another AI server request, published in July 2024, also calls for H100 GPUs.

The Chinese military isn't only calling for AI servers. One project from April, for example, shows a military unit requesting a 33-pound "robot dog" with an Nvidia Jetson computing module as part of an unspecified training project. This particular request was later canceled, and Jetson modules are not barred from export to China for most use cases.

It's unclear whether the Chinese military actually obtained these chips. Ryan Fedasiuk, a former State Department advisor on China tech policy, says the Chinese military has many ways to acquire them. "There are ample cutouts, subsidiaries, and shell companies that the PLA can and does use to source chips illicitly from American companies, including Nvidia," he told BI.

Nvidia plans to sell H20 chips to China again

Some national security policymakers and professionals have expressed concerns about making it easier for China to get some Nvidia chips. Twenty of them sent a letter to the Commerce Department on Monday asking the US to reimpose the H20 ban, saying the chips would "support China's military." "This reversal will certainly open the floodgates," said Craig Singleton, a senior director at the Foundation for Defense of Democracies who co-signed the letter. The Commerce Department told BI that the current administration has tightened controls compared to its predecessor.
"The Trump Administration will consider any H20 license applications carefully, accounting for both the benefits and the costs of potential exports from America and taking into account the views of experts across the US Government," a Commerce spokesperson said.

'Nobody can beat Nvidia'

Fedasiuk reviewed some of the records found by BI and said they show that the Chinese military, like Chinese AI companies, wants to use the best hardware possible, and that domestic chips like Huawei's don't cut it. "In terms of sheer processing power that a given chip is capable of bringing to bear, nobody can beat Nvidia. Huawei is not close," Fedasiuk said.

Nvidia CEO Jensen Huang has downplayed such concerns, saying China's military doesn't want to rely on US technology. "They simply can't rely on it," Huang said in July on CNN. "It could, of course, be limited at any time."


Zawya
06-03-2025
- Business
- Zawya
SambaNova Cloud launches the fastest DeepSeek-R1 671B
Dubai, United Arab Emirates: DeepSeek-R1 671B, the leading open source reasoning model on the market, is now available on SambaNova Cloud, running at 198 tokens/second/prompt. DeepSeek showed the world how to reduce the cost of training reasoning models, but inference on GPUs remained a challenge; SambaNova has now shown that a new hardware architecture built on RDUs can deliver better inference performance. These speeds have been independently verified by Artificial Analysis, and you can sign up for SambaNova Cloud today to try the model in our playground. Developers who want to use this model via the API on the SambaNova Cloud Developer Tier can join our waitlist today. We will be rolling out access gradually over the coming weeks as we rapidly scale capacity for this model.

About DeepSeek-R1 (the real deal, not distilled)

DeepSeek-R1 took the world by storm, offering stronger reasoning capabilities at a fraction of the cost of its competitors while being fully open source. This groundbreaking model, built on a Mixture of Experts (MoE) architecture with 671 billion parameters, shows superior performance on math and reasoning tasks, even outperforming OpenAI's o1 on certain benchmarks.

SambaNova is a US-based company that runs the model on our RDU hardware in US data centers. Companies can also work with SambaNova to deploy our hardware and the DeepSeek model on-premises in their own data centers for maximum data privacy and security. This is unlike the service run by the company DeepSeek (not the model), which operates its cloud service on GPUs without providing any controls for data privacy. Unlike the 70B distilled version of the model (also available today on the SambaNova Cloud Developer Tier), the full DeepSeek-R1 uses reasoning to outclass the distilled versions on accuracy.
As a reasoning model, R1 spends more tokens thinking before generating an answer, which lets it produce much more accurate and thoughtful responses. For example, it was able to reason through how to improve the efficiency of running itself (as discussed on Reddit), something that is not possible without reasoning capabilities.

100X the Global Inference Compute of DeepSeek-R1

There is no shortage of demand for R1 given its performance and cost, but because DeepSeek-R1 is a reasoning model that generates more tokens at run time, developers today are compute-constrained in getting enough access to R1 due to the inefficiency of GPUs. GPU inefficiency is one of the main reasons DeepSeek had to disable its own inference API service. SambaNova's RDU chips are well suited to large Mixture of Experts models like DeepSeek-R1, thanks to the dataflow architecture and three-tier memory design of the SN40L RDU. This design lets us deploy these models optimally, delivering large performance gains from a single rack instead of the 40 racks of 320 GPUs that were used to power DeepSeek's inference. To learn more about the RDU and our unique architectural advantage, read our blog.

Thanks to the efficiency of our RDU chips, SambaNova expects to be serving 100X the global demand for the DeepSeek-R1 model by the end of the year. This makes SambaNova RDU chips the most efficient inference platform for running reasoning models like DeepSeek-R1.

Improve Software Development with R1

Check out demos from our friends at Hugging Face and BlackBox showing how coding improves significantly with R1. In CyberCoder, BlackBox uses R1 to significantly improve the performance of coding agents, one of the primary use cases for developers using the R1 model.
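The consolidation claim above, one RDU rack replacing 40 racks of 320 GPUs, can be sanity-checked with back-of-the-envelope arithmetic. The figures below come from the press release; the script itself is purely illustrative:

```python
# Sanity check of the hardware-consolidation claim: 40 racks of GPUs
# (320 GPUs total) replaced by a single rack of 16 SN40L RDUs.
GPU_COUNT = 320   # GPUs reportedly used for DeepSeek's own inference
GPU_RACKS = 40    # racks those GPUs occupied
RDU_COUNT = 16    # SN40L RDUs in one SambaNova rack

gpus_per_rack = GPU_COUNT / GPU_RACKS   # 8 GPUs per rack
chip_reduction = GPU_COUNT / RDU_COUNT  # 20x fewer chips
rack_reduction = GPU_RACKS / 1          # 40x fewer racks

print(f"{gpus_per_rack:.0f} GPUs per rack")
print(f"{chip_reduction:.0f}x fewer chips, {rack_reduction:.0f}x fewer racks")
```

In other words, the claim implies a 20x reduction in chip count and a 40x reduction in rack count, which is where the "collapsing inference costs" framing comes from.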
Yahoo
25-02-2025
- Business
- Yahoo
Should Nvidia be worried? Plucky inference rival replaces 320 Nvidia GPUs with 16 reconfigurable dataflow units
- SambaNova runs DeepSeek-R1 at 198 tokens/sec using 16 custom chips
- The SN40L RDU chip is reportedly 3X faster and 5X more efficient than GPUs
- A 5X speed boost is promised soon, with 100X capacity by year-end on the cloud

Chinese AI upstart DeepSeek has very quickly made a name for itself in 2025, with its R1 large-scale open source language model, built for advanced reasoning tasks, showing performance on par with the industry's top models while being more cost-efficient. SambaNova Systems, an AI startup founded in 2017 by experts from Sun/Oracle and Stanford University, has now announced what it claims is the world's fastest deployment of the DeepSeek-R1 671B LLM to date. The company says it has achieved 198 tokens per second, per user, using just 16 custom-built chips, replacing the 40 racks of 320 Nvidia GPUs that would typically be required.

'Powered by the SN40L RDU chip, SambaNova is the fastest platform running DeepSeek,' said Rodrigo Liang, CEO and co-founder of SambaNova. 'This will increase to 5X faster than the latest GPU speed on a single rack - and by year-end, we will offer 100X capacity for DeepSeek-R1.'

While Nvidia's GPUs have traditionally powered large AI workloads, SambaNova argues that its reconfigurable dataflow architecture offers a more efficient solution. The company claims its hardware delivers three times the speed and five times the efficiency of leading GPUs while maintaining the full reasoning power of DeepSeek-R1.

'DeepSeek-R1 is one of the most advanced frontier AI models available, but its full potential has been limited by the inefficiency of GPUs,' said Liang. 'That changes today. We're bringing the next major breakthrough - collapsing inference costs and reducing hardware requirements from 40 racks to just one - to offer DeepSeek-R1 at the fastest speeds, efficiently.'
George Cameron, co-founder of AI evaluation firm Artificial Analysis, said his company had 'independently benchmarked SambaNova's cloud deployment of the full 671 billion parameter DeepSeek-R1 Mixture of Experts model at over 195 output tokens/s, the fastest output speed we have ever measured for DeepSeek-R1. High output speeds are particularly important for reasoning models, as these models use reasoning output tokens to improve the quality of their responses. SambaNova's high output speeds will support the use of reasoning models in latency-sensitive use cases.'

DeepSeek-R1 671B is now available on SambaNova Cloud, with API access offered to select users. The company is scaling capacity rapidly and says it hopes to reach 20,000 tokens per second of total rack throughput "in the near future".
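Cameron's point about output speed mattering for reasoning models follows from simple arithmetic: because a model like R1 emits hidden "thinking" tokens before its final answer, end-to-end latency scales with the total number of generated tokens. A minimal sketch, using the roughly 198 tokens/sec figure from these articles and made-up illustrative token counts:

```python
# Why decode speed dominates the user experience of reasoning models:
# latency = (reasoning tokens + answer tokens) / tokens-per-second.
def response_latency(reasoning_tokens: int, answer_tokens: int,
                     tokens_per_sec: float) -> float:
    """Seconds to stream a full response at a given decode speed."""
    return (reasoning_tokens + answer_tokens) / tokens_per_sec

# A hypothetical query triggering 2,000 reasoning tokens plus a
# 500-token visible answer:
fast = response_latency(2000, 500, 198.0)  # reported SambaNova speed
slow = response_latency(2000, 500, 25.0)   # a slower deployment

print(f"{fast:.1f}s vs {slow:.1f}s")  # prints "12.6s vs 100.0s"
```

The same 2,500-token response drops from well over a minute to around 12 seconds, which is why high output speeds matter for latency-sensitive uses of reasoning models.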