Latest news with #SambaNovaCloud


Channel Post MEA
4 days ago
SambaNova Launches Its AI Platform in AWS Marketplace
SambaNova has announced that its AI platform is now available in AWS Marketplace, a digital catalog that helps organizations find, buy, deploy, and manage software, data products, and professional services from thousands of vendors. Organizations can now purchase and deploy SambaNova's fast inference services alongside their existing AWS infrastructure.

The listing marks a significant milestone in SambaNova's mission to make private, production-grade AI more accessible to enterprises by removing traditional barriers such as vendor onboarding and procurement delays. By leveraging existing AWS relationships, organizations can begin using SambaNova's advanced inference solutions in a few clicks, accelerating time to value while keeping trusted billing and infrastructure practices.

'Enterprises face significant pressure to move rapidly from AI experimentation to full-scale production, yet procurement and integration challenges often stand in the way,' said Rodrigo Liang, CEO and co-founder of SambaNova. 'By offering SambaNova's platform in AWS Marketplace, we remove those obstacles, enabling organizations to access our industry-leading inference solutions instantly, using the procurement processes and a cloud environment they already trust.'

Accelerating Access to High-Performance Inference

SambaNova's listing in AWS Marketplace gives customers the ability to:

- Procure through existing AWS billing arrangements, with no new vendor setup required.
- Run open-source models such as Llama 4 Maverick and DeepSeek-R1 671B on SambaNova's fast, efficient inference platform.
- Connect securely over AWS PrivateLink for low-latency, private integration between AWS workloads and SambaNova Cloud.

'With the SambaNova platform running in AWS Marketplace, organizations gain access to secure, high-speed inference from the largest open-source models. Solutions like this will help businesses move from experimentation to full production with AI,' said Michele Rosen, Research Manager, Open GenAI, LLMs, and the Evolving Open Source, IDC.

This tight integration enables customers to deploy high-performance, multi-tenant inference solutions without purchasing or managing custom hardware, expanding SambaNova's reach into enterprise environments where time to value and IT friction have historically limited adoption.

Making High-Performance Inference More Accessible

With this listing in AWS Marketplace, SambaNova is meeting enterprise customers where they already are: within their trusted cloud environments and procurement frameworks. By removing onboarding friction and offering seamless integration, SambaNova makes it easier than ever for organizations to evaluate, deploy, and scale high-performance inference solutions.

'This makes it dramatically easier for customers to start using SambaNova: no new contracts, no long onboarding, just click and go,' said Liang.
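For developers evaluating the service, SambaNova Cloud exposes an OpenAI-compatible chat-completions API. The sketch below builds such a request; the base URL, model identifier, and environment-variable name are illustrative assumptions, not values confirmed by the article:

```python
# Hypothetical sketch of calling an OpenAI-compatible chat-completions
# endpoint such as SambaNova Cloud's. BASE_URL, the model name, and
# SAMBANOVA_API_KEY are assumptions for illustration.
import json
import os

BASE_URL = "https://api.sambanova.ai/v1"  # assumed endpoint


def build_chat_request(model: str, prompt: str, api_key: str):
    """Return the (url, headers, body) pieces of a chat-completions call."""
    body = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.1,
    }
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    return f"{BASE_URL}/chat/completions", headers, json.dumps(body).encode()


url, headers, payload = build_chat_request(
    "Llama-4-Maverick-17B-128E-Instruct",  # assumed model identifier
    "Summarize AWS PrivateLink in one sentence.",
    os.environ.get("SAMBANOVA_API_KEY", "sk-demo"),
)
# Actually sending it would use any HTTP client, e.g.:
#   urllib.request.urlopen(urllib.request.Request(url, data=payload, headers=headers))
print(url)
```

Because the payload follows the widely used chat-completions shape, existing OpenAI-client code can typically be pointed at the service by swapping the base URL and key.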


Zawya
06-03-2025
SambaNova Cloud launches the fastest DeepSeek-R1 671B
Dubai, United Arab Emirates: DeepSeek-R1 671B, the best open-source reasoning model on the market, is now available on SambaNova Cloud running at 198 tokens/second per prompt. DeepSeek showed the world how to reduce the training costs of building reasoning models, but inference on GPUs remained a challenge until SambaNova showed how a new hardware architecture built on RDUs can achieve better inference performance. These speeds have been independently verified by Artificial Analysis, and you can sign up for SambaNova Cloud today to try the model in our playground. Developers who want to use the model via the API on the SambaNova Cloud Developer Tier can join our waitlist today; we will roll out access over the coming weeks as we rapidly scale capacity for this model.

About DeepSeek-R1 (the real deal, not distilled)

DeepSeek-R1 took the world by storm, offering higher reasoning capabilities at a fraction of the cost of its competitors while being completely open source. This groundbreaking model, built on a Mixture of Experts (MoE) architecture with 671 billion parameters, shows superior performance on math and reasoning tasks, even outperforming OpenAI's o1 on certain benchmarks. SambaNova is a US-based company that runs the model on our RDU hardware in US data centers. Companies can also work with SambaNova to deploy our hardware and the DeepSeek model on-premises in their own data centers for maximum data privacy and security. This is unlike the service run by the company DeepSeek (not the model), which runs its cloud service on GPUs without providing any controls for data privacy. Unlike the 70B distilled version of the model (also available today on the SambaNova Cloud Developer Tier), the full DeepSeek-R1 uses reasoning to outclass the distilled versions in accuracy.
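A practical consequence of R1 being a reasoning model is that its output carries a chain of thought before the final answer; in many deployments this is wrapped in `<think>...</think>` tags. A minimal sketch, assuming that tag convention, of separating the reasoning trace from the answer so an application can log one and display the other:

```python
# Sketch: split a reasoning model's output into (reasoning, answer),
# assuming the chain of thought arrives inside <think>...</think> tags
# (a common convention for DeepSeek-R1 deployments, not guaranteed
# by every serving stack).
import re

_THINK = re.compile(r"<think>(.*?)</think>", re.DOTALL)


def split_reasoning(text: str):
    """Return (reasoning, answer); reasoning is '' if no tags are found."""
    m = _THINK.search(text)
    if not m:
        return "", text.strip()
    reasoning = m.group(1).strip()
    answer = (text[: m.start()] + text[m.end():]).strip()
    return reasoning, answer


raw = "<think>The user wants 2+2. That is 4.</think>The answer is 4."
reasoning, answer = split_reasoning(raw)
print(answer)  # -> The answer is 4.
```

Keeping the trace separate also makes it easy to measure how many tokens the model spends thinking versus answering.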
As a reasoning model, R1 uses more tokens to think before generating an answer, which lets it produce much more accurate and thoughtful responses. For example, it was able to reason out how to improve the efficiency of running itself (Reddit), something that is not possible without reasoning capabilities.

100X the Global Inference Compute of DeepSeek-R1

There is no shortage of demand for R1 given its performance and cost, but because DeepSeek-R1 is a reasoning model that generates more tokens at run time, developers today are compute-constrained and struggle to get enough access to R1 due to the inefficiencies of the GPU. GPU inefficiency is one of the main reasons DeepSeek had to disable its own inference API service. SambaNova's RDU chips are ideally suited to large Mixture of Experts models like DeepSeek-R1, thanks to the dataflow architecture and three-tier memory design of the SN40L RDU. This design lets us deploy these models optimally using just one rack, delivering large performance gains over the 40 racks of 320 GPUs that were used to power DeepSeek's inference. To learn more about the RDU and our unique architectural advantage, read our blog. Thanks to the efficiency of our RDU chips, SambaNova expects to be serving 100X the global demand for the DeepSeek-R1 model by the end of the year, making SambaNova RDU chips the most efficient inference platform for running reasoning models like DeepSeek-R1.

Improve Software Development with R1

Check out demos from our friends at Hugging Face and BlackBox showing how R1 significantly improves coding. In CyberCoder, BlackBox uses R1 to significantly improve the performance of coding agents, one of the primary use cases for developers using the R1 model.

For media enquiries: Emad Abdo Emad@
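To put the quoted 198 tokens/second in context for a reasoning model that "uses more tokens to think before generating an answer", here is a back-of-envelope latency sketch; the token counts are illustrative assumptions, only the speed comes from the article:

```python
# Illustrative arithmetic: how decoding speed translates into
# wall-clock latency when a reasoning model spends hidden tokens
# thinking. Token counts below are made-up assumptions; 198 tokens/s
# is the throughput quoted in the article.
def generation_seconds(total_tokens: int, tokens_per_second: float) -> float:
    """Time to decode `total_tokens` at a steady `tokens_per_second`."""
    return total_tokens / tokens_per_second


reasoning_tokens = 1500   # hidden chain-of-thought (assumed)
answer_tokens = 500       # visible answer (assumed)
speed = 198.0             # tokens/second, per the article

latency = generation_seconds(reasoning_tokens + answer_tokens, speed)
print(f"{latency:.1f} s")  # -> 10.1 s (2000 tokens / 198 tokens/s)
```

The same 2,000-token completion at, say, 50 tokens/second would take 40 seconds, which is why decode throughput matters so much more for reasoning models than for ordinary chat models.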