
Latest news with #SambaNova

SambaNova Launches Its AI Platform in AWS Marketplace

Channel Post MEA

3 days ago

  • Business
  • Channel Post MEA


SambaNova has announced that its AI platform is now available in AWS Marketplace, a digital catalog that helps organizations find, buy, deploy, and manage software, data products, and professional services from thousands of vendors. The listing allows organizations to seamlessly purchase and deploy SambaNova's fast inference services alongside their existing AWS infrastructure.

The availability marks a significant milestone in SambaNova's mission to make private, production-grade AI more accessible to enterprises by removing traditional barriers such as vendor onboarding and procurement delays. By leveraging existing AWS relationships, organizations can begin using SambaNova's advanced inference solutions in a few simple clicks, accelerating time to value while maintaining trusted billing and infrastructure practices.

'Enterprises face significant pressure to move rapidly from AI experimentation to full-scale production, yet procurement and integration challenges often stand in the way,' said Rodrigo Liang, CEO and co-founder of SambaNova. 'By offering SambaNova's platform in AWS Marketplace, we remove those obstacles, enabling organizations to access our industry-leading inference solutions instantly, using the procurement processes and a cloud environment they already trust.'

Accelerating Access to High-Performance Inference

SambaNova's listing in AWS Marketplace gives customers the ability to:

  • Procure through existing AWS billing arrangements, with no new vendor setup required.
  • Run fast, efficient inference on open-source models such as Llama 4 Maverick and DeepSeek R1 671B.
  • Connect securely over AWS PrivateLink for low-latency, private integration between AWS workloads and SambaNova Cloud.

'With the SambaNova platform running in AWS Marketplace, organizations gain access to secure, high-speed inference from the largest open-source models. Solutions like this will help businesses move from experimentation to full production with AI,' said Michele Rosen, Research Manager, Open GenAI, LLMs, and the Evolving Open Source, IDC.

This tight integration enables customers to deploy high-performance, multi-tenant inference solutions without purchasing or managing custom hardware, expanding SambaNova's reach into enterprise environments where time to value and IT friction have historically limited adoption.

Making High-Performance Inference More Accessible

With this listing, SambaNova is meeting enterprise customers where they already are: within their trusted cloud environments and procurement frameworks. By removing onboarding friction and offering seamless integration, SambaNova makes it easier than ever for organizations to evaluate, deploy, and scale high-performance inference solutions.

'This makes it dramatically easier for customers to start using SambaNova: no new contracts, no long onboarding, just click and go,' said Liang.
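The service described above is consumed through SambaNova's inference API. As a purely illustrative sketch (the endpoint URL and model identifier below are assumptions for the example, not taken from the article; consult SambaNova's documentation for the actual values), an OpenAI-style chat-completions request to such a service could be assembled like this:

```python
import json

# Assumed OpenAI-compatible endpoint; verify against SambaNova's docs.
BASE_URL = "https://api.sambanova.ai/v1"

def build_chat_request(model: str, user_message: str, max_tokens: int = 512) -> dict:
    """Assemble an OpenAI-style chat-completions payload for an inference call."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
        "max_tokens": max_tokens,
    }

# Hypothetical model ID used for illustration.
payload = build_chat_request("DeepSeek-R1", "Summarize our Q3 sales figures.")
print(json.dumps(payload, indent=2))
```

The payload would then be POSTed to `BASE_URL + "/chat/completions"` with the account's API key; with PrivateLink in place, that traffic stays on private AWS networking rather than the public internet.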

SambaNova launches its AI Platform in AWS Marketplace

Zawya

3 days ago

  • Business
  • Zawya


Dubai, United Arab Emirates: SambaNova, the AI inference company delivering fast, efficient AI chips and high-performance models, today announced that its AI platform is now available in AWS Marketplace, a digital catalog that helps organizations find, buy, deploy, and manage software, data products, and professional services from thousands of vendors. The listing allows organizations to seamlessly purchase and deploy SambaNova's fast inference services alongside their existing AWS infrastructure.

The availability marks a significant milestone in SambaNova's mission to make private, production-grade AI more accessible to enterprises by removing traditional barriers such as vendor onboarding and procurement delays. By leveraging existing AWS relationships, organizations can begin using SambaNova's advanced inference solutions in a few simple clicks, accelerating time to value while maintaining trusted billing and infrastructure practices.

'Enterprises face significant pressure to move rapidly from AI experimentation to full-scale production, yet procurement and integration challenges often stand in the way,' said Rodrigo Liang, CEO and co-founder of SambaNova. 'By offering SambaNova's platform in AWS Marketplace, we remove those obstacles, enabling organizations to access our industry-leading inference solutions instantly, using the procurement processes and cloud environment they already trust.'

Accelerating Access to High-Performance Inference

SambaNova's listing in AWS Marketplace gives customers the ability to:

  • Procure through existing AWS billing arrangements, with no new vendor setup required.
  • Run fast, efficient inference on open-source models such as Llama 4 Maverick and DeepSeek R1 671B.
  • Connect securely over AWS PrivateLink for low-latency, private integration between AWS workloads and SambaNova Cloud.

'With the SambaNova platform running in AWS Marketplace, organizations gain access to secure, high-speed inference from the largest open-source models. Solutions like this will help businesses move from experimentation to full production with AI,' said Michele Rosen, Research Manager, Open GenAI, LLMs, and the Evolving Open Source, IDC.

This tight integration enables customers to deploy high-performance, multi-tenant inference solutions without purchasing or managing custom hardware, expanding SambaNova's reach into enterprise environments where time to value and IT friction have historically limited adoption.

Making High-Performance Inference More Accessible

With this listing, SambaNova is meeting enterprise customers where they already are: within their trusted cloud environments and procurement frameworks. By removing onboarding friction and offering seamless integration, SambaNova makes it easier than ever for organizations to evaluate, deploy, and scale high-performance inference solutions.

'This makes it dramatically easier for customers to start using SambaNova: no new contracts, no long onboarding, just click and go,' said Liang.

Availability

SambaNova's inference platform is available immediately in AWS Marketplace. Enterprise customers can visit the SambaNova listing in AWS Marketplace to get started.

About SambaNova

Customers turn to SambaNova to quickly deploy state-of-the-art generative AI capabilities within the enterprise. Our purpose-built, enterprise-scale AI platform is the technology backbone for the next generation of AI computing. Headquartered in Palo Alto, California, SambaNova Systems was founded in 2017 by industry luminaries and hardware and software design experts from Sun/Oracle and Stanford University. Investors include SoftBank Vision Fund 2, funds and accounts managed by BlackRock, Intel Capital, GV, Walden International, Temasek, GIC, Redline Capital, Atlantic Bridge Ventures, Celesta, and several others.

New program backs 20 AI startups in Saudi Arabia

Arab News

21-05-2025

  • Business
  • Arab News


RIYADH: The Ministry of Communications and Information Technology has launched a specialized incubator to support the growth of artificial intelligence startups, the Saudi Press Agency reported on Wednesday. This initiative by the ministry's Center of Digital Entrepreneurship strengthens the Kingdom's position as a regional innovation hub and reinforces AI's role as a key driver of digital economic growth. The program includes 20 AI startups, empowering innovators to turn ideas into viable tech solutions, according to SPA. The four-month program targets tech enthusiasts, experts, and industry leaders. It offers training, financial support from the National Technology Development Program, mentorship, digital incentives, networking opportunities, and office space. In collaboration with the Saudi Data and AI Authority, the Saudi Company for Artificial Intelligence, SambaNova, and BIM Ventures, the program fosters a supportive environment for innovation and entrepreneurship.

SambaNova Cloud launches the fastest DeepSeek-R1 671B

Zawya

06-03-2025

  • Business
  • Zawya


Dubai, United Arab Emirates: DeepSeek-R1 671B, the best open-source reasoning model on the market, is now available on SambaNova Cloud, running at speeds of 198 tokens/second/prompt. DeepSeek showed the world how to reduce the training costs of building reasoning models, but inference on GPUs has remained a challenge; SambaNova has now shown how a new hardware architecture built on RDUs can achieve better inference performance. These speeds have been independently verified by Artificial Analysis, and you can sign up for SambaNova Cloud today to try the model in our playground. Developers looking to use this model via the API on the SambaNova Cloud Developer Tier can join our waitlist today. We will be rolling out access gradually over the coming weeks as we rapidly scale capacity for this model.

About DeepSeek-R1 (the real deal, not distilled)

DeepSeek-R1 took the world by storm, offering higher reasoning capabilities at a fraction of the cost of its competitors while being fully open source. This groundbreaking model, built on a Mixture of Experts (MoE) architecture with 671 billion parameters, showcases superior performance in math and reasoning tasks, even outperforming OpenAI's o1 on certain benchmarks.

SambaNova is a US-based company that runs the model on our RDU hardware in US data centers. Companies can also choose to work with SambaNova to deploy our hardware and the DeepSeek model on-premises in their own data centers for maximum data privacy and security. This is unlike the service run by the company DeepSeek (not the model), which runs its cloud service on GPUs without providing any controls for data privacy.

Unlike the 70B distilled version of the model (also available today on the SambaNova Cloud Developer Tier), DeepSeek-R1 uses reasoning to completely outclass the distilled versions in accuracy. As a reasoning model, R1 spends more tokens thinking before generating an answer, which allows it to produce far more accurate and thoughtful responses. For example, it was able to reason out how to improve the efficiency of running itself (Reddit), which is not possible without reasoning capabilities.

100X the Global Inference Compute of DeepSeek-R1

There is no shortage of demand for R1 given its performance and cost, but because DeepSeek-R1 is a reasoning model that generates more tokens at run time, developers today are unfortunately compute-constrained in getting enough access to R1, owing to the inefficiencies of GPUs. GPU inefficiency is one of the main reasons DeepSeek had to disable its own inference API service.

SambaNova RDU chips are ideally suited to big Mixture of Experts models like DeepSeek-R1, thanks to the dataflow architecture and three-tier memory design of the SN40L RDU. This design allows us to deploy these models optimally, using just one rack to deliver large performance gains instead of the 40 racks of 320 GPUs that powered DeepSeek's inference. To learn more about the RDU and our unique architectural advantage, read our blog.

Thanks to the efficiency of our RDU chips, SambaNova expects to be serving 100X the global demand for the DeepSeek-R1 model by the end of the year, making SambaNova RDU chips the most efficient inference platform for running reasoning models like DeepSeek-R1.

Improve Software Development with R1

Check out demos from our friends at Hugging Face and BlackBox showing how coding improves significantly with R1. In CyberCoder, BlackBox uses R1 to significantly improve the performance of coding agents, one of the primary use cases for developers working with the R1 model.

For media enquiries: Emad Abdo Emad@
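The quoted 198 tokens/second figure translates directly into user-visible latency, because a reasoning model streams its thinking tokens before the final answer. A back-of-envelope estimate (the token counts below are illustrative assumptions, not figures from the article):

```python
# Rough latency estimate at the reported per-user decode speed.
tokens_per_second = 198  # reported speed for DeepSeek-R1 671B on SambaNova Cloud

def time_to_answer(reasoning_tokens: int, answer_tokens: int) -> float:
    """Seconds to stream a full response at a fixed decode rate."""
    return (reasoning_tokens + answer_tokens) / tokens_per_second

# Suppose a hard problem spends ~1,500 tokens reasoning before a ~300-token answer.
seconds = time_to_answer(1500, 300)
print(f"{seconds:.1f} s")  # ~9.1 s end-to-end
```

The same 1,800-token response at, say, a quarter of that decode rate would take four times as long, which is why raw output speed matters more for reasoning models than for ordinary chat models.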

Should Nvidia be worried? Plucky inference rival replaces 320 Nvidia GPUs with 16 reconfigurable dataflow units

Yahoo

25-02-2025

  • Business
  • Yahoo


  • SambaNova runs DeepSeek-R1 at 198 tokens/sec using 16 custom chips
  • The SN40L RDU chip is reportedly 3X faster and 5X more efficient than GPUs
  • A 5X speed boost is promised soon, with 100X capacity by year-end on cloud

Chinese AI upstart DeepSeek has very quickly made a name for itself in 2025, with its R1 large-scale open-source language model, built for advanced reasoning tasks, showing performance on par with the industry's top models while being more cost-efficient.

SambaNova Systems, an AI startup founded in 2017 by experts from Sun/Oracle and Stanford University, has now announced what it claims is the world's fastest deployment of the DeepSeek-R1 671B LLM to date. The company says it has achieved 198 tokens per second, per user, using just 16 custom-built chips, replacing the 40 racks of 320 Nvidia GPUs that would typically be required.

'Powered by the SN40L RDU chip, SambaNova is the fastest platform running DeepSeek,' said Rodrigo Liang, CEO and co-founder of SambaNova. 'This will increase to 5X faster than the latest GPU speed on a single rack, and by year-end we will offer 100X capacity for DeepSeek-R1.'

While Nvidia's GPUs have traditionally powered large AI workloads, SambaNova argues that its reconfigurable dataflow architecture offers a more efficient solution. The company claims its hardware delivers three times the speed and five times the efficiency of leading GPUs while maintaining the full reasoning power of DeepSeek-R1.

'DeepSeek-R1 is one of the most advanced frontier AI models available, but its full potential has been limited by the inefficiency of GPUs,' said Liang. 'That changes today. We're bringing the next major breakthrough, collapsing inference costs and reducing hardware requirements from 40 racks to just one, to offer DeepSeek-R1 at the fastest speeds, efficiently.'

George Cameron, co-founder of AI evaluation firm Artificial Analysis, said his company had 'independently benchmarked SambaNova's cloud deployment of the full 671 billion parameter DeepSeek-R1 Mixture of Experts model at over 195 output tokens/s, the fastest output speed we have ever measured for DeepSeek-R1. High output speeds are particularly important for reasoning models, as these models use reasoning output tokens to improve the quality of their responses. SambaNova's high output speeds will support the use of reasoning models in latency-sensitive use cases.'

DeepSeek-R1 671B is now available on SambaNova Cloud, with API access offered to select users. The company is scaling capacity rapidly, and says it hopes to reach 20,000 tokens per second of total rack throughput 'in the near future'.
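The hardware claims in the piece reduce to simple arithmetic. A quick sketch, using only figures quoted in the article, shows the implied consolidation ratios and how many full-speed user streams the stated rack-throughput target would support:

```python
# All figures below are taken from the article's claims.
gpu_count, gpu_racks = 320, 40     # GPUs and racks reportedly needed otherwise
rdu_count, rdu_racks = 16, 1       # SambaNova's claimed replacement footprint
per_user_speed = 198               # tokens/sec per user on the RDU deployment
target_rack_throughput = 20_000    # tokens/sec total rack goal "in the near future"

chip_reduction = gpu_count / rdu_count    # 320 GPUs -> 16 RDUs: 20x fewer chips
rack_reduction = gpu_racks / rdu_racks    # 40 racks -> 1 rack: 40x less floor space
# How many users could stream at full speed once the throughput goal is met:
concurrent_users = target_rack_throughput // per_user_speed

print(chip_reduction, rack_reduction, concurrent_users)  # 20.0 40.0 101
```

In other words, the 20,000 tokens/sec target corresponds to roughly a hundred simultaneous users each decoding at the benchmarked 198 tokens/sec, from a single rack.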
