Latest news with #Llama4Maverick


Channel Post MEA
18 hours ago
- Business
- Channel Post MEA
SambaNova Launches Its AI Platform in AWS Marketplace
SambaNova has announced that its AI platform is now available in AWS Marketplace, a digital catalog that helps customers find, buy, deploy, and manage software, data products, and professional services from thousands of vendors. Organizations can now purchase and deploy SambaNova's fast inference services alongside their existing AWS infrastructure.

The listing marks a significant milestone in SambaNova's mission to make private, production-grade AI more accessible to enterprises by removing traditional barriers such as vendor onboarding and procurement delays. By leveraging existing AWS relationships, organizations can begin using SambaNova's advanced inference solutions with a few simple clicks, accelerating time to value while maintaining trusted billing and infrastructure practices.

"Enterprises face significant pressure to move rapidly from AI experimentation to full-scale production, yet procurement and integration challenges often stand in the way," said Rodrigo Liang, CEO and co-founder of SambaNova. "By offering SambaNova's platform in AWS Marketplace, we remove those obstacles, enabling organizations to access our industry-leading inference solutions instantly, using the procurement processes and a cloud environment they already trust."

Accelerating Access to High-Performance Inference

SambaNova's listing in AWS Marketplace gives customers the ability to:
- Procure through existing AWS billing arrangements, with no new vendor setup required.
- Run open-source models such as Llama 4 Maverick and DeepSeek R1 671B on SambaNova's fast, efficient inference service.
- Integrate securely and with low latency via AWS PrivateLink, which provides private connectivity between AWS workloads and SambaNova Cloud.

"With the SambaNova platform running in AWS Marketplace, organizations gain access to secure, high-speed inference from the largest open-source models. Solutions like this will help businesses move from experimentation to full production with AI," said Michele Rosen, Research Manager, Open GenAI, LLMs, and the Evolving Open Source, IDC.

This integration lets customers deploy high-performance, multi-tenant inference solutions without purchasing or managing custom hardware, expanding SambaNova's reach into enterprise environments where time to value and IT friction have historically limited adoption.

Making High-Performance Inference More Accessible

With this listing, SambaNova is meeting enterprise customers where they already are: within their trusted cloud environments and procurement frameworks. By removing onboarding friction and offering seamless integration, SambaNova makes it easier than ever for organizations to evaluate, deploy, and scale high-performance inference solutions.

"This makes it dramatically easier for customers to start using SambaNova: no new contracts, no long onboarding, just click and go," said Liang.
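For developers, the practical effect of the listing is that SambaNova's inference service can be called like any other hosted LLM endpoint once procurement is handled through AWS. The snippet below is a minimal sketch of querying SambaNova Cloud through its OpenAI-compatible chat completions interface; the base URL and model identifier are assumptions drawn from SambaNova's public documentation, not from this announcement, so substitute the values your own deployment exposes.

```python
# Minimal sketch: querying Llama 4 Maverick on SambaNova Cloud via the
# OpenAI-compatible API. The endpoint URL and model name are assumed;
# replace them with whatever your AWS Marketplace deployment provides.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.sambanova.ai/v1",  # assumed endpoint
    api_key="YOUR_SAMBANOVA_API_KEY",        # from your SambaNova account
)

response = client.chat.completions.create(
    model="Llama-4-Maverick-17B-128E-Instruct",  # assumed model identifier
    messages=[{"role": "user", "content": "Summarize the key terms of this contract."}],
)
print(response.choices[0].message.content)
```

With AWS PrivateLink configured, the same client can point at a private VPC endpoint instead of the public URL, keeping traffic between AWS workloads and SambaNova Cloud off the public internet.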


Business Wire
3 days ago
- Business
- Business Wire
Cerebras Beats NVIDIA Blackwell in Llama 4 Maverick Inference
Cerebras Breaks the 2,500 Tokens Per Second Barrier with Llama 4 Maverick 400B

SUNNYVALE, Calif., May 28, 2025--(BUSINESS WIRE)--Last week, Nvidia announced that eight Blackwell GPUs in a DGX B200 could deliver 1,000 tokens per second (TPS) per user on Meta's Llama 4 Maverick. Today, the same independent benchmark firm, Artificial Analysis, measured Cerebras at more than 2,500 TPS per user, more than double the performance of Nvidia's flagship solution.

"Cerebras has beaten the Llama 4 Maverick inference speed record set by NVIDIA last week," said Micah Hill-Smith, Co-Founder and CEO of Artificial Analysis. "Artificial Analysis has benchmarked Cerebras' Llama 4 Maverick endpoint at 2,522 tokens per second, compared to NVIDIA Blackwell's 1,038 tokens per second for the same model. We've tested dozens of vendors, and Cerebras is the only inference solution that outperforms Blackwell for Meta's flagship model."

With today's results, Cerebras has set a world record for LLM inference speed on the 400B-parameter Llama 4 Maverick model, the largest and most powerful in the Llama 4 family. Artificial Analysis tested multiple other vendors, with the following results:
- Cerebras: 2,522 t/s
- NVIDIA Blackwell: 1,038 t/s
- SambaNova: 794 t/s
- Groq: 549 t/s
- Amazon: 290 t/s
- Google: 125 t/s
- Microsoft Azure: 54 t/s

Andrew Feldman, CEO of Cerebras Systems, said, "The most important AI applications being deployed in enterprise today (agents, code generation, and complex reasoning) are bottlenecked by inference latency. These use cases often involve multi-step chains of thought or large-scale retrieval and planning, with generation speeds as low as 100 tokens per second on GPUs, causing wait times of minutes and making production deployment impractical. Cerebras has led the charge in redefining inference performance across models like Llama, DeepSeek, and Qwen, regularly delivering over 2,500 TPS per user."

With its world-record performance, Cerebras is the optimal solution for Llama 4 in any deployment scenario. Not only is Cerebras Inference the first and only API to break the 2,500 TPS-per-user milestone on this model, but unlike the Nvidia Blackwell system used in the Artificial Analysis benchmark, the Cerebras hardware and API are available now. Nvidia used custom software optimizations that are not available to most users, and none of Nvidia's inference providers offers a service at Nvidia's published performance. This suggests that to reach 1,000 TPS per user, Nvidia had to reduce throughput by running at batch size 1 or 2, leaving the GPUs at less than 1% utilization. Cerebras achieved its record-breaking performance without any special kernel optimizations, and it will be available to everyone through Meta's API service coming soon.

For cutting-edge AI applications such as reasoning, voice, and agentic workflows, speed is paramount. These applications gain intelligence by processing more tokens during inference, which can also make them slow and force customers to wait. And when customers are forced to wait, they leave for competitors who provide answers faster, a finding Google demonstrated with search more than a decade ago. With record-breaking performance, Cerebras hardware and the resulting API service are the best choice for developers and enterprise AI users around the world. For more information, please visit
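The latency argument is easy to make concrete with a back-of-the-envelope sketch using the per-user speeds reported above. The step count and tokens-per-step below are illustrative assumptions, not benchmark parameters.

```python
# Rough wait times for a multi-step agent at the per-user decode speeds
# reported above. STEPS and TOKENS_PER_STEP are hypothetical values.
STEPS = 5                # e.g., plan, retrieve, reason, call a tool, answer
TOKENS_PER_STEP = 1_000  # generated tokens per step

for name, tps in [
    ("GPU serving at 100 TPS/user", 100),
    ("NVIDIA Blackwell (1,038 TPS/user)", 1_038),
    ("Cerebras (2,522 TPS/user)", 2_522),
]:
    wait_s = STEPS * TOKENS_PER_STEP / tps
    print(f"{name}: {wait_s:.1f} s for {STEPS * TOKENS_PER_STEP:,} tokens")

# GPU serving at 100 TPS/user: 50.0 s for 5,000 tokens
# NVIDIA Blackwell (1,038 TPS/user): 4.8 s for 5,000 tokens
# Cerebras (2,522 TPS/user): 2.0 s for 5,000 tokens
```

This order-of-magnitude gap in wait time, rather than per-token cost, is the bottleneck Feldman's quote identifies for production agent deployments.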
Yahoo
21-05-2025
- Business
- Yahoo
Meta launches program to encourage startups to use its Llama AI models
Meta is launching a new program to incentivize startups to adopt its Llama AI models. The program, Llama for Startups, provides companies "direct support" from Meta's Llama team, as well as funding in certain cases. Any U.S.-based firm that is incorporated, has raised less than $10 million in funding, has at least one developer on staff, and is building generative AI applications is eligible to apply by the May 30 deadline.

"Members may receive up to $6,000 per month for up to six months to help them offset the costs of building and enhancing their generative AI solutions," Meta wrote in a blog post. "Our experts will work closely with them to get started and explore advanced use cases of Llama that could benefit their startups."

The launch of the program comes as Meta tries to cement its lead in the fiercely competitive open model space. While the tech giant's Llama models have racked up more than a billion downloads to date, rivals such as DeepSeek, Google, and Alibaba's Qwen threaten to upend Meta's efforts to establish a far-reaching model ecosystem.

Not helping matters, Llama has suffered several setbacks over the past few months. The Wall Street Journal reported last week that Meta has delayed the rollout of a flagship AI model, Llama 4 Behemoth, over concerns that it underperforms on key benchmarks. In April, Meta had to fend off allegations that it cheated on LM Arena, a popular crowdsourced AI benchmark: the company used a version of its Llama 4 Maverick model "optimized for conversationality" to achieve a high score, but released a different version of Maverick publicly.

Meta has huge ambitions for Llama and its broader generative AI portfolio. Last year, the company predicted its generative AI products would bring in $2 billion to $3 billion in revenue in 2025, and between $460 billion and $1.4 trillion by 2035. Meta has revenue-sharing agreements with some companies that host its Llama models, and it recently launched an API for customizing Llama releases. Meta AI, the company's Llama-powered assistant, may eventually show ads and offer a subscription with additional features, CEO Mark Zuckerberg said during the company's Q1 earnings call.

These products have proven costly to build. In 2024, Meta's "GenAI" budget was more than $900 million, and this year it could exceed $1 billion, not including the infrastructure needed to run and train the models. Meta previously said it plans to spend $60 billion to $80 billion on capital expenditures in 2025, primarily on new data centers.


RTÉ News
16-05-2025
- Business
- RTÉ News
Meta delays release of its 'Behemoth' AI model
Meta Platforms is delaying the release of its flagship "Behemoth" AI model due to concerns about its capabilities, the Wall Street Journal has reported, citing people familiar with the matter.

Company engineers are struggling to significantly improve Behemoth's capabilities, prompting staff to question whether the gains over earlier versions are significant enough to justify a public release, the report said.

Early in its development, Behemoth was internally scheduled for an April release to coincide with Meta's inaugural AI conference for developers, but Meta later pushed the internal launch target to June, according to the report. It has now been delayed to fall or later.

Meta said in April that it was previewing Llama 4 Behemoth, which it called "one of the smartest LLMs in the world and our most powerful yet to serve as a teacher for our new models". That month, it also released the latest versions of its Llama LLM, Llama 4 Scout and Llama 4 Maverick.