Latest news with #AndrewFeldman


Business Wire
28-05-2025
- Business
- Business Wire
Cerebras Beats NVIDIA Blackwell in Llama 4 Maverick Inference
Cerebras Breaks the 2,500 Tokens Per Second Barrier with Llama 4 Maverick 400B

SUNNYVALE, Calif., May 28, 2025--(BUSINESS WIRE)--Last week, Nvidia announced that 8 Blackwell GPUs in a DGX B200 could demonstrate 1,000 tokens per second (TPS) per user on Meta's Llama 4 Maverick. Today, the same independent benchmark firm, Artificial Analysis, measured Cerebras at more than 2,500 TPS/user, more than doubling the performance of Nvidia's flagship solution.

"Cerebras has beaten the Llama 4 Maverick inference speed record set by NVIDIA last week," said Micah Hill-Smith, Co-Founder and CEO of Artificial Analysis. "Artificial Analysis has benchmarked Cerebras' Llama 4 Maverick endpoint at 2,522 tokens per second, compared to NVIDIA Blackwell's 1,038 tokens per second for the same model. We've tested dozens of vendors, and Cerebras is the only inference solution that outperforms Blackwell for Meta's flagship model."

With today's results, Cerebras has set a world record for LLM inference speed on the 400B-parameter Llama 4 Maverick model, the largest and most powerful in the Llama 4 family. Artificial Analysis tested multiple other vendors; the results were as follows: SambaNova 794 t/s, Groq 549 t/s, Amazon 290 t/s, Google 125 t/s, and Microsoft Azure 54 t/s.

Andrew Feldman, CEO of Cerebras Systems, said, "The most important AI applications being deployed in enterprise today—agents, code generation, and complex reasoning—are bottlenecked by inference latency. These use cases often involve multi-step chains of thought or large-scale retrieval and planning, with generation speeds as low as 100 tokens per second on GPUs, causing wait times of minutes and making production deployment impractical. Cerebras has led the charge in redefining inference performance across models like Llama, DeepSeek, and Qwen, regularly delivering over 2,500 TPS/user."

With its world record performance, Cerebras is the optimal solution for Llama 4 in any deployment scenario. Not only is Cerebras Inference the first and only API to break the 2,500 TPS/user milestone on this model, but unlike the Nvidia Blackwell system used in the Artificial Analysis benchmark, the Cerebras hardware and API are available now. Nvidia used custom software optimizations that are not available to most users, and notably, none of Nvidia's inference providers offers a service at Nvidia's published performance. This suggests that to achieve 1,000 TPS/user, Nvidia had to reduce throughput by running at batch size 1 or 2, leaving the GPUs at less than 1% utilization. Cerebras, on the other hand, achieved this record-breaking performance without any special kernel optimizations, and it will be available to everyone through Meta's upcoming API service.

For cutting-edge AI applications such as reasoning, voice, and agentic workflows, speed is paramount. These applications gain intelligence by processing more tokens during inference, but that same work can make them slow and force customers to wait. And when customers are forced to wait, they leave for competitors who provide answers faster—a finding Google demonstrated with search more than a decade ago. With its record-breaking performance, Cerebras hardware and the resulting API service are the best choice for developers and enterprise AI users around the world. For more information, please visit
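For a sense of what these per-user speeds mean in practice, here is a minimal back-of-the-envelope sketch in Python. The per-vendor figures are the ones quoted above; the 5,000-token response length is an illustrative assumption, not a number from the benchmark.

```python
# Rough comparison of end-to-end generation time at the per-user speeds
# quoted in the Artificial Analysis results above. The 5,000-token output
# length is an assumed figure for a long reasoning/agentic response.

speeds_tps = {
    "Cerebras": 2522,
    "NVIDIA Blackwell (DGX B200)": 1038,
    "SambaNova": 794,
    "Groq": 549,
    "Amazon": 290,
    "Google": 125,
    "Microsoft Azure": 54,
    "Typical GPU deployment": 100,  # the "as low as 100 TPS" case Feldman cites
}

output_tokens = 5_000  # assumed response length

for vendor, tps in speeds_tps.items():
    seconds = output_tokens / tps
    print(f"{vendor:30s} {tps:5d} t/s -> {seconds:6.1f} s for {output_tokens} tokens")
```

Under these assumptions, the same response takes roughly 50 seconds at 100 t/s but about 2 seconds at 2,522 t/s, which is the latency gap the release is describing.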


CNBC
16-05-2025
- Business
- CNBC
Cerebras CEO says chipmaker's 'aspiration' is to hold IPO in 2025
Cerebras CEO Andrew Feldman said his hope is to take his company public in 2025 now that the chipmaker has obtained clearance from the U.S. government to sell shares to an entity in the United Arab Emirates. "That's our aspiration," Feldman told reporters on Thursday at the company's Supernova conference in San Francisco, after being asked whether an IPO was likely this year.

Cerebras, which makes processors for artificial intelligence workloads, filed to go public in September but hasn't provided an update on the expected size or timing of an offering. In March, the company said it had obtained clearance from a U.S. committee to sell shares to Group 42, a Microsoft-backed AI company based in the UAE. That clearance came from the Committee on Foreign Investment in the United States, or CFIUS, and marked a key step in Cerebras' effort to go public.

Cerebras competes with Nvidia, whose graphics processing units (GPUs) are the industry's choice for training and running AI models. More than 85% of Cerebras' revenue in the first half of 2024 came from Group 42.

The tech IPO market broadly has been in a drought since early 2022, when rising inflation and higher interest rates pushed investors out of risky assets. Cerebras appeared poised to be the first notable pure-play AI IPO after its filing, but then came the delay. CoreWeave, which provides AI infrastructure, debuted in March and has seen its market value jump about 65% since its IPO. The IPO market is showing signs of life, with trading app eToro hitting the Nasdaq this week and digital health provider Hinge Health scheduled to go out next week.

The Middle East is becoming a more critical market for AI development. Nvidia CEO Jensen Huang was in Riyadh, Saudi Arabia, this week along with other tech leaders and President Donald Trump for the Saudi-U.S. Investment Forum. Nvidia said at the event that it will sell more than 18,000 of its latest AI chips to the Saudi company Humain, and Group 42 is also reportedly on tap to purchase 100,000 GPUs a year as part of a bigger agreement between the U.S. and UAE.

Feldman said at the roundtable with reporters that it's "important to be among the big dogs" and, regarding the latest announcements, added, "You've got half the story. I can't share the other half." In addition to Microsoft, Cerebras sells to Meta and IBM. Feldman said last year that the company would have another "hyperscaler" customer within the first half of 2025. "We're close with another," he said on Thursday. "I think they haven't been the quickest to respond."

Earlier in the day, Cerebras announced the ability to run an open-source model from Alibaba on its chips at what it says is a lower price than OpenAI charges for its GPT-4.1 model, and at a higher speed.


Business Wire
15-05-2025
- Business
- Business Wire
Cerebras Launches Qwen3-32B: Real-Time Reasoning with One of the World's Most Powerful Open Models
SUNNYVALE, Calif., May 15, 2025--(BUSINESS WIRE)--Cerebras today announced the launch of Qwen3-32B, one of the most advanced open-weight models in the world, now available on the Cerebras Inference Platform. Developed by Alibaba, Qwen3-32B rivals the performance of leading closed models like GPT-4.1 and DeepSeek R1—and now, for the first time, it runs on Cerebras with real-time responsiveness.

Qwen3-32B on Cerebras performs sophisticated reasoning and returns the answer in just 1.2 seconds—up to 60x faster than comparable reasoning models such as DeepSeek R1 and OpenAI o3. It is the first reasoning model on any hardware to achieve real-time reasoning, making Qwen3-32B on Cerebras the fastest reasoning model API in the world, ready to power production-grade agents, copilots, and automation workloads.

"This is the first time a world-class reasoning model—on par with DeepSeek R1 and OpenAI's o-series—can return answers instantly," said Andrew Feldman, CEO and co-founder of Cerebras. "It's not just fast for a big model. It's fast enough to reshape how real-time AI gets built."

The First Real-Time Reasoning Model

Reasoning models are widely recognized as the most powerful class of large language models—capable of multi-step logic, tool use, and structured decision-making. But until now, they've come with a tradeoff: latency. Inference often takes 30-90 seconds, making them impractical for responsive user experiences. Cerebras eliminates that bottleneck: Qwen3-32B delivers first-token latency in just one second and completes full reasoning chains in real time. It is the only solution on the market today that combines high intelligence with real-time speed—and it's available now.

Transparent, Scalable Pricing

Qwen3-32B is available on Cerebras with simple, production-ready pricing:
- $0.40 per million input tokens
- $0.80 per million output tokens

This is 10x cheaper than GPT-4.1, while offering comparable or better performance. All developers receive 1 million free tokens per day, with no waitlist. Qwen3-32B is fully open-weight and Apache 2.0 licensed, and can be integrated in seconds using standard OpenAI- or Claude-compatible endpoints. Qwen3-32B is live now on

For teams seeking to build fast, intelligent, production-ready AI systems, it is the most powerful open model you can use today.

About Cerebras Systems

Cerebras Systems is a team of pioneering computer architects, computer scientists, deep learning researchers, and engineers of all types. We have come together to accelerate generative AI by building from the ground up a new class of AI supercomputer. Our flagship product, the CS-3 system, is powered by the world's largest and fastest commercially available AI processor, our Wafer-Scale Engine-3. CS-3s are quickly and easily clustered together to make the largest AI supercomputers in the world, and they make placing models on the supercomputers dead simple by avoiding the complexity of distributed computing. Cerebras Inference delivers breakthrough inference speeds, empowering customers to create cutting-edge AI applications. Leading corporations, research institutions, and governments use Cerebras solutions to develop pathbreaking proprietary models and to train open-source models with millions of downloads. Cerebras solutions are available through the Cerebras Cloud and on premises. For further information, visit or follow us on LinkedIn or X.
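As a rough sketch of the integration path the release describes, the snippet below calls an OpenAI-compatible chat-completions endpoint and estimates the request cost at the published per-token rates. The base URL and model identifier are illustrative assumptions, not values confirmed by this announcement; check Cerebras' documentation for the actual ones.

```python
# Minimal sketch of using Qwen3-32B via an OpenAI-compatible endpoint,
# as described in the release. Endpoint URL and model ID are assumptions.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.cerebras.ai/v1",  # assumed endpoint
    api_key="YOUR_CEREBRAS_API_KEY",
)

response = client.chat.completions.create(
    model="qwen-3-32b",  # assumed model identifier
    messages=[{"role": "user", "content": "Briefly explain chain-of-thought reasoning."}],
)
print(response.choices[0].message.content)

# Estimate the cost of this request at the published rates:
# $0.40 per million input tokens, $0.80 per million output tokens.
usage = response.usage
cost = usage.prompt_tokens * 0.40 / 1e6 + usage.completion_tokens * 0.80 / 1e6
print(f"Approximate request cost: ${cost:.6f}")
```

Because the endpoint is OpenAI-compatible, any client or framework that speaks that API should work by swapping the base URL and key, which is what "integrated in seconds" refers to.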