Latest news with #Qwen3-32B
Yahoo
15-05-2025
- Business
- Yahoo
Cerebras Launches Qwen3-32B: Real-Time Reasoning with One of the World's Most Powerful Open Models
SUNNYVALE, Calif., May 15, 2025--(BUSINESS WIRE)--Cerebras today announced the launch of Qwen3-32B, one of the most advanced open-weight models in the world, now available on the Cerebras Inference Platform. Developed by Alibaba, Qwen3-32B rivals the performance of leading closed models like GPT-4.1 and DeepSeek R1, and now, for the first time, it runs on Cerebras with real-time responsiveness.

Qwen3-32B on Cerebras performs sophisticated reasoning and returns the answer in just 1.2 seconds, up to 60x faster than comparable reasoning models such as DeepSeek R1 and OpenAI o3. This is the first reasoning model on any hardware to achieve real-time reasoning. Qwen3-32B on Cerebras is the fastest reasoning model API in the world, ready to power production-grade agents, copilots, and automation workloads.

"This is the first time a world-class reasoning model, on par with DeepSeek R1 and OpenAI's o-series, can return answers instantly," said Andrew Feldman, CEO and co-founder of Cerebras. "It's not just fast for a big model. It's fast enough to reshape how real-time AI gets built."

The First Real-Time Reasoning Model

Reasoning models are widely recognized as the most powerful class of large language models, capable of multi-step logic, tool use, and structured decision-making. But until now, they've come with a tradeoff: latency. Inference often takes 30-90 seconds, making them impractical for responsive user experiences.

Cerebras eliminates that bottleneck. Qwen3-32B delivers first-token latency in just one second and completes full reasoning chains in real time. This is the only solution on the market today that combines high intelligence with real-time speed, and it's available now.

Transparent, Scalable Pricing

Qwen3-32B is available on Cerebras with simple, production-ready pricing:
- $0.40 per million input tokens
- $0.80 per million output tokens

This is 10x cheaper than GPT-4.1, while offering comparable or better performance.
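At these per-million-token rates, per-request cost is simple arithmetic. A minimal sketch (the request sizes in the example are illustrative, not from the release):

```python
def inference_cost_usd(input_tokens: int, output_tokens: int,
                       input_rate: float = 0.40, output_rate: float = 0.80) -> float:
    """Estimate request cost from the published per-million-token rates."""
    return (input_tokens / 1_000_000) * input_rate \
         + (output_tokens / 1_000_000) * output_rate

# Example: a request with 2,000 input tokens and 500 output tokens
cost = inference_cost_usd(2_000, 500)
print(f"${cost:.4f}")  # prints $0.0012
```

At these prices, a million input tokens plus a million output tokens together come to $1.20.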
All developers receive 1 million free tokens per day, with no waitlist. Qwen3-32B is fully open-weight and Apache 2.0 licensed, and can be integrated in seconds using standard OpenAI- or Claude-compatible endpoints. Qwen3-32B is live now on the Cerebras Inference Platform. For teams seeking to build fast, intelligent, production-ready AI systems, it is the most powerful open model you can use today.

About Cerebras Systems

Cerebras Systems is a team of pioneering computer architects, computer scientists, deep learning researchers, and engineers of all types. We have come together to accelerate generative AI by building from the ground up a new class of AI supercomputer. Our flagship product, the CS-3 system, is powered by the world's largest and fastest commercially available AI processor, our Wafer-Scale Engine-3. CS-3s are quickly and easily clustered together to make the largest AI supercomputers in the world, and make placing models on the supercomputers dead simple by avoiding the complexity of distributed computing. Cerebras Inference delivers breakthrough inference speeds, empowering customers to create cutting-edge AI applications. Leading corporations, research institutions, and governments use Cerebras solutions for the development of pathbreaking proprietary models, and to train open-source models with millions of downloads. Cerebras solutions are available through the Cerebras Cloud and on premise. For further information, follow Cerebras on LinkedIn or X.

Press Contact: PR@
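OpenAI-compatible endpoints share a common chat-completions request shape, which is what makes "integrated in seconds" plausible. The sketch below assembles such a request body; the base URL and the exact model identifier are placeholders and assumptions, not taken from the announcement, so check the provider's documentation for the real values.

```python
import json

# Placeholder, not a real endpoint: substitute the provider's documented base URL.
API_BASE = "https://api.example-inference.com/v1"

def build_chat_request(prompt: str, model: str = "qwen-3-32b") -> dict:
    """Assemble the JSON body for a POST to {API_BASE}/chat/completions.

    The model identifier "qwen-3-32b" is an assumption for illustration.
    """
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

body = build_chat_request("Explain wafer-scale inference in one sentence.")
print(json.dumps(body, indent=2))
```

In practice you would send this body with any HTTP client (or point an existing OpenAI SDK client at the provider's base URL) and read the completion from the response's `choices` field.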


Yahoo
28-04-2025
- Business
- Yahoo
Alibaba unveils Qwen 3, a family of 'hybrid' AI reasoning models
Chinese tech company Alibaba on Monday released Qwen 3, a family of AI models the company claims matches and in some cases outperforms the best models available from Google and OpenAI.

Most of the models are, or soon will be, available for download under an "open" license from the AI dev platforms Hugging Face and GitHub. They range in size from 0.6 billion parameters to 235 billion parameters. Parameters roughly correspond to a model's problem-solving skills, and models with more parameters generally perform better than those with fewer parameters.

The rise of China-originated model series like Qwen has increased the pressure on American labs such as OpenAI to deliver more capable AI technologies. It has also led policymakers to implement restrictions aimed at limiting the ability of Chinese AI companies to obtain the chips necessary to train models.

According to Alibaba, Qwen 3 models are "hybrid" models in the sense that they can take time to "reason" through complex problems or answer simpler requests quickly. Reasoning enables the models to effectively fact-check themselves, similar to models like OpenAI's o3, but at the cost of higher latency.

"We have seamlessly integrated thinking and non-thinking modes, offering users the flexibility to control the thinking budget," wrote the Qwen team in a blog post.

The Qwen 3 models support 119 languages, Alibaba says, and were trained on a data set of nearly 36 trillion tokens. Tokens are the raw bits of data that a model processes; 1 million tokens is equivalent to about 750,000 words. Alibaba says Qwen 3 was trained on a combination of textbooks, "question-answer pairs," code snippets, and more.

These improvements, along with others, greatly boosted Qwen 3's performance compared to its predecessor, Qwen 2, says Alibaba. On Codeforces, a platform for programming contests, the largest Qwen 3 model, Qwen-3-235B-A22B, beats out OpenAI's o3-mini. Qwen-3-235B-A22B also bests o3-mini on the latest version of AIME, a challenging math benchmark, and BFCL, a test for assessing a model's ability to "reason" about problems.

But Qwen-3-235B-A22B isn't publicly available, at least not yet. The largest public Qwen 3 model, Qwen3-32B, is still competitive with a number of proprietary and open AI models, including Chinese AI lab DeepSeek's R1. Qwen3-32B surpasses OpenAI's o1 model on several tests, including an accuracy benchmark called LiveBench.

Alibaba says Qwen 3 "excels" in tool-calling capabilities as well as in following instructions and copying specific data formats. In addition to being released for download, Qwen 3 is available through cloud providers including Fireworks AI and Hyperbolic.

Tuhin Srivastava, co-founder and CEO of AI cloud host Baseten, said that Qwen 3 is another point in the trend line of open models keeping pace with closed-source systems such as OpenAI's.

"The U.S. is doubling down on restricting sales of chips to China and purchases from China, but models like Qwen 3 that are state-of-the-art and open [...] will undoubtedly be used domestically," he told TechCrunch in a statement. "It reflects the reality that businesses are both building their own tools [as well as] buying off the shelf via closed-model companies like Anthropic and OpenAI."

This article originally appeared on TechCrunch.
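The "hybrid" thinking control described above is exposed per turn through soft-switch tags in the prompt: the Qwen team's documentation describes `/think` and `/no_think` markers. A minimal helper, treating the exact tag names as an assumption for any given deployment:

```python
def with_thinking_mode(prompt: str, think: bool) -> str:
    """Append Qwen 3's soft-switch tag to a user prompt.

    '/think' requests the slower multi-step reasoning mode; '/no_think'
    requests a quick direct answer. Verify the tags against your
    deployment's documentation before relying on them.
    """
    return f"{prompt} {'/think' if think else '/no_think'}"

print(with_thinking_mode("How many primes are below 100?", think=True))
# prints: How many primes are below 100? /think
```

The tagged prompt is then sent as an ordinary user message; the switch affects only that turn, which is what lets one deployment serve both latency-sensitive and reasoning-heavy requests.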