Alibaba Introduces Qwen3, Setting New Benchmark in Open-Source AI with Hybrid Reasoning - Middle East Business News and Information


Mid East Info – 30-04-2025

April 2025 – Alibaba has launched Qwen3, the latest generation of its open-sourced large language model (LLM) family, setting a new benchmark for AI innovation.
The Qwen3 series features six dense models and two Mixture-of-Experts (MoE) models, offering developers flexibility to build next-generation applications across mobile devices, smart glasses, autonomous vehicles, robotics and beyond.
All Qwen3 models – including dense models (0.6B, 1.7B, 4B, 8B, 14B, and 32B parameters) and MoE models (30B with 3B active, and 235B with 22B active) – are now open sourced and available globally.
Hybrid Reasoning Combining Thinking and Non-thinking Modes
Qwen3 marks Alibaba's debut of hybrid reasoning models, combining traditional LLM capabilities with advanced, dynamic reasoning. Qwen3 models can seamlessly switch between a thinking mode, for complex, multi-step tasks such as mathematics, coding, and logical deduction, and a non-thinking mode for fast, general-purpose responses.
For developers accessing Qwen3 through an API, the model offers granular control over thinking duration (up to 38K tokens), enabling an optimized balance between intelligent performance and compute efficiency. Notably, the Qwen3-235B-A22B MoE model significantly lowers deployment costs compared to other state-of-the-art models, reinforcing Alibaba's commitment to accessible, high-performance AI.
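As an illustration of how the mode switch surfaces to developers: Qwen's published usage describes a "/no_think" soft switch in the prompt (alongside an `enable_thinking` flag in some serving stacks), though exact parameter names vary by deployment. The helper below is a hypothetical sketch, not official documentation:

```python
# Hypothetical sketch of toggling Qwen3's thinking mode per request via
# the "/no_think" soft switch. Flag names and message shape are
# assumptions based on Qwen's published usage, not an official API.

def build_messages(user_prompt: str, thinking: bool) -> list[dict]:
    """Build a chat payload; append the soft switch to disable
    thinking mode for fast, general-purpose responses."""
    suffix = "" if thinking else " /no_think"
    return [{"role": "user", "content": user_prompt + suffix}]

# Thinking mode on for a multi-step math task:
math_request = build_messages("Prove that sqrt(2) is irrational.", thinking=True)

# Thinking mode off for a quick factual lookup:
quick_request = build_messages("What is the capital of France?", thinking=False)
```

In practice the resulting message list would be sent to a Qwen3 endpoint, which decides whether to emit a chain-of-thought block before its answer.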
Breakthroughs in Multilingual Skills, Agent Capabilities, Reasoning and Human Alignment
Trained on a massive dataset of 36 trillion tokens, double that of its predecessor Qwen2.5, Qwen3 delivers significant advancements in reasoning, instruction following, tool use and multilingual tasks.
Key capabilities include:
Multilingual Mastery: Supports 119 languages and dialects, with leading performance in translation and multilingual instruction-following.
Advanced Agent Integration: Natively supports the Model Context Protocol (MCP) and robust function-calling, leading open-source models in complex agent-based tasks.
Superior Reasoning: Surpasses previous Qwen models (QwQ in thinking mode and Qwen2.5 in non-thinking mode) in mathematics, coding, and logical reasoning benchmarks.
Enhanced Human Alignment: Delivers more natural creative writing, role-playing, and multi-turn dialogue experiences for more natural, engaging conversations.
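To make the agent-integration claim concrete: function calling against Qwen3 is typically exercised through an OpenAI-compatible tools schema. The tool name, parameters, and model id below are invented for illustration; this is a sketch of the request shape, not Alibaba's documented API:

```python
# Hedged sketch: an OpenAI-style function-calling request body of the
# kind an OpenAI-compatible Qwen3 endpoint would accept. The tool
# ("get_weather") and model id are hypothetical examples.
import json

get_weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical tool
        "description": "Look up current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name"},
            },
            "required": ["city"],
        },
    },
}

request_body = {
    "model": "Qwen/Qwen3-30B-A3B",  # assumed model id
    "messages": [{"role": "user", "content": "Weather in Dubai?"}],
    "tools": [get_weather_tool],
}
payload = json.dumps(request_body)
```

The model would respond with a structured tool call (name plus JSON arguments) that the caller executes before returning the result for a final answer.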
Qwen3 models achieve top-tier results across industry benchmarks
Thanks to advancements in model architecture, an increase in training data, and more effective training methods, Qwen3 models achieve top-tier results across industry benchmarks such as AIME25 (mathematical reasoning), LiveCodeBench (coding proficiency), BFCL (tool and function-calling capabilities), and Arena-Hard (a benchmark for instruction-tuned LLMs). Additionally, to develop the hybrid reasoning model, a four-stage training process was implemented, comprising long chain-of-thought (CoT) cold start, reasoning-based reinforcement learning (RL), thinking mode fusion, and general RL.
Open Access to Drive Innovation:
Qwen3 models are now freely available for download on Hugging Face, GitHub, and ModelScope, and can be explored on chat.qwen.ai. API access will soon be available through Model Studio, Alibaba's AI model development platform. Qwen3 also powers Quark, Alibaba's flagship AI super assistant application.
Since its debut, the Qwen model family has attracted over 300 million downloads worldwide. Developers have created more than 100,000 Qwen-based derivative models on Hugging Face, making Qwen one of the world's most widely adopted open-source AI model series.
About Alibaba Group:
Alibaba Group's mission is to make it easy to do business anywhere. The company aims to build the future infrastructure of commerce. It envisions that its customers will meet, work and live at Alibaba, and that it will be a good company that lasts for 102 years.



Related Articles

China's DeepSeek releases update to AI model that sent US shares tumbling earlier this year

Egypt Independent – 29-05-2025

Shanghai, Reuters – Chinese artificial intelligence startup DeepSeek released an update to its R1 reasoning model in the early hours of Thursday, stepping up competition with US rivals such as OpenAI. DeepSeek launched R1-0528 on the developer platform Hugging Face, but has yet to make an official public announcement. It did not publish a description of the model or comparisons.

But the LiveCodeBench leaderboard, a benchmark developed by researchers from UC Berkeley, MIT, and Cornell, ranked DeepSeek's updated R1 reasoning model just slightly behind OpenAI's o4 mini and o3 reasoning models on code generation, and ahead of xAI's Grok 3 mini and Alibaba's Qwen 3.

Bloomberg earlier reported the update on Wednesday. It said that a DeepSeek representative had told a WeChat group that the company had completed what it described as a 'minor trial upgrade' and that users could start testing it.

DeepSeek earlier this year upended beliefs that US export controls were holding back China's AI advancements after the startup released AI models that were on a par with or better than industry-leading models in the United States at a fraction of the cost. The launch of R1 in January sent tech shares outside China plummeting and challenged the view that scaling AI requires vast computing power and investment.

Since R1's release, Chinese tech giants like Alibaba and Tencent have released models claiming to surpass DeepSeek's. Google's Gemini has introduced discounted tiers of access, while OpenAI cut prices and released an o3 mini model that relies on less computing power.

The company is still widely expected to release R2, a successor to R1. Reuters reported in March, citing sources, that R2's release was initially planned for May. DeepSeek also released an upgrade to its V3 large language model in March.

Red Hat Optimizes Red Hat AI to Speed Enterprise AI Deployments Across Models, AI Accelerators and Clouds - Middle East Business News and Information

Mid East Info – 22-05-2025

Red Hat AI Inference Server, validated models and integration of Llama Stack and Model Context Protocol help users deliver higher-performing, more consistent AI applications and agents.

Red Hat, the world's leading provider of open source solutions, today continues to deliver customer choice in enterprise AI with the introduction of Red Hat AI Inference Server, Red Hat AI third-party validated models and the integration of Llama Stack and Model Context Protocol (MCP) APIs, along with significant updates across the Red Hat AI portfolio. With these developments, Red Hat intends to further advance the capabilities organizations need to accelerate AI adoption while providing greater customer choice and confidence in generative AI (gen AI) production deployments across the hybrid cloud.

According to Forrester, open source software will be the spark for accelerating enterprise AI efforts. As the AI landscape grows more complex and dynamic, Red Hat AI Inference Server and third-party validated models provide efficient model inference and a tested collection of AI models optimized for performance on the Red Hat AI platform. Coupled with the integration of new APIs for gen AI agent development, including Llama Stack and MCP, Red Hat is working to tackle deployment complexity, empowering IT leaders, data scientists and developers to accelerate AI initiatives with greater control and efficiency.

Efficient inference across the hybrid cloud with Red Hat AI Inference Server: The Red Hat AI portfolio now includes the new Red Hat AI Inference Server, providing faster, more consistent and cost-effective inference at scale across hybrid cloud environments. This key addition is integrated into the latest releases of Red Hat OpenShift AI and Red Hat Enterprise Linux AI, and is also available as a standalone offering, enabling organizations to deploy intelligent applications with greater efficiency, flexibility and performance.
Tested and optimized models with Red Hat AI third-party validated models: Red Hat AI third-party validated models, available on Hugging Face, make it easier for enterprises to find the right models for their specific needs. Red Hat AI offers a collection of validated models, as well as deployment guidance to enhance customer confidence in model performance and outcome reproducibility. Select models are also optimized by Red Hat, leveraging model compression techniques to reduce size and increase inference speed, helping to minimize resource consumption and operating costs. Additionally, the ongoing model validation process helps Red Hat AI customers continue to stay at the forefront of optimized gen AI innovation.

Standardized APIs for AI application and agent development with Llama Stack and MCP: Red Hat AI is integrating Llama Stack, initially developed by Meta, along with Anthropic's MCP, to provide users with standardized APIs for building and deploying AI applications and agents. Currently available in developer preview in Red Hat AI, Llama Stack provides a unified API to access inference with vLLM, retrieval-augmented generation (RAG), model evaluation, guardrails and agents, across any gen AI model. MCP enables models to integrate with external tools by providing a standardized interface for connecting APIs, plugins and data sources in agent workflows.

The latest release of Red Hat OpenShift AI (v2.20) delivers additional enhancements for building, training, deploying and monitoring both gen AI and predictive AI models at scale. These include:
Optimized model catalog (technology preview): provides easy access to validated Red Hat and third-party models, enables the deployment of these models on Red Hat OpenShift AI clusters through the web console interface, and manages the lifecycle of those models leveraging Red Hat OpenShift AI's integrated registry.
Distributed training through the KubeFlow Training Operator: enables the scheduling and execution of InstructLab model tuning and other PyTorch-based training and tuning workloads, distributed across multiple Red Hat OpenShift nodes and GPUs, and includes distributed RDMA networking acceleration and optimized GPU utilization to reduce costs.
Feature store (technology preview): based on the upstream Kubeflow Feast project, provides a centralized repository for managing and serving data for both model training and inference, streamlining data workflows to improve model accuracy and reusability.

Red Hat Enterprise Linux AI 1.5 brings new updates to Red Hat's foundation model platform for developing, testing and running large language models (LLMs). Key features in version 1.5 include:
Google Cloud Marketplace availability: expanding customer choice for running Red Hat Enterprise Linux AI in public cloud environments, along with AWS and Azure, to help simplify the deployment and management of AI workloads on Google Cloud.
Enhanced multi-language capabilities: support for Spanish, German, French and Italian via InstructLab, allowing for model customization using native scripts and unlocking new possibilities for multilingual AI applications. Users can also bring their own teacher models for greater control over model customization and testing for specific use cases and languages, with future support planned for Japanese, Hindi and Korean.

The Red Hat AI InstructLab on IBM Cloud service is also now generally available. This new cloud service further streamlines the model customization process, improving scalability and user experience, empowering enterprises to make use of their unique data with greater ease and control.

Red Hat's vision: Any model, any accelerator, any cloud. The future of AI must be defined by limitless opportunity, not constrained by infrastructure silos.
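As a concrete illustration of the MCP integration described above: MCP is built on JSON-RPC 2.0, so a client invoking a tool on an MCP server sends a "tools/call" request. The tool name and arguments below are hypothetical; real servers advertise their available tools via a "tools/list" call first. A minimal sketch:

```python
# Hedged sketch: serializing a JSON-RPC 2.0 "tools/call" request of the
# kind an MCP client sends to an MCP server. Tool name and arguments
# ("search_docs") are invented for illustration.
import json

def mcp_tool_call(request_id: int, tool: str, arguments: dict) -> str:
    """Serialize a JSON-RPC 2.0 'tools/call' request for an MCP server."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })

req = mcp_tool_call(1, "search_docs", {"query": "inference server tuning"})
```

The server executes the named tool and returns a JSON-RPC response with the result, which the agent framework feeds back to the model.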
Red Hat sees a horizon where organizations can deploy any model, on any accelerator, across any cloud, delivering an exceptional, more consistent user experience without exorbitant costs. To unlock the true potential of gen AI investments, enterprises require a universal inference platform, a standard for more seamless, high-performance AI innovation, both today and in the years to come.

Red Hat Summit: Join the Red Hat Summit keynotes to hear the latest from Red Hat executives, customers and partners:
Modernized infrastructure meets enterprise-ready AI – Tuesday, May 20, 8-10 a.m. EDT (YouTube)
Hybrid cloud evolves to deliver enterprise innovation – Wednesday, May 21, 8-9:30 a.m. EDT (YouTube)

Supporting Quotes:
Joe Fernandes, vice president and general manager, AI Business Unit, Red Hat: 'Faster, more efficient inference is emerging as the newest decision point for gen AI innovation. Red Hat AI, with enhanced inference capabilities through Red Hat AI Inference Server and a new collection of validated third-party models, helps equip organizations to deploy intelligent applications where they need to, how they need to and with the components that best meet their unique needs.'
Michele Rosen, research manager, IDC: 'Organizations are moving beyond initial AI explorations and are focused on practical deployments. The key to their continued success lies in the ability to be adaptable with their AI strategies to fit various environments and needs. The future of AI not only demands powerful models, but models that can be deployed with agility and cost-effectiveness. Enterprises seeking to scale their AI initiatives and deliver business value will find this flexibility absolutely essential.'

About Red Hat: Red Hat is the open hybrid cloud technology leader, delivering a trusted, consistent and comprehensive foundation for transformative IT innovation and AI applications.
Its portfolio of cloud, developer, AI, Linux, automation and application platform technologies enables any application, anywhere, from the datacenter to the edge. As the world's leading provider of enterprise open source software solutions, Red Hat invests in open ecosystems and communities to solve tomorrow's IT challenges. Collaborating with partners and customers, Red Hat helps them build, connect, automate, secure and manage their IT environments, supported by consulting services and award-winning training and certification offerings.

Forward-Looking Statements: Except for the historical information and discussions contained herein, statements contained in this press release may constitute forward-looking statements within the meaning of the Private Securities Litigation Reform Act of 1995. Forward-looking statements are based on the company's current assumptions regarding future business and financial performance. These statements involve a number of risks, uncertainties and other factors that could cause actual results to differ materially. Any forward-looking statement in this press release speaks only as of the date on which it is made. Except as required by law, the company assumes no obligation to update or revise any forward-looking statements.

Alibaba Introduces Open-Source Model for Video Creation and Editing - Middle East Business News and Information

Mid East Info – 15-05-2025

All-in-one AI model, Wan2.1-VACE, designed to transform the video creation industry.

Alibaba has unveiled Wan2.1-VACE (Video All-in-one Creation and Editing), its latest open-source model for video creation and editing. This innovative tool combines multiple video processing functions into a single model to streamline the video creation process, boosting efficiency and productivity. As part of Alibaba's video generation large model series, Wan2.1, VACE is the first open-source model in the industry to provide a unified solution for various video generation and editing tasks.

Wan2.1-VACE supports video generation with multi-modal inputs spanning text, image, and video, while offering creators comprehensive video editing capabilities. These editing features include referencing images or frames, video repainting, modifying selected areas of the video, and spatio-temporal extension, all of which enable the flexible combination of various tasks to enhance creativity.

With this advanced tool, users can generate video containing specific interacting subjects based on image samples and bring static images to life by adding natural movement effects. They can also enjoy advanced video repainting functions such as pose transfer, motion control, depth control, and recolorization. The model also supports adding, modifying, or deleting content in selected areas of a video without affecting the surroundings, and allows for the extension of video boundaries while intelligently filling in content to enrich the visual experience.

As an all-in-one AI model, Wan2.1-VACE delivers unparalleled versatility, enabling users to seamlessly combine multiple functions and unlock innovative potential. Users can turn a static image into video while controlling the movement of objects by specifying the motion trajectory.
They can seamlessly replace characters or objects with specified references, animate referenced characters, control poses, and expand a vertical image horizontally to create a horizontal video while adding new elements through referencing.

Innovative Technologies: Wan2.1-VACE leverages several innovative technologies to take into account the needs of different video editing tasks during construction and design. Its unified interface, called the Video Condition Unit (VCU), supports unified processing of multimodal inputs such as text, images, video, and masks. The model employs a Context Adapter structure that injects various task concepts using formalized representations of temporal and spatial dimensions. This innovative design enables it to flexibly manage a wide range of video synthesis tasks.

Thanks to these advancements in model architecture, Wan2.1-VACE can be widely applied in the rapid production of social media short videos, content creation for advertising and marketing, post-production and special effects processing in film and television, and educational training video generation.

Training video foundation models requires immense computing resources and vast amounts of high-quality training data. Open access helps lower the barrier for more businesses to leverage AI, enabling them to create high-quality visual content tailored to their needs, quickly and cost-effectively. Alibaba is open-sourcing the Wan2.1-VACE model in two versions: a 14-billion-parameter (14B) version and a 1.3-billion-parameter (1.3B) version. The models are available to download for free on Hugging Face and GitHub, as well as Alibaba Cloud's open-source community, ModelScope.

As one of the earliest major global tech companies to open source its self-developed large-scale AI models, Alibaba open sourced four Wan2.1 models in February 2025 and, last month, a video generation model that supports video creation with start and end frames.
To date, the models have attracted over 3.3 million downloads on Hugging Face and ModelScope.
