Alibaba launches Qwen3, open-source AI for global developers


Techday NZ, 05-05-2025
Alibaba has introduced Qwen3, the latest generation of its open-source large language model series.
The Qwen3 series includes six dense models and two Mixture-of-Experts (MoE) models, which aim to offer developers flexibility to build advanced applications across mobile devices, smart glasses, autonomous vehicles, and robotics.
All models in the Qwen3 family—spanning dense models with 0.6 billion to 32 billion parameters and MoE models with 30 billion (3 billion active) and 235 billion (22 billion active) parameters—are now open-sourced and accessible globally.
Qwen3 is Alibaba's first release of hybrid reasoning models. These models blend conventional large language model capabilities with more advanced and dynamic reasoning. Qwen3 can transition between "thinking mode" for complex multi-step tasks such as mathematics, coding, and logical deduction, and "non-thinking mode" for rapid, more general-purpose responses.
For developers using the Qwen3 API, the model provides control over the duration of its "thinking mode," which can extend up to 38,000 tokens. This is intended to enable a tailored balance between intelligence and computational efficiency. The Qwen3-235B-A22B MoE model is designed to lower deployment costs compared to other models in its class.
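As a rough illustration of the kind of control described above, the sketch below builds an OpenAI-compatible chat request that toggles thinking mode and caps the thinking budget at the article's 38,000-token limit. The parameter names `enable_thinking` and `thinking_budget` are assumptions for illustration; the exact fields vary by provider, so check the Qwen3 API documentation you are using.

```python
# Sketch: toggling Qwen3's hybrid reasoning via a chat-completions payload.
# "enable_thinking" and "thinking_budget" are illustrative field names, not
# confirmed API parameters - consult your provider's docs.

def build_request(prompt: str, think: bool, budget: int = 38_000) -> dict:
    """Build a chat request, optionally enabling thinking mode."""
    if not 0 < budget <= 38_000:
        raise ValueError("thinking budget must be within (0, 38000] tokens")
    return {
        "model": "qwen3-235b-a22b",
        "messages": [{"role": "user", "content": prompt}],
        "enable_thinking": think,            # complex multi-step tasks
        "thinking_budget": budget if think else 0,  # cap reasoning tokens
    }

# Complex proof: allow a long chain of thought.
deep = build_request("Prove that sqrt(2) is irrational.", think=True)

# Quick lookup: skip the reasoning phase for a faster reply.
fast = build_request("Capital of France?", think=False)
```

Trading the budget down buys latency and cost savings at the price of reasoning depth, which is the "tailored balance" the API is said to offer.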
Qwen3 has been trained on a dataset comprising 36 trillion tokens, double the size of the dataset used to train its predecessor, Qwen2.5. Alibaba reports that this expanded training has improved reasoning, instruction following, tool use, and multilingual tasks.
Among Qwen3's features is support for 119 languages and dialects. The model is said to deliver high performance in translation and multilingual instruction-following.
Advanced agent integration is supported with native compatibility for the Model Context Protocol (MCP) and robust function-calling capabilities. These features place Qwen3 among open-source models targeting complex agent-based tasks.
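Function calling of this kind typically works by advertising tools to the model as JSON-Schema definitions and then routing the model's emitted calls to local handlers. The sketch below shows that round trip in the common OpenAI-compatible format; the `get_weather` tool is a made-up example, not part of the Qwen3 release.

```python
# Sketch: a JSON-Schema tool definition plus a dispatcher for model-emitted
# tool calls, in the OpenAI-compatible format that function-calling models
# generally consume. The weather tool is hypothetical.
import json

get_weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Look up the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name"},
                "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
            },
            "required": ["city"],
        },
    },
}

def dispatch(tool_call: dict) -> str:
    """Route a model-emitted tool call to a local handler (stub)."""
    if tool_call["name"] == "get_weather":
        args = json.loads(tool_call["arguments"])  # model sends JSON string
        return f"weather({args['city']})"
    raise KeyError(tool_call["name"])

# The model would emit something like this when it decides to use the tool:
result = dispatch({"name": "get_weather", "arguments": '{"city": "Auckland"}'})
```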
Regarding benchmarking, Alibaba states that Qwen3 surpasses previous Qwen models—including QwQ in thinking mode and Qwen2.5 in non-thinking mode—on mathematics, coding, and logical reasoning tests.
The model also aims to provide more natural experiences in creative writing, role-playing, and multi-turn dialogue, supporting more engaging conversations.
Alibaba reports strong performance by Qwen3 models across several benchmarks, including AIME25 for mathematical reasoning, LiveCodeBench for coding proficiency, BFCL for tools and function-calling, and Arena-Hard for instruction-tuned large language models. The development of Qwen3's hybrid reasoning capacity involved a four-stage training process: long chain-of-thought cold start, reasoning-based reinforcement learning, thinking mode fusion, and general reinforcement learning.
Qwen3 models are now freely available on digital platforms including Hugging Face, GitHub, and ModelScope. An API is scheduled for release via Alibaba's Model Studio, the company's development platform for AI models. Qwen3 is also integrated into Alibaba's AI super assistant application, Quark.
The Qwen model family has attracted over 300 million downloads globally. Developers have produced over 100,000 derivative models based on Qwen on Hugging Face, which Alibaba claims ranks the series among the most widely adopted open-source AI models worldwide.

Related Articles

NetSuite AI connector service: Have AI your way

Techday NZ, 4 days ago

AI is creating a world of new possibilities for businesses. We see it every day in how our customers are taking advantage of the AI capabilities embedded across NetSuite. But what if you could bring your own AI and decide how it interacts with your data in NetSuite? Today I'm excited to introduce the new NetSuite AI Connector Service, a protocol-driven integration service supporting Model Context Protocol (MCP). In case you haven't heard of MCP, it is emerging as a critical standard for structured communication between large language model (LLM)-powered agents and other systems. While we call it an integration service, it's more than that - it's a foundational step in making NetSuite the most intelligent, extensible, and AI-ready ERP system.

The NetSuite AI Connector Service gives customers a secure, flexible, and scalable way to connect their own AI to NetSuite. This is important because it:

- Enables developers to define exactly what their AI system can see and do, with full permissions and role-based access
- Supports multiple assistants and agent platforms in a standards-based way, allowing for a "bring your own AI" model
- Turns complex AI-ERP integrations into modular, reusable SuiteApps, streamlining deployment and lifecycle management
- Aligns AI integrations with NetSuite's existing extensibility model, eliminating the need for risky workarounds or shadow IT
- Allows partners and ISVs to build, package, and monetise AI-driven SuiteApps, creating a new category of intelligent ERP extensions

What's more, the NetSuite AI Connector Service will enable NetSuite users to engage with NetSuite data via the user interfaces of popular AI assistants. Here are more details on why we're so excited about the new NetSuite AI Connector:

Sets a new industry standard for AI-ERP integration: While most ERP vendors are adding AI as fixed, embedded features, we are taking a platform-first approach by introducing a protocol-based, extensible architecture. This allows structured, governed, and developer-defined interactions between ERP and external AI systems. It establishes a new benchmark for what enterprise AI integration should look like - secure, flexible, and open - and puts pressure on other vendors to rethink their approach.

Opens ERP to the agent ecosystem: By enabling integration with third-party AI agent platforms, NetSuite becomes one of the first major ERP systems to support agent-based automation across systems and reinforces our position at the forefront of agent-driven enterprise software.

Empowers the SuiteCloud developer and partner ecosystem: Instead of bypassing technical teams with closed AI features, we are putting power into the hands of developers through Custom MCP Tools and the SuiteCloud MCP Server. This creates a new category of intelligent ERP extensions and a new frontier of opportunity for the NetSuite ecosystem.

Provides customers with long-term flexibility and choice: With support for bring-your-own-AI and an extensible, protocol-based design, we are ensuring our customers gain the freedom to select the AI models and platforms that best align with their evolving needs. This approach enables customers to adapt quickly as technologies advance, ensuring continuous innovation on their terms.

For all these reasons and more, this is a big deal as it reflects a fundamental architectural shift. By exposing ERP data, context, and logic to external AI systems through secure, governed interfaces, we are laying the groundwork for true AI-native ERP: systems that not only automate tasks, but also collaborate with AI to reason, take action, and drive business outcomes. We will be sharing more details on the NetSuite AI Connector Service at SuiteWorld, taking place October 6-9 in Las Vegas. If you would like to learn more about how your business can take advantage of it today, please visit NetSuite AI Connector Service.
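MCP itself is a JSON-RPC 2.0 protocol, so a "bring your own AI" assistant talks to an MCP server (such as the one this connector exposes) by sending messages like the one sketched below. The `tools/call` method is part of the public MCP specification; the tool name `lookup_sales_order` and its arguments are hypothetical, not NetSuite's actual schema.

```python
# Sketch: the JSON-RPC 2.0 shape of an MCP "tools/call" request - the kind
# of message an external AI assistant sends to an MCP server. The tool name
# and arguments here are illustrative only.
import json

def mcp_tool_call(request_id: int, tool: str, arguments: dict) -> str:
    """Serialise an MCP tools/call request as a JSON-RPC 2.0 message."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })

msg = mcp_tool_call(1, "lookup_sales_order", {"order_id": "SO-1042"})
decoded = json.loads(msg)
```

Because every interaction funnels through tool definitions like this, the server side can enforce exactly which records and actions each assistant is permitted to touch, which is what makes the governed, role-based model described above possible.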

MongoDB boosts AI app reliability with new models & partners

Techday NZ, 5 days ago

MongoDB has announced a series of product enhancements and AI partner ecosystem expansions aimed at enabling customers to build reliable AI applications at scale, following its acquisition of Voyage AI earlier this year. The updates allow customers to integrate Voyage AI's latest embedding and reranking models with MongoDB's database infrastructure. These models are designed to introduce context awareness and set new accuracy benchmarks at what the company says are favourable price-performance ratios.

Andrew Davidson, Senior Vice President of Products at MongoDB, said, "Databases are more central than ever to the technology stack in the age of AI. Modern AI applications require a database that combines advanced capabilities - like integrated vector search and best-in-class AI models - to unlock meaningful insights from all forms of data (structured, unstructured), all while streamlining the stack. These systems also demand scalability, security, and flexibility to support production applications as they evolve and as usage grows. By consolidating the AI data stack and by building a cutting-edge AI ecosystem, we're giving developers the tools they need to build and deploy trustworthy, innovative AI solutions faster than ever before."

According to the company, approximately 8,000 startups - including Laurel and Mercor - have chosen MongoDB as the foundation for their AI projects in the past 18 months. Additionally, more than 200,000 new developers register for MongoDB Atlas each month, highlighting significant adoption across the developer community.

Product highlights

The newly released Voyage AI models include voyage-context-3, which enables context-aware embeddings for improved data retrieval, and general-purpose models such as voyage-3.5 and voyage-3.5-lite, which focus on delivering higher retrieval quality and price-performance. The rerank-2.5 and rerank-2.5-lite models offer instruction-following reranking to enhance results accuracy across benchmarks.

Fred Roma, Senior Vice President of Engineering at MongoDB, commented, "Many organisations struggle to scale AI because the models themselves aren't up to the task. They lack the accuracy needed to delight customers, are often complex to fine-tune and integrate, and become too expensive at scale. The quality of your embedding and reranking models is often the difference between a promising prototype and an AI application that delivers meaningful results in production. That's why we've focused on building models that perform better, cost less, and are easier to use - so developers can bring their AI applications into the real world and scale adoption."

MongoDB has also introduced the Model Context Protocol (MCP) Server, now in public preview. This server is designed to standardise the connection between MongoDB deployments and widely used development tools, including GitHub Copilot, Anthropic's Claude, Cursor, and Windsurf. The aim is to provide developers with the ability to use natural language for managing database operations, thereby accelerating workflow, productivity, and deployment timelines.

AI partner ecosystem

As part of the expanded ecosystem, Galileo, an AI reliability and observability platform, and Temporal, an open-source Durable Execution platform, have joined MongoDB's partner network.

Vikram Chatterji, CEO and co-founder at Galileo, stated, "As organisations bring AI applications and agents into production, accuracy and reliability are of paramount importance. By formally joining MongoDB's AI ecosystem, MongoDB and Galileo will now be able to better enable customers to deploy trustworthy AI applications that transform their businesses with less friction."

Maxim Fateev, CTO at Temporal, said, "Building production-ready agentic AI means enabling systems to survive real-world reliability and scale challenges, consistently and without fail. Through our partnership with MongoDB, Temporal empowers developers to orchestrate durable, horizontally scalable AI systems with confidence, ensuring engineering teams build applications their customers can count on."

MongoDB's partnership with LangChain is focused on streamlining AI workflows, introducing features like GraphRAG for greater transparency in data retrieval processes and natural language querying to allow agentic applications direct data interaction. These developments are designed to equip developers to build advanced retrieval-augmented generation (RAG) systems and autonomous agents capable of interacting with MongoDB data.

Harrison Chase, CEO and Co-founder at LangChain, said, "As AI agents take on increasingly complex tasks, access to diverse, relevant data becomes essential. Our integrations with MongoDB, including capabilities like GraphRAG and natural language querying, equip developers with the tools they need to build and deploy complex, future-proofed agentic AI applications grounded in relevant, trustworthy data."

Industry analysts have noted the increasing importance of integrated data solutions in AI development. Jason Andersen, Vice President and Principal Analyst at Moor Insights and Strategy, commented, "As more enterprises deploy and scale AI applications and agents, the demand for accurate outputs and reduced latency keeps increasing. By thoughtfully unifying the AI data stack with integrated advanced vector search and embedding capabilities in their core database platform, MongoDB is taking on these challenges while also reducing complexity for developers."

These new models and expanded partnerships are positioned to address the issues of complexity, accuracy and scalability that many organisations face when implementing AI solutions.
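The embedding and reranking models described above implement a common two-stage retrieval pattern: a cheap embedding pass narrows the candidate set, then a stronger reranker reorders the survivors. The sketch below shows that pattern with toy word-overlap scorers standing in for voyage-3.5 and rerank-2.5; real use would call the Voyage AI API instead.

```python
# Sketch of retrieve-then-rerank. The toy scorers below stand in for
# voyage-3.5 (embedding) and rerank-2.5 (reranking); they illustrate the
# pipeline shape, not the models' actual quality.
import math

def embed(text: str) -> dict:
    """Toy bag-of-words 'embedding' (placeholder for a real model)."""
    vec: dict = {}
    for word in text.lower().split():
        vec[word] = vec.get(word, 0) + 1
    return vec

def cosine(a: dict, b: dict) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[w] * b.get(w, 0) for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def rerank(query: str, docs: list) -> list:
    """Toy reranker: exact-phrase match outranks pure embedding score."""
    return sorted(
        docs,
        key=lambda d: (query.lower() in d.lower(), cosine(embed(query), embed(d))),
        reverse=True,
    )

docs = [
    "MongoDB Atlas supports integrated vector search",
    "Voyage AI models improve retrieval accuracy",
    "Unrelated note about release schedules",
]
query = "vector search"

# Stage 1: embedding retrieval keeps the top-2 candidates.
candidates = sorted(docs, key=lambda d: cosine(embed(query), embed(d)), reverse=True)[:2]
# Stage 2: the reranker picks the best of the shortlist.
best = rerank(query, candidates)[0]
```

The design point is cost: embeddings are computed once per document and compared cheaply, while the more expensive reranker only ever sees the shortlist.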

Teleport launches Secure MCP to protect AI enterprise workflows

Techday NZ, 07-08-2025

Teleport has announced the general availability of its Secure Model Context Protocol (MCP) for use on the Teleport Infrastructure Identity Platform. The Secure MCP solution seeks to address new security challenges emerging from the rapid adoption of artificial intelligence across enterprises. Recent data from Enterprise Strategy Group indicate that 44% of enterprises have now deployed AI within their organisations. Teleport's Secure MCP is designed to provide security guardrails for AI systems as they interact with databases, MCP servers, and other forms of enterprise data.

The Model Context Protocol is an open standard that enables AI models to connect with various tools, databases, or applications using a simplified, universal interface. This is intended to streamline integration in a manner akin to technology standards such as USB-C for physical devices. Despite these integration benefits, MCP was not originally designed with access control in mind, which presents risks around unrestricted data access for AI models. Consequently, there is a need for mechanisms that can provide controlled, audited, and secure access to sensitive data.

Teleport's Secure MCP responds to these needs by employing its Infrastructure Identity Platform, which extends existing trust frameworks to AI-based workflows. The platform enforces both Role-Based and Attribute-Based Access Controls (RBAC and ABAC) to manage the resources that large language models (LLMs) can access. Every session involving AI data access is logged, thereby contributing to regulatory compliance and audit readiness.

Ev Kontsevoy, Chief Executive Officer of Teleport, commented on the development: "AI is terraforming how software is deployed in organizations. It shouldn't require a major public security incident to motivate business leaders to prepare for this impending challenge. Applying the same access control guardrails for AI, humans, and non-human identities accelerates AI adoption while locking in the protection needed to prevent unauthorized access of data. That's why we launched our secure MCP solution for Teleport, to enable enterprises to confidently unlock AI's innovation without falling prey to its security vulnerabilities and loopholes."

Industry analysts have noted a concurrent rise in deployments of AI agents that operate within core enterprise systems, increasing the urgency for businesses to address identity and data security concerns. Todd Thiemann, Principal Analyst for Identity Security & Data Security at Enterprise Strategy Group, highlighted the pressing nature of these issues: "A wave of AI agent deployments that touch on core enterprise systems is in process, and identity teams need to be prepared. Recent Enterprise Strategy Group research showed that data privacy and security for AI agents were major concerns for enterprise security teams. Teleport's Secure MCP solution lays the groundwork for secure agent deployment and enables identity teams to get ahead of the game in securing their AI agent deployments."

Secure MCP delivers several key architectural components for AI and MCP deployments. These include Zero Trust Networking, allowing only authenticated clients to interact with MCP servers over encrypted connections. A live MCP server inventory feature allows administrators to discover and register MCP tools across hybrid infrastructure environments automatically. Strict access control ensures that language models are only able to access resources for which they are specifically authorised, while the principle of least privilege means that authorisations are granted on a just-in-time basis for defined tasks. This minimises the potential risk of overprivileged or persistent access by AI models.

Additionally, comprehensive audit trails provide a record of every attempt - successful or denied - by LLMs to access data. The extension of these security controls to MCP allows engineering teams to develop technology that incorporates AI without opening new avenues for unauthorised access to company data. By supporting both machine and user-driven LLM workflows, Teleport states its platform is positioned to accommodate a range of AI integration scenarios while maintaining a strong security posture.
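The core pattern described here - role-based authorisation on every LLM resource request, with each attempt written to an audit trail whether it succeeds or not - can be sketched in a few lines. The role and resource names below are illustrative, not Teleport's actual schema.

```python
# Sketch of RBAC-gated, fully audited access for LLM tool requests.
# Role grants and resource names are hypothetical examples.

ROLE_GRANTS = {
    "support-agent": {"crm.read"},
    "finance-agent": {"crm.read", "ledger.read"},
}

audit_log: list = []

def authorize(role: str, resource: str) -> bool:
    """Allow only granted resources; log every attempt, allowed or denied."""
    allowed = resource in ROLE_GRANTS.get(role, set())
    audit_log.append({"role": role, "resource": resource, "allowed": allowed})
    return allowed

ok = authorize("support-agent", "crm.read")         # within the role's grant
denied = authorize("support-agent", "ledger.read")  # outside least privilege
```

The point of logging denials as well as grants is that the audit trail then records probing behaviour by an over-eager agent, not just its successful reads.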
