Latest news with #KubeVirt


Techday NZ
19-06-2025
- Business
Mirantis unveils architecture to speed & secure AI deployment
Mirantis has released a comprehensive reference architecture to support IT infrastructure for AI workloads, aiming to assist enterprises in deploying AI systems quickly and securely. The Mirantis AI Factory Reference Architecture is based on the company's k0rdent AI platform and is designed to offer a composable, scalable, and secure environment for artificial intelligence and machine learning (ML) workloads. According to Mirantis, the solution provides criteria for building, operating, and optimising AI and ML infrastructure at scale, and can be operational within days of hardware installation.

The architecture leverages the templated and declarative approach provided by k0rdent AI, which Mirantis claims enables rapid provisioning of required resources. This, the company states, accelerates prototyping, model iteration, and deployment, thereby shortening the overall AI development cycle. The platform features curated integrations, accessible via the k0rdent Catalog, for various AI and ML tools, observability frameworks, continuous integration and delivery, and security, all while adhering to open standards.

Mirantis is positioning the reference architecture as a response to rising demand for the specialised compute resources, such as GPUs and CPUs, that are crucial for executing complex AI models. "We've built and shared the reference architecture to help enterprises and service providers efficiently deploy and manage large-scale multi-tenant sovereign infrastructure solutions for AI and ML workloads," said Shaun O'Meara, chief technology officer, Mirantis. "This is in response to the significant increase in the need for specialized resources (GPU and CPU) to run AI models while providing a good user experience for developers and data scientists who don't want to learn infrastructure."
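To illustrate the templated, declarative style the article describes, the sketch below shows what a cluster deployment driven by a reusable template might look like. The kind, API group, template name, and field names here are illustrative assumptions, not k0rdent's actual schema:

```yaml
# Hypothetical declarative cluster deployment in the spirit of the
# templated approach described above. The API group, kind, and field
# names are illustrative assumptions, not the actual k0rdent schema.
apiVersion: k0rdent.example.com/v1alpha1
kind: ClusterDeployment
metadata:
  name: ai-training-cluster
spec:
  template: gpu-worker-template      # reusable, versioned template
  config:
    workers:
      count: 4
      instanceType: gpu.8xlarge      # GPU-equipped node flavour (assumed name)
    networking:
      rdma: true                     # request high-performance interconnect
```

The point of the pattern is that operators declare the desired cluster shape once, and the platform reconciles hardware, networking, and Kubernetes configuration to match it, rather than provisioning each resource imperatively.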
The architecture addresses several high-performance computing challenges, including Remote Direct Memory Access (RDMA) networking, GPU allocation and slicing, advanced scheduling, performance tuning, and Kubernetes scaling. It also supports integration with multiple AI platform services, such as Gcore Everywhere Inference and the NVIDIA AI Enterprise software ecosystem.

In contrast to typical cloud-native workloads, which are optimised for scale-out, multi-core environments, AI tasks often require the aggregation of multiple GPU servers into a single high-performance computing instance. This shift demands RDMA and ultra-high-performance networking, areas the Mirantis reference architecture is designed to accommodate.

The reference architecture uses Kubernetes and is adaptable to various AI workload types, including training, fine-tuning, and inference, across a range of environments: dedicated or shared servers; virtualised settings using KubeVirt or OpenStack; public cloud; hybrid or multi-cloud configurations; and edge locations. It addresses the specific needs of AI workloads, such as high-performance storage and high-speed networking technologies, including Ethernet, InfiniBand, NVLink, NVSwitch, and CXL, to manage the movement of the large data sets inherent to AI applications.
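As a concrete sketch of the virtualised KubeVirt path mentioned above, the manifest below defines a VM with a passthrough GPU using KubeVirt's standard `VirtualMachine` API. The VM name, sizing, and the resource name under `deviceName` are assumptions; the latter depends on which GPU device plugin is deployed on the cluster:

```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: gpu-training-vm          # illustrative name
spec:
  running: true
  template:
    spec:
      domain:
        cpu:
          cores: 8
        memory:
          guest: 32Gi
        devices:
          gpus:
            - name: gpu0
              # Resource name exposed by the cluster's GPU device
              # plugin -- an assumption, varies per environment.
              deviceName: nvidia.com/GA102GL_A10
          disks:
            - name: rootdisk
              disk:
                bus: virtio
      volumes:
        - name: rootdisk
          containerDisk:
            image: quay.io/containerdisks/ubuntu:22.04
```

Scheduling then treats the GPU like any other extended resource: the VM pod lands only on a node advertising that device, which is how Kubernetes-native GPU allocation carries over to virtualised workloads.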
Mirantis has identified, and aims to resolve, several challenges in AI infrastructure, including:
- Time-intensive fine-tuning and configuration compared to traditional compute systems
- Support for hard multi-tenancy to ensure security, isolation, resource allocation, and contention management
- Maintaining data sovereignty for data-driven AI and ML workloads, particularly where models contain proprietary information
- Ensuring compliance with varied regional and regulatory standards
- Managing distributed, large-scale infrastructure, which is common in edge deployments
- Effective resource sharing, particularly of high-demand compute components such as GPUs
- Enabling accessibility for users such as data scientists and developers who may not have specific IT infrastructure expertise

The composable nature of the Mirantis AI Factory Reference Architecture allows users to assemble infrastructure from reusable templates across compute, storage, GPU, and networking components, which can then be tailored to specific AI use cases. The architecture supports a variety of hardware accelerators, including products from NVIDIA, AMD, and Intel.

Mirantis reports that the AI Factory Reference Architecture was developed to support the operational requirements of enterprises seeking scalable, sovereign AI infrastructure, especially where control over data and regulatory compliance are paramount. The framework is intended as a guideline to streamline the deployment and ongoing management of these environments, offering modularity and integration with open-standard tools and platforms.
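Fair sharing of scarce GPUs across tenants, one of the challenges listed above, is commonly enforced in Kubernetes with per-namespace quotas on extended resources. The namespace and limit below are illustrative, assuming one namespace per tenant:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: gpu-quota
  namespace: team-a              # one namespace per tenant (assumed layout)
spec:
  hard:
    # Caps the total GPUs this tenant's pods may request, so one team
    # cannot starve others of the shared accelerator pool.
    requests.nvidia.com/gpu: "4"
```

Pods in `team-a` that would push the namespace past four requested GPUs are rejected at admission time, giving the operator a hard, auditable allocation boundary rather than best-effort contention.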


TECHx
03-04-2025
- Business
Portworx Enterprise 3.3 Boosts VM Performance on Kubernetes
Portworx has announced Enterprise 3.3, delivering VM performance at scale with unified storage management and seamless integrations. This release positions Portworx as a preferred storage and data platform for Kubernetes workloads. Built for Kubernetes, Enterprise 3.3 improves scalability, automation, and self-service, and helps enterprises address costly virtualization challenges. With its unified platform and multiple integrations, businesses can modernize at their own pace.

Companies choosing Kubernetes can cut costs by 30-50% compared with traditional alternatives. Portworx takes these savings further by eliminating the need for complex workload migrations: enterprises can keep VMs on Kubernetes indefinitely while refactoring applications or building new cloud-native solutions. Managing data storage for VMs and containers is also simpler, as a single workflow replaces multiple tools, while the storage-agnostic approach supports both Pure Storage and non-Pure Storage environments.

'As Broadcom's acquisition of VMware reshapes the market, businesses seek cost-effective virtualization,' said Venkat Ramakrishnan, VP & GM, Portworx, Pure Storage. He noted that 81% of enterprises surveyed in 2024 plan to modernize or migrate VMs to Kubernetes, with most doing so within two years.

Portworx Enterprise 3.3 introduces RWX Block for KubeVirt VMs, boosting high-performance read/write capabilities on FlashArray™ and storage from other vendors. It also offers centralized data management, zero-RPO disaster recovery, and file-level backup for Linux VMs. Additionally, it integrates with KubeVirt platforms from SUSE, Spectro Cloud, and Kubermatic, alongside its existing Red Hat partnership. Steven Dickens, CEO and Principal Analyst at HyperFRAME Research, called the update a game-changer.
'Portworx 3.3 offers seamless VM and container support, delivering 30-50% cost savings without compromising performance or scalability.'

Mike Barrett, VP & GM, Hybrid Cloud Platforms, Red Hat, highlighted the benefits: 'Our collaboration with Portworx improves block volume performance for VMs and enhances business-critical disaster recovery with synchronous replication.'

With Enterprise 3.3, Portworx delivers next-level Kubernetes storage while helping enterprises reduce costs, streamline management, and accelerate cloud-native transformation.
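The RWX Block capability described in the article maps onto standard Kubernetes storage primitives: a raw block volume claimed with shared read/write access across nodes, which is what KubeVirt relies on for VM live migration. The claim name, size, and storage class name below are placeholder assumptions:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: vm-root-disk             # illustrative name
spec:
  accessModes:
    - ReadWriteMany              # RWX: attachable on multiple nodes at once,
                                 # a prerequisite for VM live migration
  volumeMode: Block              # raw block device rather than a filesystem
  storageClassName: portworx-rwx-block   # placeholder; actual class name varies
  resources:
    requests:
      storage: 50Gi
```

A KubeVirt VM referencing this claim as a disk can be live-migrated because both the source and destination nodes can attach the same block volume simultaneously during the handover.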