Portworx & Red Hat launch solution to simplify VMs on Kubernetes

Techday NZ, 20 May 2025

Portworx by Pure Storage and Red Hat have launched Portworx for KubeVirt, a software-defined storage solution designed to facilitate the deployment and management of virtual machine (VM) workloads on Kubernetes using the Red Hat OpenShift Virtualization Engine.
Portworx for KubeVirt is intended to offer enterprises a more cost-effective and simplified approach to modernising their application infrastructure by enabling the migration and operation of applications within Kubernetes environments at their own pace.
The combination of Portworx and the Red Hat OpenShift Virtualization Engine aims to optimise functionality for enterprises while reducing total cost of ownership. According to data from Portworx, customers using Red Hat OpenShift and the OpenShift Virtualization platform have achieved estimated cost savings of approximately 30% to 50% over the past year compared to their previous virtualization expenditure.
The new offering gives organisations the flexibility to continue running applications in VMs on Kubernetes, allowing them to refactor or develop new cloud-native applications when it suits their resources and business timelines. This reduces the need for abrupt migrations and gives businesses the ability to modernise incrementally.
Portworx for KubeVirt with Red Hat OpenShift Virtualization Engine allows customers to plan VM migrations to Kubernetes infrastructure across various deployment environments. Migration can take place on-premises, in the public cloud, at the edge, or in hybrid arrangements wherever Red Hat OpenShift is supported.
The Red Hat OpenShift Virtualization Engine is focused solely on VM workloads and is designed to offer the core virtualization capabilities of Red Hat OpenShift for simplified VM deployment, management, and scaling. Integrating Portworx aims to add enterprise data management features, simplifying VM workload migration and management on a unified platform.
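As an illustration of how storage is typically provisioned for VM workloads on such a platform, the sketch below uses the open source Kubernetes Python client to request a persistent volume claim from a Portworx-backed storage class that a KubeVirt virtual machine could attach as a disk. The storage class name, namespace, claim name, and size are illustrative assumptions, not values documented by either vendor.

    # Minimal sketch, not an official Portworx or Red Hat example.
    from kubernetes import client, config

    config.load_kube_config()  # assumes a local kubeconfig; use load_incluster_config() inside a cluster
    core = client.CoreV1Api()

    # Request a VM root disk from an assumed Portworx-backed StorageClass
    # named "px-vm-storage"; claim name, namespace, and size are placeholders.
    pvc = client.V1PersistentVolumeClaim(
        metadata=client.V1ObjectMeta(name="fedora-vm-rootdisk"),
        spec=client.V1PersistentVolumeClaimSpec(
            access_modes=["ReadWriteMany"],      # shared access is what typically permits VM live migration
            storage_class_name="px-vm-storage",  # assumption: a StorageClass created by the cluster administrator
            resources=client.V1ResourceRequirements(requests={"storage": "30Gi"}),
        ),
    )

    core.create_namespaced_persistent_volume_claim(namespace="vms", body=pvc)

    # A KubeVirt VirtualMachine can then reference "fedora-vm-rootdisk" as a
    # persistentVolumeClaim volume in its spec, with the storage layer handling
    # replication and snapshots for the underlying data.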
Ajay Singh, Chief Product Officer at Pure Storage, said: "Red Hat is the perfect partner for our mission as more companies are looking to go cloud-native. Many companies are leveraging Portworx with OpenShift to not only migrate and manage their workloads, but also power innovative application development. We are proud of the success from this Red Hat partnership and look forward to what the next year holds."
Ashesh Badani, Senior Vice President and Chief Product Officer at Red Hat, commented: "While organizations are increasingly migrating to cloud-native containerized workloads, many still rely heavily on virtualization solutions to support their public and private cloud environments. With Red Hat OpenShift Virtualization Engine, we are able to meet organizations where they are, to modernize at a pace that best suits their business requirements, with a more cost-effective, streamlined solution. We are pleased to collaborate with Portworx to bring their expertise and data management capabilities to Red Hat OpenShift Virtualization Engine as an optimized offering, making it even easier for organizations to deploy and manage virtualized workloads."
The announcement reflects the ongoing trend of enterprises seeking ways to modernise their infrastructure without disrupting current operations, allowing use of existing virtual machines as well as adoption of cloud-native technologies.

Related Articles

VDURA unveils HyperScaleFlow v11.2 to cut AI storage costs

Techday NZ, a day ago

VDURA has introduced Version 11.2 of its Data Platform, HyperScaleFlow, which brings enhancements to support artificial intelligence and high-performance computing workloads. The latest release delivers native Kubernetes Container Storage Interface (CSI) support, comprehensive end-to-end encryption, and the launch of VDURACare Premier, a support package that combines hardware, software, and maintenance under a single contract. A preview of the V-ScaleFlow capability is also included, which manages data movement between high-performance QLC flash and high-capacity hard drives to improve efficiency for AI-scale operations and reduce costs.
According to VDURA, native CSI support eases multi-tenant Kubernetes deployments by enabling persistent-volume provisioning and management without scripting. The new end-to-end encryption feature provides security for data from transfer through to storage, including tenant-specific encryption per volume. VDURACare Premier offers comprehensive support through a single contract, covering hardware, software, and services such as a ten-year no-cost replacement policy for drives and 24-hour expert assistance.
The V-ScaleFlow technology, currently in preview within the software, introduces an optimised data management layer. It dynamically orchestrates placement and movement of data between QLC SSDs, such as the Pascari 128TB, and high-density hard drives exceeding 30TB each. This approach aims to reduce flash capacity requirements by more than 50 percent and cut power consumption, which the company says delivers significant cost savings for organisations building AI data pipelines.
The V-ScaleFlow system tackles industry challenges associated with write-intensive AI checkpoints and long-term data storage by using V-Burst to absorb demand spikes and write data sequentially to large NVMe drives, halving the amount of flash needed. For long-tail datasets and historic artefacts, the system moves data to high-capacity hard drives, which is intended to reduce both operational expenses and energy usage per petabyte stored.
The VDURACare Premier bundle addresses the complexity seen in contracts that separate hardware, software, and maintenance through a combined package with risk-free coverage across a decade. Benefits highlighted for Version 11.2 and V-ScaleFlow include seamless data movement between flash and disks, optimised storage economics that can lower total cost of ownership by up to 60 percent, sub-millisecond latency for NVMe-class performance, and streamlined Kubernetes deployment for stateful AI workloads.
Ken Claffey, Chief Executive Officer of VDURA, said: "V11.2 delivers the speed, cloud-native simplicity, and security our customers expect - while V-ScaleFlow applies hyperscaler design principles, leveraging the same commodity SSDs and HDDs to enable efficient scaling and breakthrough economics."
VDURA stated that Data Platform V11.2 will become generally available on new V5000 systems during the third quarter of 2025. The full release of V-ScaleFlow is anticipated in the fourth quarter of 2025. Current V5000 users will have access to upgrade to Version 11.2 through an online update process. VDURA is presenting the capabilities of its data platform at ISC 2025, alongside partners such as Phison Pascari, Seagate Mozaic, Starfish, and Cornelis Networks.
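To make the tiering idea behind V-ScaleFlow more concrete, the following minimal Python sketch shows a generic flash-to-disk tiering policy: new writes land on a flash tier, and objects that go cold are demoted to a high-capacity disk tier. It is a conceptual illustration with an assumed demotion threshold, not VDURA's implementation.

    # Conceptual sketch only; this is not VDURA code.
    import time
    from dataclasses import dataclass, field

    COLD_AFTER_SECONDS = 24 * 3600  # assumed demotion threshold, for illustration only

    @dataclass
    class TieredStore:
        flash: dict = field(default_factory=dict)  # name -> (data, last_access)
        hdd: dict = field(default_factory=dict)    # name -> data

        def write(self, name: str, data: bytes) -> None:
            # New and recently updated data always lands on the flash tier first.
            self.flash[name] = (data, time.time())

        def read(self, name: str) -> bytes:
            if name in self.flash:
                data, _ = self.flash[name]
                self.flash[name] = (data, time.time())  # refresh the access time
                return data
            return self.hdd[name]

        def demote_cold(self) -> None:
            # Move objects that have not been touched recently to the disk tier,
            # freeing flash capacity for the next burst of writes.
            now = time.time()
            cold = [n for n, (_, ts) in self.flash.items() if now - ts > COLD_AFTER_SECONDS]
            for name in cold:
                data, _ = self.flash.pop(name)
                self.hdd[name] = data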
The company is hosting the VDURA AI Data Challenge, an event featuring strongman Hafþór Björnsson, which will allow attendees to engage with interactive data tasks and evaluate GPU-optimised data performance.
Commenting on the technology, Michael Wu, President and General Manager of Phison U.S., said: "Phison has collaborated closely with VDURA to validate V-ScaleFlow technology, enabling seamless integration of our highest-capacity QLC Pascari enterprise SSDs in the VDURA Data Platform. V-Burst optimises write-intensive AI workloads, delivering exceptional performance and endurance while driving down costs - a game-changer for HPC and AI environments."
Trey Layton, Vice President of Software and Product Management for Advanced Computing at Penguin Solutions, added: "Penguin Solutions is excited to see VDURA's V11.2 release and breakthrough features that include V-ScaleFlow, native CSI support, and end-to-end encryption that advance the operational goals of our enterprise, federal, and cloud customers. These enhancements simplify persistent storage orchestration across Kubernetes environments, ensure robust security without performance tradeoffs, and unlock compelling TCO improvements for organisations scaling AI and HPC workloads. VDURA continues to deliver a platform purpose-built for the future of real-time, inference-driven infrastructure."

DuploCloud launches AI DevOps Help Desk to boost automation

Techday NZ, a day ago

DuploCloud has launched the DuploCloud AI DevOps Help Desk, described as the industry's first Agentic Help Desk where specialised DevOps agents address user requests in real time. The new platform allows DevOps engineers and IT administrators to shift their roles from writing automation scripts to creating AI agents designed to tackle a wide range of end-user needs.
DuploCloud states that as artificial intelligence continues to drive rapid development and scalability within technology teams, the function of DevOps becomes critical yet increasingly challenging. Many current automation tools, the company notes, still rely heavily upon subject matter expertise, and the process of hiring skilled DevOps professionals is both difficult and costly.
According to the company, its platform has already provided support to a large number of high-growth businesses, including start-ups and enterprises, through an automation platform that covers the breadth of DevOps. This includes infrastructure-as-code (IaC), Kubernetes management, cloud services, observability, security, and compliance. DuploCloud highlights that frequent collaboration with clients and management of thousands of environments naturally led to integrating AI into its operations. The resultant Agentic Help Desk, now part of DuploCloud's core platform, is aimed at enabling customers to scale more quickly, automate more processes, and free up time for other priorities.
Venkat Thiruvengadam, Founder and Chief Executive Officer of DuploCloud, commented on the complexity of operating cloud infrastructure and the slow pace of automation adaptation. "Building and operating cloud infrastructure continues to grow in complexity. The pace of DevOps automation constantly lags behind the ever-changing engineering and security needs of cloud infrastructure. Meaningful developer self-service remains elusive," he said. "DuploCloud's AI DevOps Help Desk represents a strategic leap forward in how DevOps is executed. Achieving unprecedented speed, efficiency, and reliability, fundamentally reshaping cloud operations."
The traditional IT Help Desk is generally structured as a manual, asynchronous model, limited by human resources. DuploCloud's Agentic DevOps Help Desk aims to replace this with a real-time, agent-driven system where user requests are routed directly to the relevant AI agent. Through the system, users are able to state their requirements in plain language. The designated agent then responds with appropriate context, executes actions within secure permissions, and has the capacity to escalate tasks or collaborate with other agents if needed. The process also integrates human-in-the-loop elements, such as approval workflows, audit trails, screen share capabilities, and real-time user input, all embedded to provide user control without impacting efficiency.
The platform is reported to include an Automation Studio, centralised around an MCP server, which provides tools compatible with various infrastructure environments including Kubernetes, public cloud providers, OpenTelemetry, and continuous integration/continuous delivery (CI/CD) systems.
Feedback from early adopters indicates that the AI DevOps Help Desk is being used to update infrastructure practices via containerisation and Kubernetes, address performance concerns, and execute cost optimisation procedures. Teams have reported the ability to automate up to 80% of routine DevSecOps tasks using custom agents calibrated to their specific workflows.
The automation platform has reduced the time required for new application onboarding from weeks to minutes, and has halved the time to achieve and maintain industry compliance standards such as SOC2, HIPAA, and PCI. The Agentic Help Desk is currently accessible to customers through an early access programme.
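The request-routing model described above can be pictured with a small, purely conceptual Python sketch: plain-language requests are matched to registered agents, and anything unmatched is escalated to a human. The agent names, keywords, and fallback behaviour are assumptions for illustration and do not reflect DuploCloud's API.

    # Conceptual sketch only; this is not DuploCloud code.
    from typing import Callable

    AGENTS: dict[str, Callable[[str], str]] = {}

    def agent(keyword: str):
        """Register a handler for requests that mention the given keyword."""
        def register(fn: Callable[[str], str]) -> Callable[[str], str]:
            AGENTS[keyword] = fn
            return fn
        return register

    @agent("deploy")
    def deployment_agent(request: str) -> str:
        return f"Deployment agent handling: {request!r}"

    @agent("cost")
    def cost_agent(request: str) -> str:
        return f"Cost-optimisation agent handling: {request!r}"

    def route(request: str) -> str:
        for keyword, handler in AGENTS.items():
            if keyword in request.lower():
                return handler(request)
        return "Escalated to a human operator for review."  # human-in-the-loop fallback

    print(route("Please deploy the new billing service to staging"))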

Mirantis k0rdent unifies AI, VM & container workloads at scale

Techday NZ, 7 days ago

Mirantis has released updates to its k0rdent platform, introducing unified management capabilities for both containerised and virtual machine (VM) workloads aimed at supporting high-performance AI pipelines, modern microservices, and legacy applications.
The new k0rdent Enterprise and k0rdent Virtualization offerings utilise a Kubernetes-native model to unify the management of AI, containerised, and VM-based workloads. By providing a single control plane, Mirantis aims to simplify operational complexity and reduce the need for multiple siloed tools when handling diverse workload requirements.
k0rdent's unified infrastructure management allows organisations to manage AI services, containers, and VM workloads seamlessly within one environment. The platform leverages Kubernetes orchestration to automate the provisioning, scaling, and recovery of both containers and VMs, helping deliver consistent performance at scale.
The platform also offers improved resource utilisation by automating the scheduling of computing and storage resources for various workloads through dynamic allocation. According to the company, this optimisation contributes to more efficient operations and cost control across modern and traditional application environments.
Organisations can benefit from faster deployment cycles as k0rdent provides declarative infrastructure and self-service templates for containers and VMs. These features are designed to reduce delays typically associated with provisioning and deployment, allowing teams to accelerate time-to-value for projects.
Enhanced portability and flexibility form a key part of the platform's approach. Workloads, including AI applications and microservices, can run alongside traditional VM-based applications on public cloud, private data centres, or hybrid infrastructure, without requiring refactoring. This capability aims to support a wide range of operational strategies and application modernisation efforts.
Shaun O'Meara, Chief Technology Officer at Mirantis, stated, "Organisations are navigating a complex mix of legacy systems and emerging AI demands. k0rdent Enterprise and k0rdent Virtualization are delivering a seamless path to unified, Kubernetes-native AI infrastructure, enabling faster deployment, easier compliance, and reduced risk across any public, private, hybrid, or edge environment."
With the new updates, platform engineers can define, deploy, and operate Kubernetes-based infrastructure using declarative automation, GitOps workflows, and validated templates from the Mirantis ecosystem. The solution is built on k0s, an open source CNCF Sandbox Kubernetes distribution, which Mirantis says enables streamlined infrastructure management and supports digital transformation initiatives across enterprises.
k0rdent Virtualization, which operates on Mirantis k0rdent Enterprise, is positioned as an alternative to VMware tools such as vSphere, ESXi, and vRealize. This is intended to facilitate enterprises seeking to modernise application portfolios or expand edge computing infrastructure, including the integration of AI and cloud-native workloads, while retaining support for legacy infrastructure.
The platform supports distributed workloads running across a variety of environments. It enables platform engineering teams to manage Kubernetes clusters at scale, build tailored internal developer platforms, and maintain compliance and operational consistency.
k0rdent offers composable features through declarative automation, centralised policy enforcement, and deployment templates that can be used with Amazon Web Services (AWS), Microsoft Azure, Google Cloud Platform (GCP), vSphere, and OpenStack. Mirantis provides k0rdent Enterprise and k0rdent Virtualization directly and via channel partners to meet the needs of organisations managing distributed and AI-driven workloads.
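The declarative, Kubernetes-native model the article refers to can be summarised with a brief conceptual sketch: a controller compares the workloads an operator has declared against the actual state and derives the actions needed to converge them. The workload definitions below are hypothetical, and the sketch is not Mirantis code.

    # Conceptual sketch of declarative reconciliation; not Mirantis code.
    desired = {
        "web": {"replicas": 3, "kind": "container"},
        "legacy-erp": {"replicas": 1, "kind": "vm"},
    }
    actual = {
        "web": {"replicas": 2, "kind": "container"},
    }

    def reconcile(desired: dict, actual: dict) -> list[str]:
        """Return the actions needed to converge actual state onto the declared state."""
        actions = []
        for name, spec in desired.items():
            current = actual.get(name)
            if current is None:
                actions.append(f"create {spec['kind']} workload {name!r} with {spec['replicas']} replica(s)")
            elif current != spec:
                actions.append(f"update {name!r}: {current} -> {spec}")
        for name in actual.keys() - desired.keys():
            actions.append(f"delete workload {name!r} (no longer declared)")
        return actions

    for action in reconcile(desired, actual):
        print(action)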
