F5 Delivers Scalable And Secure Cloud-Native Network Functionality For AI And High-Bandwidth Applications

Scoop | 21-05-2025

Press Release – F5
F5 (NASDAQ: FFIV), the global leader in delivering and securing every app and API, today unveiled F5 BIG-IP Next Cloud-Native Network Functions (CNF) 2.0, an evolved solution that significantly enhances the capabilities of the F5 Application Delivery and Security Platform (ADSP) for large-scale cloud-native applications. With advanced Kubernetes-native features, F5 BIG-IP Next CNF 2.0 redefines how organisations adapt to increasingly complex and resource-intensive operations caused by high-bandwidth applications such as AI by delivering scalable, resource-efficient, and secure network functionality for telecommunications service providers, internet service providers (ISPs), cloud service providers, and large enterprises.
Designed to support diverse industries—from telecommunications to cloud services—F5 BIG-IP Next CNF 2.0 helps organisations revolutionise high-bandwidth operations. Service providers can cut costs with more efficient resource allocation and scaling, mitigate modern security threats, and simplify management through Kubernetes-native automation. By integrating essential services such as DDoS protection, firewall, intrusion prevention system (IPS), and carrier-grade NAT (CGNAT), F5 BIG-IP Next CNF 2.0 empowers providers to consolidate network operations, safeguard infrastructure, and proactively scale amidst increasing traffic demands.
'Service providers and large enterprises are under pressure to scale faster, operate leaner, and stay secure—all in increasingly complex environments,' said Kunal Anand, Chief Innovation Officer at F5. 'With BIG-IP Next CNF 2.0, we're extending the F5 ADSP with a truly cloud-native solution built for modern, decentralised infrastructure. Unlike legacy virtualised approaches that burn resources, our Kubernetes-native architecture unlocks smarter scaling, stronger security, and more efficient delivery of high-bandwidth services—giving customers the flexibility to move faster without compromise.'
Raising the Bar for Cloud-Native Network Functions
Telecommunications and enterprise networks face an urgent need to balance escalating traffic volumes, tight budgets, and growing security threats—all within complex, distributed architectures. F5 BIG-IP Next CNF 2.0 directly addresses these challenges with tools that consolidate network functions, reduce resource consumption, and optimise scalability and security. Highlights of F5 BIG-IP Next CNF 2.0 include:
Disaggregation (DAG): Enables horizontal scalability for traffic steering and resource optimisation.
Accelerated DNS: Offers faster query responses and reduced latency via caching and secure zone transfers.
Policy Enforcer: Integrates traffic optimisation features like video acceleration, URL filtering, and context-aware controls.
Unified Security Services: Combines firewall, DDoS mitigation, IPS, and CGNAT for centralised management and robust protection.
Platform Enhancements: Maximises flexibility with Kubernetes-native automation and separate scaling of control and data planes.
Optimised for Large Networks Across Industries
F5 BIG-IP Next CNF 2.0 helps telecommunications providers supercharge their 4G and 5G environments with advanced traffic steering and enhanced security tailored for N6/SGi-LAN architectures. ISPs benefit from capabilities like CGNAT to mitigate IPv4 shortages while boosting performance through system disaggregation. Cloud service providers gain the edge with scalable global server load balancing (GSLB) and AI-ready DNS features, ensuring seamless digital experiences. Enterprises, on the other hand, can power IT and SecOps teams with intelligent traffic optimisation, robust DDoS defences, and simplified policy enforcement for bandwidth-intensive applications, reinforcing their operational agility and security posture.
With 33 per cent lower CPU utilisation, F5 BIG-IP Next CNF 2.0 reduces operational costs and optimises resource consumption. The solution's independent scalability—allowing separate data and control plane scaling—ensures flexibility without bottlenecks, while its edge-ready and power-efficient architecture guarantees low latency and superior user experiences. Integrated security measures protect against large-scale network attacks, and Kubernetes-native automation streamlines workflows with API-driven deployments for faster, simplified operations.
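As context for the independent scaling described above, each plane in a Kubernetes-native deployment typically runs as its own workload that can be resized on its own. The sketch below shows that general pattern with the official Kubernetes Python client; the deployment names, namespace, and replica counts are illustrative placeholders rather than F5-documented resource names.

```python
# Minimal sketch, not F5's implementation: scaling a CNF's control plane and
# data plane independently with the official Kubernetes Python client.
# Deployment names and namespace below are hypothetical placeholders.
from kubernetes import client, config


def scale(deployment: str, namespace: str, replicas: int) -> None:
    """Patch the replica count of a single Deployment."""
    client.AppsV1Api().patch_namespaced_deployment_scale(
        name=deployment,
        namespace=namespace,
        body={"spec": {"replicas": replicas}},
    )


if __name__ == "__main__":
    config.load_kube_config()  # use config.load_incluster_config() inside a pod
    # Grow the data plane to absorb a traffic surge while the control plane,
    # whose load rises far more slowly, stays small.
    scale("cnf-data-plane", "cnf-system", replicas=8)
    scale("cnf-control-plane", "cnf-system", replicas=2)
```

Scaling only the data plane during a traffic peak keeps compute consumption roughly proportional to load, which is the kind of efficiency behaviour the figures above refer to.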
F5 BIG-IP Next CNF 2.0 consolidates services to reduce infrastructure costs by over 60 per cent. Disaggregation enables seamless scalability across CNF instances, while DNS acceleration minimises latency for end users. Advanced traffic optimisation ensures smooth performance during peak demand, empowering service providers to excel in high-bandwidth applications.
F5 BIG-IP Next CNF 2.0 + Red Hat OpenShift
This week at Red Hat Summit 2025, F5 is unveiling BIG-IP Next CNF 2.0 functionality on Red Hat OpenShift. BIG-IP Next CNF 2.0 is designed to work more seamlessly with Red Hat OpenShift, the industry's leading hybrid cloud application platform powered by Kubernetes. Red Hat OpenShift delivers a critical foundation for service providers to more effectively deploy scalable cloud-native applications on a trusted, more consistent platform. By combining Red Hat OpenShift's robust Kubernetes management capabilities with F5 BIG-IP Next CNF 2.0's powerful network functions, service providers can scale their applications more efficiently while unlocking additional value, including advanced traffic handling, optimised security, and simplified usability. Many service providers already rely on Red Hat OpenShift for modern cloud-native operations.
Visit www.f5.com to learn more about how F5 enables transformational cloud-native operations for interconnected networks.
About F5
F5, Inc. (NASDAQ: FFIV) is the global leader that delivers and secures every app. Backed by three decades of expertise, F5 has built the industry's premier platform—F5 Application Delivery and Security Platform (ADSP)—to deliver and secure every app, every API, anywhere: on-premises, in the cloud, at the edge, and across hybrid, multicloud environments. F5 is committed to innovating and partnering with the world's largest and most advanced organisations to deliver fast, available, and secure digital experiences. Together, we help each other thrive and bring a better digital world to life.

Related Articles

Mirantis k0rdent unifies AI, VM & container workloads at scale

Techday NZ | 11 hours ago

Mirantis has released updates to its k0rdent platform, introducing unified management capabilities for both containerised and virtual machine (VM) workloads aimed at supporting high-performance AI pipelines, modern microservices, and legacy applications.

The new k0rdent Enterprise and k0rdent Virtualization offerings utilise a Kubernetes-native model to unify the management of AI, containerised, and VM-based workloads. By providing a single control plane, Mirantis aims to simplify operational complexity and reduce the need for multiple siloed tools when handling diverse workload requirements.

k0rdent's unified infrastructure management allows organisations to manage AI services, containers, and VM workloads seamlessly within one environment. The platform leverages Kubernetes orchestration to automate the provisioning, scaling, and recovery of both containers and VMs, helping deliver consistent performance at scale.

The platform also offers improved resource utilisation by automating the scheduling of computing and storage resources for various workloads through dynamic allocation. According to the company, this optimisation contributes to more efficient operations and cost control across modern and traditional application environments.

Organisations can benefit from faster deployment cycles as k0rdent provides declarative infrastructure and self-service templates for containers and VMs. These features are designed to reduce delays typically associated with provisioning and deployment, allowing teams to accelerate time-to-value for projects.

Enhanced portability and flexibility form a key part of the platform's approach. Workloads, including AI applications and microservices, can run alongside traditional VM-based applications on public cloud, private data centres, or hybrid infrastructure, without requiring refactoring. This capability aims to support a wide range of operational strategies and application modernisation efforts.

Shaun O'Meara, Chief Technology Officer at Mirantis, stated, "Organisations are navigating a complex mix of legacy systems and emerging AI demands. k0rdent Enterprise and k0rdent Virtualization are delivering a seamless path to unified, Kubernetes-native AI infrastructure, enabling faster deployment, easier compliance, and reduced risk across any public, private, hybrid, or edge environment."

With the new updates, platform engineers can define, deploy, and operate Kubernetes-based infrastructure using declarative automation, GitOps workflows, and validated templates from the Mirantis ecosystem. The solution is built on k0s, an open source CNCF Sandbox Kubernetes distribution, which Mirantis says enables streamlined infrastructure management and supports digital transformation initiatives across enterprises.

k0rdent Virtualization, which operates on Mirantis k0rdent Enterprise, is positioned as an alternative to VMware tools such as vSphere, ESXi, and vRealize. This is intended to support enterprises seeking to modernise application portfolios or expand edge computing infrastructure, including the integration of AI and cloud-native workloads, while retaining support for legacy infrastructure.

The platform supports distributed workloads running across a variety of environments. It enables platform engineering teams to manage Kubernetes clusters at scale, build tailored internal developer platforms, and maintain compliance and operational consistency.

k0rdent offers composable features through declarative automation, centralised policy enforcement, and deployment templates that can be used with Amazon Web Services (AWS), Microsoft Azure, Google Cloud Platform (GCP), vSphere, and OpenStack. Mirantis provides k0rdent Enterprise and k0rdent Virtualization directly and via channel partners to meet the needs of organisations managing distributed and AI-driven workloads.
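To make the declarative, template-driven workflow described above concrete, the sketch below submits a cluster definition as a custom resource through the Kubernetes Python client. The API group, version, kind, namespace, and field names are illustrative stand-ins rather than the documented k0rdent schema, so the real names should be taken from the project's own reference.

```python
# Minimal sketch of a declarative, template-driven cluster request submitted
# via the Kubernetes Python client. The group, version, kind, namespace, and
# fields are illustrative placeholders, not the documented k0rdent schema.
from kubernetes import client, config

config.load_kube_config()

cluster_request = {
    "apiVersion": "example.mirantis.com/v1alpha1",  # placeholder group/version
    "kind": "ClusterDeployment",                    # placeholder kind
    "metadata": {"name": "aws-demo", "namespace": "clusters"},
    "spec": {
        # A named, validated template plus a small config block is the whole
        # request; a GitOps tool could apply the same document from a repo.
        "template": "aws-standalone",
        "config": {"region": "us-east-1", "workersNumber": 3},
    },
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="example.mirantis.com",
    version="v1alpha1",
    namespace="clusters",
    plural="clusterdeployments",
    body=cluster_request,
)
```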

Most IT leaders struggle to prove ROI from cloud spending

Techday NZ | 12 hours ago

CloudBolt Software has released a report indicating that most IT leaders struggle to demonstrate return on investment (ROI) from cloud usage, despite claiming confidence in their organisation's FinOps maturity.

The report, entitled "Performance vs. Perception: The FinOps Execution Gap," was conducted in partnership with Wakefield Research. It surveyed 350 senior IT leaders across various industries in the United States to assess the current state of Financial Operations (FinOps) practices regarding cloud cost management.

The research identifies a notable disconnect between perceived levels of maturity in FinOps practices and the actual operational effectiveness in managing and optimising cloud costs. Although many respondents label their FinOps approaches as mature and automated, a substantial number reveal ongoing challenges in consistently demonstrating the value generated from cloud investments.

According to the findings, 78% of IT leaders admitted to difficulties in consistently showcasing cloud ROI. When asked to define ROI, respondents primarily pointed to revenue growth (43%), followed by operational efficiency and productivity (36%), and cost savings (35%).

The report also found strong acknowledgement of the rising impact of Kubernetes on cloud expenditure. While 98% of participants agree that Kubernetes is becoming a significant driver of cloud spend, 91% admit they are unable to optimise their Kubernetes clusters effectively, identifying a gap in operational capabilities as container adoption increases.

Many organisations report relatively high levels of automation in their cloud operations. The study reveals that 66% of respondents say their environments are mostly or fully automated for cloud waste management and spend optimisation. Despite this, 58% indicate that identifying and remediating cloud-cost waste can still take weeks or months, raising questions about the true extent of automation achieved by these organisations.

Kyle Campos, Chief Technology and Product Officer at CloudBolt, stated: "FinOps as a discipline is more sound than ever and continues to evolve effectively. But a good percentage of organizations may be taking a victory lap before even navigating the first turn. Through this research, it's evident that while a majority indicate they believe they've achieved FinOps maturity, the data shows they are still in the early stages of operationalizing and optimizing FinOps practices. Confidence in lieu of measurable progress obscures reality and hinders the improvement necessary for significant business impact."

When identifying barriers to optimising ROI from cloud investments, 55% of respondents cited difficulty in linking cloud expenditure directly to business outcomes. Other key challenges include organisational misalignment and operational silos (48%), as well as issues related to inefficient resource management, such as poor tagging and inconsistent accountability (44%).

The report also highlights the ongoing relevance of private cloud and data centres in driving ROI, with hybrid multi-cloud management identified as the top priority for 42% of those surveyed. Over the next six to twelve months, 39% of respondents expect hybrid cloud management to be a funded priority, second only to the optimisation of artificial intelligence and machine learning workloads (AI/ML cloud-cost optimisation), which was cited by 40%.

Campos added: "Leaders believe they have visibility into their cloud spend. Yet without necessary governance, enforcement, and effective remediation, they are doing little to reduce the insight-to-action gap – the time it takes to go from 'we have a problem' to 'problem fixed and cost optimized.' This leads to persistent inefficiencies and inflated costs. Kubernetes and AI-driven workloads especially highlight this disconnect – rapid adoption without proper operational control and automated actions (both retrospective and proactive) is dramatically affecting return on investment. If FinOps practices are not focusing on continuous optimization and employing the capabilities to execute on that, organizations will continue to struggle to effectively show cloud ROI."

The full report includes comprehensive data analysis and recommendations, addressing the existing gaps between FinOps perceptions and the realities of cloud operational performance.

Portworx & Red Hat help enterprises cut virtualisation costs

Techday NZ | 22-05-2025

Portworx has introduced Portworx for KubeVirt, aiming to provide a software-defined storage solution tailored for virtual machine (VM) workloads running on Kubernetes with Red Hat OpenShift Virtualization Engine.

The new offering is designed to deliver a cost-effective and lower-risk approach for enterprises transitioning their VM workloads to Kubernetes infrastructure. By integrating Portworx with the Red Hat OpenShift Virtualization Engine, companies are expected to experience optimised functionality, simplified management of VMs and containers, and reduced total cost of ownership.

Customers using Portworx alongside Red Hat OpenShift Virtualization have reported cost savings ranging from 30% to 50% over the past year compared to their previous virtualisation expenses. This cost optimisation is seen as particularly significant for organisations managing extensive virtualised environments.

The Portworx for KubeVirt solution is developed to address modernisation challenges faced by enterprises. It allows businesses to continue operating applications on VMs within Kubernetes while they refactor existing workloads or develop new cloud-native applications at their own pace based on available resources and transformation timelines.

The offering provides application and data flexibility by supporting VM migration to Kubernetes on multiple deployment environments, including on-premises infrastructure, public cloud, edge locations, and hybrid configurations, wherever Red Hat OpenShift is supported.

Red Hat OpenShift Virtualization Engine focuses on simplifying the deployment, management, and scaling of VM workloads. Its streamlined approach is combined with Portworx's enterprise-grade data management capabilities to assist customers in migrating and managing their VM applications on a unified platform.

Ajay Singh, Chief Product Officer at Pure Storage, commented on the collaboration with Red Hat by stating, "Red Hat is the perfect partner for our mission as more companies are looking to go cloud-native. Many companies are leveraging Portworx with OpenShift to not only migrate and manage their workloads, but also power innovative application development. We are proud of the success from this Red Hat partnership and look forward to what the next year holds."

Ashesh Badani, Senior Vice President and Chief Product Officer at Red Hat, addressed the ongoing reliance of many organisations on virtualisation solutions, saying, "While organisations are increasingly migrating to cloud-native containerised workloads, many still rely heavily on virtualisation solutions to support their public and private cloud environments. With Red Hat OpenShift Virtualization Engine, we are able to meet organisations where they are, to modernise at a pace that best suits their business requirements, with a more cost-effective, streamlined solution."

"We are pleased to collaborate with Portworx to bring their expertise and data management capabilities to Red Hat OpenShift Virtualization Engine as an optimised offering, making it even easier for organisations to deploy and manage virtualised workloads."

The combined capability of Portworx for KubeVirt with Red Hat OpenShift Virtualization Engine is aimed at allowing enterprises to bridge the gap between legacy VM operation and cloud-native application development. By offering flexibility and extending support for VM migration across diverse infrastructure setups, this partnership addresses both technical and financial objectives for organisations looking to modernise their virtualisation strategies.
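As a rough sketch of the underlying pattern, a KubeVirt virtual machine consumes Portworx storage through an ordinary PersistentVolumeClaim bound to a Portworx StorageClass. The class name, namespace, and sizes below are assumptions made for illustration and are not taken from Portworx or Red Hat documentation.

```python
# Minimal sketch: requesting Portworx-backed block storage for a KubeVirt VM.
# The StorageClass name ("portworx-rwx"), namespace, and size are assumed
# values for illustration; real class names depend on the cluster's setup.
from kubernetes import client, config

config.load_kube_config()

pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="vm-root-disk", namespace="vms"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteMany"],      # shared access supports live migration
        storage_class_name="portworx-rwx",   # assumed Portworx StorageClass
        volume_mode="Block",                 # raw block device used as the VM disk
        resources=client.V1ResourceRequirements(requests={"storage": "30Gi"}),
    ),
)
client.CoreV1Api().create_namespaced_persistent_volume_claim("vms", pvc)

# A KubeVirt VirtualMachine would then reference this claim as a disk volume,
# e.g. spec.template.spec.volumes -> persistentVolumeClaim: {claimName: vm-root-disk}
```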
