
Juniper Networks introduces purpose-built solution for GPUaaS and AIaaS providers
Juniper Networks, a leader in secure, AI-Native Networking, recently announced a new solution purpose-built for neocloud providers, traditional service providers (SPs) and other AI cloud providers that are deploying and managing GPU-as-a-Service (GPUaaS) and AI-as-a-Service (AIaaS) offerings.
The new solution accelerates time-to-market, simplifies operations and lowers the cost of managing these services, all while delivering high-performance data centre networks optimised for multi-tenant environments. This is the latest Networks for AI solution from Juniper, which is experiencing strong momentum in the AI data centre space as companies worldwide seek industry-leading switching, routing, security and automated operations.
Enterprises are turning to neocloud providers, traditional SPs and other AI cloud providers for on-demand AI services to reduce costs and improve time-to-market. These companies build optimised infrastructure to deliver better performance and more competitive pricing, particularly by enabling a pay-as-you-go model for expensive hardware such as GPUs, making AI development more accessible. Such providers are typically more agile and responsive, offering cutting-edge AI technologies with greater flexibility and customisation. Some also focus on data sovereignty and privacy, catering to organisations with strict regulatory requirements.
Juniper's solution provides a powerful and secure way to deploy highly optimised, cost-effective, multi-tenant cloud-based AI services. It delivers high-performing, scalable and secure networks for cloud-based AI services such as training and inference with Retrieval-Augmented Generation (RAG). It is quick to deploy and easy to manage, giving AI cloud providers a time-to-market advantage. The solution today consists of QFX Series Switches, PTX Series Routers and SRX Series Firewalls managed via Juniper Apstra® data centre assurance software and Mist AI™.
The solution delivers these key benefits:
Automation accelerates deployment and simplifies operations
By leveraging a combination of intent-based networking and Mist AI, Juniper simplifies Day 0/1/2+ operations and reduces ongoing costs by up to 85 percent in some instances, with up to a 10x reduction in deployment time. Through an integration with Red Hat® OpenShift®, even more automation benefits can be achieved in Kubernetes environments.
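As an illustration of what API-driven, intent-based Day 0/1/2+ automation can look like, here is a minimal Python sketch that queries an Apstra controller for the intent blueprints it is enforcing. The controller address, credentials, endpoint paths and field names are assumptions for illustration, not a documented integration recipe.

```python
# Hypothetical sketch: listing intent blueprints on an Apstra controller via REST.
# Endpoint paths, payload fields and credentials are illustrative assumptions.
import requests

APSTRA = "https://apstra.example.net"  # hypothetical controller address


def get_token(username: str, password: str) -> str:
    """Authenticate and return an API token (assumed login endpoint)."""
    resp = requests.post(
        f"{APSTRA}/api/user/login",
        json={"username": username, "password": password},
        verify=False,  # lab-only: tolerate a self-signed certificate
    )
    resp.raise_for_status()
    return resp.json()["token"]


def list_blueprints(token: str) -> list[dict]:
    """Return the blueprints (declared network intent) the controller manages."""
    resp = requests.get(
        f"{APSTRA}/api/blueprints",
        headers={"AuthToken": token},
        verify=False,
    )
    resp.raise_for_status()
    return resp.json().get("items", [])


if __name__ == "__main__":
    token = get_token("admin", "password")  # placeholder credentials
    for bp in list_blueprints(token):
        print(bp.get("label"), bp.get("id"))
```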
Zero Trust security and multi-tenancy protect users and data
Juniper's Zero Trust DC Security portfolio, along with the EVPN-VXLAN capabilities of Juniper switches, provides multi-tenancy and protects AI infrastructure, models and confidential data from internal and external threats. Juniper's SRX4700 next-generation firewall is a power-efficient 1U device designed for service providers, cloud providers and large enterprises. It delivers the industry's highest firewall throughput per rack unit, up to 1.4 Tbps, and supports 400 Gbps interfaces with wire-speed MACsec to safeguard data in motion.
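To make the multi-tenancy point concrete, the following is a hedged sketch of how a per-tenant EVPN-VXLAN segment might be pushed to a QFX leaf using Juniper's PyEZ library (junos-eznc). The hostname, credentials, VLAN/VNI numbers and route targets are illustrative assumptions, not values from the announcement.

```python
# Hedged sketch: staging a per-tenant EVPN-VXLAN virtual network on a QFX switch
# with PyEZ. All names and numbers below are illustrative assumptions.
from jnpr.junos import Device
from jnpr.junos.utils.config import Config

TENANT_CONFIG = """
set vlans TENANT-A vlan-id 101
set vlans TENANT-A vxlan vni 10101
set protocols evpn encapsulation vxlan
set protocols evpn extended-vni-list 10101
set switch-options route-distinguisher 10.0.0.1:101
set switch-options vrf-target target:65000:101
"""

with Device(host="qfx-leaf1.example.net", user="netops", password="secret") as dev:
    with Config(dev, mode="exclusive") as cu:
        cu.load(TENANT_CONFIG, format="set")   # stage the tenant overlay
        cu.pdiff()                             # show the candidate-config diff
        cu.commit(comment="add tenant A EVPN-VXLAN segment")
```

Isolating each tenant in its own VNI with a distinct route target is what keeps one customer's AI traffic and routes invisible to another's on the shared fabric.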
Validated solutions provide confidence
With Juniper's Ops4AI Lab, providers can validate their AI models prior to deployment across various networking, compute and storage platforms and configurations. In addition, Juniper Validated Designs (JVDs) exist for multi-vendor AI blueprints, including NVIDIA and AMD accelerated computing and Weka and VAST storage. These tools build confidence and shorten GPUaaS and AIaaS deployment times.
Open flexibility avoids vendor lock-in
Juniper has the only multi-vendor solution for data centre fabric management and automation, making it the most flexible to design and the simplest to manage. Juniper also has unique capabilities to optimise AI workload performance over Ethernet, allowing providers to use open, proven technologies and products that avoid vendor lock-in. With a runway to 1.6 Tbps-per-port switches and multi-vendor support for GPU-agnostic systems, Juniper reduces costs, enables faster innovation, maximises design flexibility and avoids supply chain challenges.
Continued momentum in AI DC
As a leader in AI-Native Networking and AI technologies, Juniper's early bet on 'Networks for AI' is paying off with strong market success. First among OEMs to ship 800G Ethernet switches, Juniper now commands a leading 49 percent share of the 800G OEM market, according to 650 Group's Q1-Q3 2024 revenue shipment report published in December 2024. With AI-optimised solutions, the Ops4AI Lab and white-glove customer service, Juniper has been winning many AI data centre customers. Juniper AI data centre solutions power networks from the world's largest AI cloud providers, with hundreds of thousands of GPUs, to enterprise deployments.
Image Credit: Juniper Networks
Related Articles


Web Release
4 days ago
Red Hat Powers Modern Virtualization on Microsoft Azure
Red Hat, the world's leading provider of open source solutions, announced the public preview of Red Hat OpenShift Virtualization on Microsoft Azure Red Hat OpenShift. Available as a self-managed operator included in Azure Red Hat OpenShift, Red Hat OpenShift Virtualization offers organizations an accelerated path to modernization by streamlining the migration of virtual machines (VMs) from existing virtualization platforms to a scalable, cloud-native platform.

Azure Red Hat OpenShift is a turnkey application platform that is jointly managed and supported by Red Hat and Microsoft, designed to help reduce the complexities associated with managing the underlying infrastructure and empower IT teams to focus time and resources on innovation and modernization rather than routine maintenance. With Red Hat OpenShift Virtualization on Azure Red Hat OpenShift, organizations benefit from a more consistent hybrid cloud stack that can support VMs and containers alike, significantly streamlining application modernization and accelerating cloud-native strategies. This enables organizations to modernize existing critical VM infrastructure while continuing to evolve with new innovations that meet future business needs.

Red Hat OpenShift Virtualization on Azure Red Hat OpenShift empowers organizations to:

Accelerate VM migration: quickly migrate and scale existing VM workloads with built-in migration tooling and automation capabilities, such as those through Red Hat Ansible Automation Platform and Red Hat Advanced Cluster Management, to simplify the migration process, minimize disruption and enable teams to quickly shift to modern infrastructure (the KubeVirt resource model this platform builds on is sketched after this article).

Simplify operations: gain a unified view of operations to more seamlessly manage both VMs and containers on the same platform across the hybrid cloud. Automated deployment and management of Red Hat OpenShift clusters further reduces complexity and risk.

Modernize infrastructure: build, modernize and deploy applications at scale. By adopting this Kubernetes-based platform, organizations land on modern infrastructure that brings them closer to their cloud-native application modernization goals, while Azure Red Hat OpenShift brings modern application development processes and tools to VMs that help expedite the modernization of VM-based applications.

Optimize resources: in addition to increasing DevOps productivity and decreasing the time to deploy applications with Azure Red Hat OpenShift, realize further optimization with Red Hat OpenShift Virtualization by right-sizing VMs to better match workload needs.

Built on the industry's leading hybrid cloud application platform powered by Kubernetes and Microsoft Azure's trusted cloud infrastructure, Azure Red Hat OpenShift delivers a future-ready platform with integrated security tooling, automation and management capabilities to extend innovation across the hybrid cloud.

Red Hat OpenShift Virtualization is now available in public preview as a self-managed operator on Azure Red Hat OpenShift. Organizations can also apply their Microsoft Azure Consumption Commitment (MACC) and utilize the Azure Migration and Modernization Program (AMMP) for Azure Red Hat OpenShift. Additionally, customers can use the Azure Hybrid Benefit to reuse existing on-premises licenses for both Red Hat Enterprise Linux and Windows.

Supporting Quotes

Chris Wright, senior vice president and chief technology officer, Red Hat: 'As organizations continue to modernize and move away from legacy virtualization solutions, it is critical to choose a secure computing foundation for the future that can adapt to their current and evolving multi-infrastructure environments. Building upon our extensive history of collaboration and joint engineering efforts with Microsoft Azure, Red Hat OpenShift Virtualization running on Azure Red Hat OpenShift delivers more consistent orchestration for VMs and containers alike, setting organizations on a clear path to modern application development and deployment.'

Brendan Burns, corporate vice president, Azure Compute, Microsoft: 'As customers modernize and move their apps from traditional virtual-machine-based fabrics that are on-premises to modern Kubernetes platforms, some components still need to run on traditional virtual machines for a while. To address this, Microsoft and Red Hat are collaborating to bring open-source innovation from the KubeVirt project into Azure Red Hat OpenShift. What I'm most excited about is how this enables customers to add virtualization capabilities to Azure Red Hat OpenShift, which allows them to modernize at their own pace, and get the best return on investment as they transition to the cloud.'
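For readers curious what the underlying API looks like, below is a minimal sketch, assuming cluster access and the kubernetes Python client, of declaring a VM through the KubeVirt VirtualMachine custom resource that OpenShift Virtualization builds on. The namespace, image and sizing are hypothetical.

```python
# Hedged sketch: declaring a VM via the KubeVirt VirtualMachine custom resource.
# Namespace, VM name, image and sizing are illustrative assumptions.
from kubernetes import client, config

vm_manifest = {
    "apiVersion": "kubevirt.io/v1",
    "kind": "VirtualMachine",
    "metadata": {"name": "migrated-app-vm", "namespace": "demo"},
    "spec": {
        "running": True,  # start the VM as soon as it is created
        "template": {
            "spec": {
                "domain": {
                    "devices": {
                        "disks": [{"name": "rootdisk", "disk": {"bus": "virtio"}}]
                    },
                    "resources": {"requests": {"memory": "2Gi", "cpu": "1"}},
                },
                "volumes": [{
                    "name": "rootdisk",
                    "containerDisk": {"image": "quay.io/containerdisks/fedora:latest"},
                }],
            }
        },
    },
}

config.load_kube_config()  # assumes a kubeconfig with access to the cluster
api = client.CustomObjectsApi()
api.create_namespaced_custom_object(
    group="kubevirt.io", version="v1", namespace="demo",
    plural="virtualmachines", body=vm_manifest,
)
```

Because the VM is just another Kubernetes object, it can be listed, labelled and managed with the same tooling as containers, which is the "unified view of operations" the release describes.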


Channel Post MEA
26-05-2025
Juniper Networks And ServiceNow Extend Partnership
Juniper Networks has extended its partnership with ServiceNow to integrate Juniper's AI-native networking platform, Mist, with ServiceNow Telecom Service Management, Inventory and Sales & Order Management for Telecom. The combined solution brings end-to-end network and service automation together with AI-native, cloud-native and 100 percent API-driven wired, wireless and SD-WAN solutions to drive business transformation and deliver unparalleled efficiencies and cost savings to enterprises and MSPs. This integration is the first of its kind for the solution.

After deploying the solution's Provisioning & Deployment capabilities as a key element of its Secure Networking for Enterprise managed services portfolio, Deutsche Telekom (DT) has accelerated time-to-value for customers, who gain the benefits of proactive operations with a DT-Juniper Mist enterprise full-stack offering. By extending the benefits of Juniper Mist AI™ and automation throughout the entire NetOps (network operations) lifecycle, Juniper and ServiceNow deliver lasting value to DT and its customers. Through this deep integration and automation, DT has created a unique selling point among managed service providers.

The certified integration combines Juniper's AI-native insight from Juniper Mist's wired, wireless and WAN Assurance products with ServiceNow's automated onboarding, service management, sales & order management, asset visibility, auto-ticketing and resolution capabilities, allowing joint customers to facilitate a variety of Day 0, 1 and 2+ capabilities. The solution delivers enhanced network deployment efficiencies, fewer network disruptions and optimized operational costs. Thanks to the completely open and programmable nature of both solutions, the two companies were able to deliver and validate these capabilities quickly and efficiently, with tight integration between the platforms (a hedged sketch of the kind of API call such an integration rests on follows this article).

ServiceNow has leveraged Juniper Mist™ to modernize its own network infrastructure with full-stack wired, wireless and SD-WAN solutions, enabling seamless client-to-cloud visibility, zero-touch provisioning and enhanced automation. This transformation has resulted in a 60 percent reduction in network costs, a 90 percent reduction in wireless issues and 50 percent faster network deployment, significantly improving operational efficiency and employee experience. Juniper has been a registered Build partner with ServiceNow since 2023.
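Since Mist is fully API-driven, integrations such as this one are built against its REST APIs. Below is a hedged Python sketch of pulling a site's device inventory, the kind of data an ITSM integration would sync for asset visibility and auto-ticketing; the token, site ID and the fields printed are illustrative assumptions.

```python
# Hedged sketch: reading a site's device inventory from the Mist cloud API.
# Token, site ID and response fields used here are illustrative assumptions.
import requests

MIST_API = "https://api.mist.com/api/v1"
TOKEN = "REDACTED"       # hypothetical API token
SITE_ID = "0000-0000"    # hypothetical site identifier

headers = {"Authorization": f"Token {TOKEN}"}

# Pull the device inventory for one site, e.g. to sync assets into an ITSM tool.
resp = requests.get(f"{MIST_API}/sites/{SITE_ID}/devices", headers=headers)
resp.raise_for_status()
for device in resp.json():
    # Field names may differ; .get() tolerates missing keys in this sketch.
    print(device.get("name"), device.get("model"), device.get("mac"))
```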


Web Release
25-05-2025
Red Hat Unlocks Generative AI for Any Model and Any Accelerator Across the Hybrid Cloud with Red Hat AI Inference Server
Red Hat, the world's leading provider of open source solutions, announced Red Hat AI Inference Server, a significant step towards democratizing generative AI (gen AI) across the hybrid cloud. A new offering within Red Hat AI, the enterprise-grade inference server is born from the powerful vLLM community project and enhanced by Red Hat's integration of Neural Magic technologies, offering greater speed, accelerator efficiency and cost-effectiveness to help deliver Red Hat's vision of running any gen AI model on any AI accelerator in any cloud environment. Whether deployed standalone or as an integrated component of Red Hat Enterprise Linux AI (RHEL AI) and Red Hat OpenShift AI, the platform empowers organizations to more confidently deploy and scale gen AI in production.

Inference is the critical execution engine of AI, where pre-trained models translate data into real-world impact. It is the pivotal point of user interaction, demanding swift and accurate responses. As gen AI models explode in complexity and production deployments scale, inference can become a significant bottleneck, devouring hardware resources and threatening to cripple responsiveness and inflate operational costs. Robust inference servers are no longer a luxury but a necessity for unlocking the true potential of AI at scale.

Red Hat directly addresses these challenges with Red Hat AI Inference Server, an open inference solution engineered for high performance and equipped with leading model compression and optimization tools. This innovation empowers organizations to tap into the transformative power of gen AI by delivering dramatically more responsive user experiences and freedom in their choice of AI accelerators, models and IT environments.

vLLM: Extending inference innovation

Red Hat AI Inference Server builds on the industry-leading vLLM project, started by the University of California, Berkeley in mid-2023. The community project delivers high-throughput gen AI inference, support for large input context, multi-GPU model acceleration, support for continuous batching and more. vLLM's broad support for publicly available models, coupled with its day-zero integration of leading frontier models including DeepSeek, Gemma, Llama, Mistral and Phi, as well as open, enterprise-grade reasoning models like Llama Nemotron, positions it as a de facto standard for future AI inference innovation. Leading frontier model providers are increasingly embracing vLLM, solidifying its critical role in shaping gen AI's future.

Introducing Red Hat AI Inference Server

Red Hat AI Inference Server packages the leading innovation of vLLM into an enterprise-grade offering, available as a standalone containerized product or as part of both RHEL AI and Red Hat OpenShift AI. Across any deployment environment, Red Hat AI Inference Server provides users with a hardened, supported distribution of vLLM (a brief usage sketch follows this article), along with:

Intelligent LLM compression tools for dramatically reducing the size of both foundational and fine-tuned AI models, minimizing compute consumption while preserving, and potentially enhancing, model accuracy.

An optimized model repository, hosted in the Red Hat AI organization on Hugging Face, offering instant access to a validated and optimized collection of leading AI models ready for inference deployment, helping to accelerate efficiency by 2-4x without compromising model accuracy.

Red Hat's enterprise support and decades of expertise in bringing community projects to production environments.

Third-party support for even greater deployment flexibility, enabling Red Hat AI Inference Server to be deployed on non-Red Hat Linux and Kubernetes platforms pursuant to Red Hat's third-party support policy.

Red Hat's vision: any model, any accelerator, any cloud

The future of AI must be defined by limitless opportunity, not constrained by infrastructure silos. Red Hat sees a horizon where organizations can deploy any model, on any accelerator, across any cloud, delivering an exceptional, more consistent user experience without exorbitant costs. To unlock the true potential of gen AI investments, enterprises require a universal inference platform: a standard for more seamless, high-performance AI innovation, both today and in the years to come.

Just as Red Hat pioneered the open enterprise by transforming Linux into the bedrock of modern IT, the company is now poised to architect the future of AI inference. vLLM has the potential to become a linchpin for standardized gen AI inference, and Red Hat is committed to building a thriving ecosystem around not just the vLLM community but also llm-d for distributed inference at scale. The vision is clear: regardless of the AI model, the underlying accelerator or the deployment environment, Red Hat intends to make vLLM the definitive open standard for inference across the new hybrid cloud.

Supporting Quotes

Joe Fernandes, vice president and general manager, AI Business Unit, Red Hat: 'Inference is where the real promise of gen AI is delivered, where user interactions are met with fast, accurate responses delivered by a given model, but it must be delivered in an effective and cost-efficient way. Red Hat AI Inference Server is intended to meet the demand for high-performing, responsive inference at scale while keeping resource demands low, providing a common inference layer that supports any model, running on any accelerator in any environment.'

Ramine Roane, corporate vice president, AI Product Management, AMD: 'In collaboration with Red Hat, AMD delivers out-of-the-box solutions to drive efficient generative AI in the enterprise. Red Hat AI Inference Server enabled on AMD Instinct™ GPUs equips organizations with enterprise-grade, community-driven AI inference capabilities backed by fully validated hardware accelerators.'

Jeremy Foster, senior vice president and general manager, Cisco: 'AI workloads need speed, consistency, and flexibility, which is exactly what the Red Hat AI Inference Server is designed to deliver. This innovation offers Cisco and Red Hat opportunities to continue to collaborate on new ways to make AI deployments more accessible, efficient and scalable, helping organizations prepare for what's next.'

Bill Pearson, vice president, Data Center & AI Software Solutions and Ecosystem, Intel: 'Intel is excited to collaborate with Red Hat to enable Red Hat AI Inference Server on Intel® Gaudi® accelerators. This integration will provide our customers with an optimized solution to streamline and scale AI inference, delivering advanced performance and efficiency for a wide range of enterprise AI applications.'

John Fanelli, vice president, Enterprise Software, NVIDIA: 'High-performance inference enables models and AI agents not just to answer, but to reason and adapt in real time. With open, full-stack NVIDIA accelerated computing and Red Hat AI Inference Server, developers can run efficient reasoning at scale across hybrid clouds, and deploy with confidence using Red Hat Inference Server with the new NVIDIA Enterprise AI validated design.'
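Because Red Hat AI Inference Server is a hardened distribution of vLLM, which exposes an OpenAI-compatible HTTP API, a deployed endpoint can typically be queried with the standard openai client. Below is a minimal sketch; the base URL, API-key handling and model name are assumptions, not details from the announcement.

```python
# Hedged sketch: querying an OpenAI-compatible inference endpoint such as one
# served by vLLM. Base URL, key and model name are illustrative assumptions.
from openai import OpenAI

client = OpenAI(
    base_url="http://inference.example.net:8000/v1",  # hypothetical server
    api_key="not-needed-for-local-vllm",              # a local vLLM may ignore the key
)

response = client.chat.completions.create(
    model="meta-llama/Llama-3.1-8B-Instruct",  # illustrative model id
    messages=[{"role": "user", "content": "Summarize what an inference server does."}],
    max_tokens=128,
)
print(response.choices[0].message.content)
```

Keeping the client-facing API OpenAI-compatible is what lets the same application code target any model, on any accelerator, in any environment, the portability the release emphasizes.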