Latest news with #HPEPrivateCloudAI


Techday NZ
20-05-2025
- Business
- Techday NZ
HPE expands NVIDIA partnership to boost AI platforms & storage
Hewlett Packard Enterprise has announced expanded integration with NVIDIA to enhance its AI portfolio and support organisations throughout the AI lifecycle. HPE detailed several updates to its AI offerings, including support for NVIDIA's recent technologies in the HPE Private Cloud AI platform, improvements to storage with the Alletra Storage MP X10000, and server and software enhancements. These integrations are aimed at speeding up the deployment of AI solutions by enterprises, service providers, and research institutions.

The HPE Private Cloud AI, co-developed with NVIDIA, now incorporates feature branch model updates from NVIDIA AI Enterprise, and is aligned with the NVIDIA Enterprise AI Factory validated design. This update provides AI developers with the ability to test, validate, and optimise workloads by leveraging the full capabilities of NVIDIA's software, including frameworks and microservices for pre-trained models.

Antonio Neri, President and Chief Executive Officer of HPE, said: "Our strong collaboration with NVIDIA continues to drive transformative outcomes for our shared customers. By co-engineering cutting-edge AI technologies elevated by HPE's robust solutions, we are empowering businesses to harness the full potential of these advancements throughout their organisation, no matter where they are on their AI journey. Together, we are meeting the demands of today, while paving the way for an AI-driven future."

Jensen Huang, Founder and Chief Executive Officer of NVIDIA, added: "Enterprises can build the most advanced NVIDIA AI factories with HPE systems to ready their IT infrastructure for the era of generative and agentic AI. Together, NVIDIA and HPE are laying the foundation for businesses to harness intelligence as a new industrial resource that scales from the data center to the cloud and the edge."

Joseph Yang, General Manager of HPC and AI for APAC and India at HPE, commented: "As AI-driven solutions continue to grow in demand across the APAC region, this deepened integration between HPE and NVIDIA will accelerate enterprises' ability to leverage AI at scale. With innovations like HPE Private Cloud AI and the Alletra Storage MP X10000, businesses in APAC will be able to seamlessly streamline AI development, from data ingestion to model training and continuous learning, all while ensuring performance, security, and efficiency."

The HPE Private Cloud AI platform aims to help organisations standardise their approach to AI across different departments, reducing risk and supporting scaling from developer environments to production-ready generative AI applications. New feature branch support allows businesses to experiment with different model features while maintaining safe, multi-layered strategies through existing production branch support.

HPE Alletra Storage MP X10000 now offers a software development kit (SDK) compatible with the NVIDIA AI Data Platform reference design. This SDK facilitates the integration of enterprise unstructured data directly with NVIDIA's ecosystem, supporting data ingestion, inference, training, and ongoing learning processes. The system leverages remote direct memory access (RDMA) technology to transfer data efficiently between the X10000, GPU memory, and system memory, increasing the speed and effectiveness of AI workflows. The new storage SDK enables flexible inline data processing, metadata enrichment, and data management, while also providing a modular, composable approach to scaling deployment as organisational needs evolve.
This integration supports customers in unifying storage and intelligence layers for real-time data access from core to cloud environments.

On the compute front, HPE ProLiant Compute DL380a Gen12 servers topped the latest MLPerf Inference: Datacenter v5.0 benchmarks in ten tests, including the language models GPT-J and Llama2-70B and the computer vision models ResNet50 and RetinaNet. This server will soon be available with up to ten NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs, designed for intensive enterprise AI workloads such as multimodal AI inference, physical AI, and advanced design or video applications. Key features of the DL380a Gen12 include both air-cooled and direct liquid-cooled (DLC) options, advanced security with post-quantum cryptography readiness, and automated management tools for proactive system health and energy efficiency.

Additional benchmark-topping servers include the HPE ProLiant Compute DL384 Gen12 with dual-socket NVIDIA GH200 NVL2 and the HPE Cray XD670 with eight NVIDIA H200 SXM GPUs; together with the DL380a Gen12, these systems lead in more than 50 benchmark scenarios.

HPE's OpsRamp Software has also been updated to support NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs. The SaaS platform enables IT teams to observe AI infrastructure health and performance, automate workflows, and gain AI-supported analytics. Integration with NVIDIA's infrastructure ecosystem allows for detailed monitoring of GPU metrics and energy optimisation for distributed AI workloads.

Through these enhancements, HPE and NVIDIA seek to offer organisations across different sectors the tools to manage data pipelines, model training, and AI optimisation more efficiently, supporting the adoption and scaling of AI technologies in a secure and tailored manner.
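The GPU metrics referred to above, such as temperature, utilisation, memory usage and power draw, are exposed by NVIDIA's management library (NVML), which monitoring tools build on. As a rough illustration of that telemetry layer only, not of OpsRamp's own agents or APIs, the following Python sketch polls local GPUs through the pynvml bindings.

```python
# Illustrative only: a single local poll of the kind of GPU telemetry an
# observability platform such as HPE OpsRamp aggregates at fleet scale.
# Requires an NVIDIA driver and the NVML bindings (`pip install nvidia-ml-py`).
import pynvml

pynvml.nvmlInit()
try:
    for i in range(pynvml.nvmlDeviceGetCount()):
        h = pynvml.nvmlDeviceGetHandleByIndex(i)
        temp = pynvml.nvmlDeviceGetTemperature(h, pynvml.NVML_TEMPERATURE_GPU)
        util = pynvml.nvmlDeviceGetUtilizationRates(h)        # percent busy
        mem = pynvml.nvmlDeviceGetMemoryInfo(h)               # bytes
        power_w = pynvml.nvmlDeviceGetPowerUsage(h) / 1000.0  # NVML reports milliwatts
        clock = pynvml.nvmlDeviceGetClockInfo(h, pynvml.NVML_CLOCK_GRAPHICS)
        try:
            fan = f"{pynvml.nvmlDeviceGetFanSpeed(h)}%"
        except pynvml.NVMLError:                              # many data center GPUs report no fan
            fan = "n/a"
        print(f"GPU{i}: {temp}C, util {util.gpu}%, "
              f"mem {mem.used // 2**20}/{mem.total // 2**20} MiB, "
              f"{power_w:.0f} W, {clock} MHz, fan {fan}")
finally:
    pynvml.nvmlShutdown()
```

A platform such as OpsRamp collects comparable metrics continuously across a whole fleet and correlates them with workload and energy data; this snippet only performs a single local poll.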


Business Wire
19-05-2025
- Business
- Business Wire
Hewlett Packard Enterprise Deepens Integration with NVIDIA on AI Factory Portfolio
HOUSTON--(BUSINESS WIRE)-- Hewlett Packard Enterprise (NYSE: HPE) announced enhancements to the portfolio of NVIDIA AI Computing by HPE solutions that support the entire AI lifecycle and meet the unique needs of enterprises, service providers, sovereigns and research & discovery organizations. These updates deepen integrations with NVIDIA AI Enterprise by expanding support for HPE Private Cloud AI with accelerated compute and launching the HPE Alletra Storage MP X10000 software development kit (SDK) for the NVIDIA AI Data Platform. HPE is also releasing compute and software offerings with the NVIDIA RTX PRO™ 6000 Blackwell Server Edition GPU and the NVIDIA Enterprise AI Factory validated design.

"Our strong collaboration with NVIDIA continues to drive transformative outcomes for our shared customers," said Antonio Neri, president and CEO of HPE. "By co-engineering cutting-edge AI technologies elevated by HPE's robust solutions, we are empowering businesses to harness the full potential of these advancements throughout their organization, no matter where they are on their AI journey. Together, we are meeting the demands of today, while paving the way for an AI-driven future."

"Enterprises can build the most advanced NVIDIA AI factories with HPE systems to ready their IT infrastructure for the era of generative and agentic AI," said Jensen Huang, founder and CEO of NVIDIA. "Together, NVIDIA and HPE are laying the foundation for businesses to harness intelligence as a new industrial resource that scales from the data center to the cloud and the edge."

HPE Private Cloud AI adds feature branch support for NVIDIA AI Enterprise

HPE Private Cloud AI, a turnkey, cloud-based AI factory co-developed with NVIDIA, includes a dedicated developer solution that helps customers proliferate unified AI strategies across the business, enabling more profitable workloads and significantly reducing risk. To further aid AI developers, HPE Private Cloud AI will support feature branch model updates from NVIDIA AI Enterprise, which include AI frameworks, NVIDIA NIM microservices for pre-trained models, and SDKs. Feature branch model support will allow developers to test and validate software features and optimizations for AI workloads. In combination with existing support of production branch models that feature built-in guardrails, HPE Private Cloud AI will enable businesses of every size to build developer systems and scale to production-ready agentic and generative AI (GenAI) applications while adopting a safe, multi-layered approach across the enterprise. HPE Private Cloud AI, a full-stack solution for agentic and GenAI workloads, will support the NVIDIA Enterprise AI Factory validated design.

HPE's newest storage solution supports NVIDIA AI Data Platform

HPE Alletra Storage MP X10000 will introduce an SDK that works with the NVIDIA AI Data Platform reference design. Connecting HPE's newest data platform with NVIDIA's customizable reference design will offer customers accelerated performance and intelligent pipeline orchestration to enable agentic AI. A part of HPE's growing data intelligence strategy, the new X10000 SDK enables the integration of context-rich, AI-ready data directly into the NVIDIA AI ecosystem. This empowers enterprises to streamline unstructured data pipelines for ingestion, inference, training, and continuous learning across NVIDIA-accelerated infrastructure.

Primary benefits of the SDK integration include:
- Unlocking data value through flexible inline data processing, vector indexing, metadata enrichment, and data management.
- Driving efficiency with remote direct memory access (RDMA) transfers between GPU memory, system memory, and the X10000 to accelerate the data path with the NVIDIA AI Data Platform.
- Right-sizing deployments with modular, composable building blocks of the X10000, enabling customers to scale capacity and performance independently to align with workload requirements.

Customers will be able to use raw enterprise data to inform agentic AI applications and tools by seamlessly unifying storage and intelligence layers through RDMA transfers. Together, HPE is working with NVIDIA to enable a new era of real-time, intelligent data access for customers from the edge to the core to the cloud. Additional updates about this integration will be announced at HPE Discover Las Vegas 2025.
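HPE has not published the X10000 SDK's programming interface in this announcement, so the sketch below is purely conceptual: it walks through the pipeline stages named above (ingestion, metadata enrichment, vector indexing for retrieval) in plain Python, with a hypothetical embed() placeholder standing in for whatever embedding model or NIM endpoint a real deployment would call.

```python
# Conceptual sketch of the unstructured-data pipeline stages named above:
# ingestion -> metadata enrichment -> vector indexing for retrieval.
# The real X10000 SDK API is not described in this release; embed() is a
# hypothetical placeholder for an embedding model or service.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from pathlib import Path
from typing import List, Optional

@dataclass
class Record:
    path: str
    text: str
    metadata: dict = field(default_factory=dict)
    vector: Optional[List[float]] = None

def ingest(root: str) -> List[Record]:
    """Read raw unstructured documents from storage."""
    return [Record(path=str(p), text=p.read_text(errors="ignore"))
            for p in Path(root).glob("**/*.txt")]

def enrich(rec: Record) -> Record:
    """Attach simple metadata that downstream agents can filter on."""
    rec.metadata = {
        "chars": len(rec.text),
        "ingested_at": datetime.now(timezone.utc).isoformat(),
        "source": rec.path,
    }
    return rec

def embed(text: str) -> List[float]:
    """Placeholder: call an embedding model or NIM endpoint in practice."""
    return [float(len(text))]  # stand-in vector

if __name__ == "__main__":
    index = []
    for rec in map(enrich, ingest("./corpus")):
        rec.vector = embed(rec.text)
        index.append(rec)          # in practice: write to a vector store
    print(f"indexed {len(index)} documents")
```

In an actual X10000 deployment, the ingest and index steps would be handled by the SDK and the platform's RDMA data path rather than by local file reads; the sketch only shows the shape of the pipeline the release describes.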
Industry-leading AI server levels up with NVIDIA RTX PRO 6000 Blackwell support

HPE ProLiant Compute DL380a Gen12 servers featuring NVIDIA H100 NVL, H200 NVL and L40S GPUs topped the latest round of MLPerf Inference: Datacenter v5.0 benchmarks in 10 tests, including GPT-J, Llama2-70B, ResNet50 and RetinaNet. This industry-leading AI server will soon be available with up to 10 NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs, which will provide enhanced capabilities and deliver exceptional performance for enterprise AI workloads, including agentic multimodal AI inference, physical AI, model fine-tuning, as well as design, graphics and video applications. Key features include:
- Advanced cooling options: HPE ProLiant Compute DL380a Gen12 is available in both air-cooled and direct liquid-cooled (DLC) options, supported by HPE's industry-leading liquid cooling expertise, to maintain optimal performance under heavy workloads.
- Enhanced security: HPE Integrated Lights Out (iLO) 7, embedded in the HPE ProLiant Compute Gen12 portfolio, features built-in safeguards based on Silicon Root of Trust and enables the first servers with post-quantum cryptography readiness that meet the requirements for FIPS 140-3 Level 3 certification, a high-level cryptographic security standard.
- Operations management: HPE Compute Ops Management provides secure and automated lifecycle management for server environments, featuring proactive alerts and predictive AI-driven insights that inform increased energy efficiency and global system health.

Two additional servers topped MLPerf Inference v5.0 benchmarks, providing third-party validation of HPE's strong leadership in AI innovation and showcasing the superior capabilities of the HPE AI Factory. Together with the HPE ProLiant Compute DL380a Gen12, these systems lead in more than 50 scenarios. Highlights include:
- HPE ProLiant Compute DL384 Gen12 server, featuring the dual-socket NVIDIA GH200 NVL2, ranked first in four tests, including Llama2-70B and Mixtral-8x7B.
- HPE Cray XD670 server, with 8 NVIDIA H200 SXM GPUs, achieved the top ranking in 30 different scenarios, including large language models (LLMs) and computer vision tasks.

Advancing AI infrastructure with new accelerated compute optimization

HPE OpsRamp Software is expanding its AI infrastructure optimization solutions to support the upcoming NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs for AI workloads. This software-as-a-service (SaaS) solution from HPE will help enterprise IT teams streamline operations as they deploy, monitor and optimize distributed AI infrastructure across hybrid environments. HPE OpsRamp enables full-stack AI workload-to-infrastructure observability, workflow automation, as well as AI-powered analytics and event management. Deep integration with NVIDIA infrastructure – including NVIDIA accelerated computing, NVIDIA BlueField, NVIDIA Quantum InfiniBand and Spectrum-X Ethernet networking, and NVIDIA Base Command Manager – provides granular metrics to monitor the performance and resilience of AI infrastructure.

HPE OpsRamp gives IT teams the ability to:
- Observe overall health and performance of AI infrastructure by monitoring GPU temperature, utilization, memory usage, power consumption, clock speeds and fan speeds.
- Optimize job scheduling and resources by tracking GPU and CPU utilization across the clusters.
- Automate responses to certain events, for example, reducing clock speed or powering down a GPU to prevent damage.
- Predict future resource needs and optimize resource allocation by analyzing historical performance and utilization data.
- Monitor power consumption and resource utilization in order to optimize costs for large AI deployments.

Availability
- HPE Private Cloud AI will add feature branch support for NVIDIA AI Enterprise by summer.
- HPE Alletra Storage MP X10000 SDK and direct memory access to NVIDIA accelerated computing infrastructure will be available starting Summer 2025.
- HPE ProLiant Compute DL380a Gen12 with NVIDIA RTX PRO 6000 Server Edition will be available to order starting June 4, 2025.
- HPE OpsRamp Software will be time-to-market to support NVIDIA RTX PRO 6000 Server Edition.

About Hewlett Packard Enterprise

Hewlett Packard Enterprise (NYSE: HPE) is a global technology leader focused on developing intelligent solutions that allow customers to capture, analyze, and act upon data seamlessly. The company innovates across networking, hybrid cloud, and AI to help customers develop new business models, engage in new ways, and increase operational performance. For more information, visit:


TECHx
24-03-2025
- Business
- TECHx
HPE and NVIDIA Unveil Enterprise AI Solutions
HPE and NVIDIA Unveil Enterprise AI Solutions for Faster AI Deployment

Hewlett Packard Enterprise (HPE) and NVIDIA have launched new enterprise AI solutions under NVIDIA AI Computing by HPE. These solutions speed up generative, agentic, and physical AI deployment while enhancing performance, efficiency, and security. Businesses can now train, fine-tune, and deploy AI models more effectively.

Antonio Neri, President and CEO of HPE, highlighted the need for streamlined AI solutions. He said, "HPE and NVIDIA offer a complete AI portfolio that accelerates time-to-value, boosts productivity, and unlocks new revenue streams." Jensen Huang, Founder and CEO of NVIDIA, emphasized AI's transformative impact. He stated, "NVIDIA and HPE deliver the full-stack AI infrastructure enterprises need, from generative AI to robotics and digital twins."

HPE Private Cloud AI Expands with NVIDIA AI Data Platform

HPE is enhancing HPE Private Cloud AI with the NVIDIA AI Data Platform, offering businesses a faster way to harness AI-driven insights. Powered by HPE GreenLake, this cloud solution integrates NVIDIA's computing, networking, software, and storage. Using NVIDIA AI-Q Blueprints and NVIDIA NIM microservices, companies can deploy AI models quickly. The HPE Private Cloud AI Developer System further simplifies AI development with built-in NVIDIA acceleration, 32TB storage, and end-to-end AI software.

Optimized AI Workloads with HPE OpsRamp and NVIDIA GPUs

HPE OpsRamp now includes GPU optimization for large NVIDIA computing clusters. Businesses can access these capabilities via HPE Private Cloud AI or as a standalone service. This integration ensures efficient AI workload management across hybrid environments.

Advancing Agentic AI with New Solutions

HPE and NVIDIA are enabling new agentic AI applications. Deloitte's Zora AI™ for Finance, built on HPE Private Cloud AI, enhances financial reporting with interactive analysis. CrewAI streamlines agentic AI development for smarter decision-making and operations.

New AI Servers for Scalable Model Training

NVIDIA AI Computing by HPE introduces powerful AI servers, including the NVIDIA Blackwell Ultra and NVIDIA Blackwell platforms. These solutions provide high-performance AI model training and inferencing at scale. The NVIDIA GB300 NVL72 by HPE supports trillion-parameter AI training. Meanwhile, HPE ProLiant Compute XD servers with NVIDIA HGX B300 enhance AI model development. The HPE ProLiant Compute DL384b Gen12 with NVIDIA GB200 NVL4 boosts HPC and AI workloads. Additionally, the HPE ProLiant Compute DL380a Gen12 with NVIDIA RTX™ PRO 6000 Blackwell excels in AI inferencing and visual computing.

Secure, Scalable AI Infrastructure

HPE's ProLiant Gen12 servers offer top-tier security with HPE iLO technology and Secure Enclave processors. These are the first to feature post-quantum cryptography with FIPS 140-3 Level 3 certification. To support AI and HPC workloads, HPE introduces modular AI Mod POD data centers. With up to 1.5MW capacity per module, this solution ensures faster AI deployment with advanced cooling technology.
Availability and Release Timeline
- HPE Private Cloud AI Developer System: Q2 2025
- HPE Data Fabric: Q3 2025
- NVIDIA GB300 NVL72 by HPE & HPE ProLiant Compute XD: H2 2025
- HPE ProLiant Compute DL384b Gen12 with NVIDIA GB200 NVL4: Q4 2025
- HPE ProLiant Compute DL380a Gen12 with NVIDIA RTX™ PRO 6000: Q3 2025
- Pre-validated NVIDIA Blueprints: Q2 2025
- HPE OpsRamp GPU Optimization & AI Mod POD: Available now

HPE and NVIDIA continue to push AI innovation, providing enterprises with the tools to scale AI workloads efficiently.
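NVIDIA NIM microservices, mentioned above as the vehicle for quickly deploying pre-trained models, are packaged containers that typically expose an OpenAI-compatible HTTP API. Assuming a NIM LLM container is already running locally on its default port 8000 (the model name below is a placeholder for whichever NIM is deployed), a minimal client call looks roughly like this:

```python
# Minimal sketch: querying a locally running NVIDIA NIM LLM microservice
# through its OpenAI-compatible endpoint. Assumes a NIM container is already
# serving on localhost:8000; the model identifier is a placeholder.
# Requires `pip install requests`.
import requests

payload = {
    "model": "my-deployed-nim-model",   # placeholder model identifier
    "messages": [
        {"role": "user", "content": "Summarise this quarter's top support issues."}
    ],
    "max_tokens": 256,
}

resp = requests.post(
    "http://localhost:8000/v1/chat/completions",
    json=payload,
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```

Because the endpoint follows the OpenAI API shape, client code written against one deployment can generally be pointed at another by changing the base URL and model name.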


Zawya
24-03-2025
- Business
- Zawya
Hewlett Packard Enterprise introduces new enterprise AI solutions with NVIDIA
NVIDIA AI Computing by HPE unveils new enterprise AI solutions for training, tuning and inferencing with improved performance, security and power efficiency

Dubai, United Arab Emirates – NVIDIA GTC 2025 – Hewlett Packard Enterprise (NYSE: HPE) and NVIDIA today announced new enterprise AI solutions from NVIDIA AI Computing by HPE that accelerate the time to value for customers deploying generative, agentic and physical AI. NVIDIA AI Computing by HPE unveils new enterprise AI solutions with enhanced performance, power efficiency, security, and new capabilities for a full-stack, turnkey private cloud for AI. These solutions support enterprises of all sizes in effectively training, tuning, or inferencing their AI models.

"AI is delivering significant opportunity for enterprises, and requires a portfolio of streamlined and integrated solutions to support widespread adoption," said Antonio Neri, president and CEO of HPE. "To fully harness the power of AI, HPE and NVIDIA bring to market a comprehensive portfolio of AI solutions that accelerate the time to value for enterprises to enhance productivity and generate new revenue streams."

"AI is transforming every industry, and enterprises are racing to build AI factories to produce intelligence," said Jensen Huang, founder and CEO of NVIDIA. "NVIDIA and HPE are delivering the full-stack infrastructure companies need to develop, train, and deploy AI at scale—from generative and agentic AI to robotics and digital twins. This collaboration accelerates the AI-driven transformation of business and unlocks new levels of productivity and innovation."

HPE Private Cloud AI delivers a turnkey approach for agentic data and applications

HPE is expanding HPE Private Cloud AI with support for the new NVIDIA AI Data Platform. Together, HPE Private Cloud AI and the NVIDIA AI Data Platform will offer enterprises the fastest path to unlock the full value of their business data to fuel AI-driven actions. HPE Private Cloud AI offers a self-service cloud experience enabled by HPE GreenLake cloud. Building on the NVIDIA AI Data Platform, it transforms data into actionable intelligence through continuous data processing that leverages NVIDIA's accelerated computing, networking, AI software and enterprise storage—all of which exist today in HPE Private Cloud AI. Through continuous co-development between HPE and NVIDIA, HPE Private Cloud AI is uniquely designed to deliver the fastest deployment of blueprints and models that underpin the NVIDIA AI Data Platform, such as NVIDIA AI-Q Blueprints and NVIDIA NIM microservices for NVIDIA Llama Nemotron models with reasoning capabilities.

Additional updates for HPE Private Cloud AI include:
- New HPE Private Cloud AI developer system: A new developer system adds an instant AI development environment to the HPE Private Cloud AI portfolio. Powered by NVIDIA accelerated computing, it includes an integrated control node, end-to-end AI software and 32TB of integrated storage.
- Unified, seamless edge-to-cloud data access: HPE Data Fabric Software is the backbone of the HPE Private Cloud AI data lakehouse and a new unified data layer from HPE. HPE Data Fabric ensures AI models are consistently supplied with optimized, high-quality structured, unstructured and streaming data across hybrid cloud environments.
- Accelerated time to value with pre-validated NVIDIA Blueprints: HPE Private Cloud AI now supports rapid deployment of NVIDIA blueprints, enabling instant productivity from NVIDIA's extensive library of agentic and physical AI applications. With pre-validated blueprints including the Multimodal PDF Data Extraction Blueprint and Digital Twins Blueprint, HPE Private Cloud AI simplifies complex AI workloads, providing increased performance and faster time to value.

New AI-native, full-stack observability through HPE OpsRamp

HPE OpsRamp now offers GPU optimization capabilities, including observability of AI-native software stacks, delivering full-stack observability to manage the performance of training and inference workloads running on large NVIDIA accelerated computing clusters. The new GPU optimization capability is available through HPE Private Cloud AI and standalone for extending across large clusters. The new GPU optimization capabilities are also available as a new day 2 operational service for AI delivered by HPE. The offering combines HPE Complete Care Service with NVIDIA GPU optimization capabilities to enable IT operations to transform the way they manage, optimize and proactively troubleshoot AI workloads across hybrid deployments.

New agentic AI use cases and services
- Deploying Zora AI™ by Deloitte with HPE Private Cloud AI: Deloitte's Zora AI™ for Finance on HPE Private Cloud AI is a new joint solution that will be available to customers worldwide. HPE will be the first customer to deploy the agentic AI platform that reimagines static executive reporting to become a dynamic, on-demand and interactive experience. The specific use cases include financial statement analysis, scenario modeling, and competitive and market analysis.
- CrewAI joins the Unleash AI program to deliver multi-agent automation: CrewAI empowers enterprises to rapidly build agentic AI solutions to drive efficiency, adaptability, and smarter decision-making across teams. Combined with HPE Private Cloud AI, the solution can securely deploy and scale agent-driven automations tailored for specific business needs.
- HPE adds professional services for agentic AI: New services can identify, build and deploy agentic AI models across a variety of business processes to enhance the speed to business value by leveraging NVIDIA NIM microservices and NVIDIA NeMo, both part of the NVIDIA AI Enterprise software platform, with HPE Private Cloud AI.

New HPE AI servers with NVIDIA Blackwell Ultra and Blackwell Architecture

NVIDIA AI Computing by HPE delivers the latest AI servers to support the full range of AI model training, fine-tuning, and inferencing with the NVIDIA Blackwell Ultra and NVIDIA Blackwell platforms. Each of these AI servers can be deployed with NVIDIA accelerated computing, networking and NVIDIA AI Enterprise software to ensure optimal performance, efficiency, reliability, and scalability for the next era of AI.
- NVIDIA GB300 NVL72 by HPE will support service providers and cutting-edge enterprises in deploying large, complex AI clusters capable of training trillion-parameter models, together with HPE liquid cooling expertise. NVIDIA GB300 NVL72 offers breakthrough performance with optimized compute, increased memory, and high-performance networking for AI reasoning, agentic AI, and video inference applications.
- HPE ProLiant Compute XD servers will support the new NVIDIA HGX B300 platform for customers looking to train, fine-tune and run large AI models for the most complex workloads, including agentic AI and test-time reasoning inference.
- HPE ProLiant Compute DL384b Gen12 with the NVIDIA GB200 Grace Blackwell NVL4 Superchip provides customers with revolutionary performance for converged HPC and AI workloads, including scientific computing, graph neural network (GNN) training, and AI inference applications.
- HPE ProLiant Compute DL380a Gen12 with the new NVIDIA RTX™ PRO 6000 Blackwell Server Edition is a PCIe-based data center solution that delivers breakthrough performance for a wide range of enterprise AI inferencing and visual computing workloads.

Full Lifecycle Security in the new HPE ProLiant Gen12 servers

HPE iLO safeguards every phase of the server lifecycle with industry-leading silicon root of trust. The latest HPE ProLiant Compute Gen12 portfolio sets a new standard for enterprise security with an enhanced and dedicated security processor, called secure enclave, that establishes an unbreakable chain of trust to protect against firmware attacks and creates full line-of-sight from the factory and throughout HPE's trusted supply chain. HPE iLO 7 delivers the first server with post-quantum cryptography that meets the requirements for FIPS 140-3 Level 3 certification, a high-level cryptographic security standard.

Modular and power efficient data centers for AI
- Five Decades of Liquid Cooling Expertise: For five decades and counting, HPE has been helping customers address the escalating power requirements and data center density dynamics of data-intensive workloads like AI and HPC. HPE has a long history of designing, building and managing complex liquid-cooled environments. This expertise has delivered eight of the top 15 supercomputers on the Green500 list, which ranks the world's most energy-efficient supercomputers.
- Modular, performance-optimized data center for AI and HPC workloads: HPE Data Center Services - AI Mod POD is a modular, performance-optimized data center for AI and HPC workloads, which can include NVIDIA accelerated compute. This modular and cost-effective data center supports up to 1.5MW per module and can be delivered with speed to drastically reduce time to market. HPE's AI Mod POD supports HPE's AI and HPC servers and HPE Private Cloud AI. It offers HPE's patented Adaptive Cascade Cooling technology, a single hybrid system that supports air cooling, 100% liquid cooling, and hybrid liquid cooling to address the energy demands of data-intensive workloads.

Availability
- HPE Private Cloud AI developer system is expected to be generally available in the second quarter of 2025.
- HPE Data Fabric within HPE Private Cloud AI is expected to be generally available in the third quarter of 2025.
- NVIDIA GB300 NVL72 by HPE and HPE ProLiant Compute XD with NVIDIA HGX B300 are expected to be available in the second half of 2025.
- HPE ProLiant Compute DL384b Gen12 with NVIDIA GB200 NVL4 is expected to be generally available in the fourth quarter of 2025.
- HPE ProLiant Compute DL380a Gen12 with NVIDIA RTX™ PRO 6000 Blackwell Server Edition is expected to be generally available in the third quarter of 2025.
- Support for additional pre-validated NVIDIA Blueprints, including the Multimodal PDF Data Extraction Blueprint and Digital Twins Blueprint, is expected to be generally available in the second quarter of 2025.
- HPE OpsRamp GPU optimization and AI Mod POD are available today.
About Hewlett Packard Enterprise

Hewlett Packard Enterprise (NYSE: HPE) is a global technology leader focused on developing intelligent solutions that allow customers to capture, analyze, and act upon data seamlessly. The company innovates across networking, hybrid cloud, and AI to help customers develop new business models, engage in new ways, and increase operational performance. For more information, visit:

Media Contacts: Ronak Thakkar, Associate Director, FleishmanHillard


Channel Post MEA
20-03-2025
- Business
- Channel Post MEA
Hewlett Packard Enterprise Introduces New Enterprise AI Solutions with NVIDIA
Hewlett Packard Enterprise and NVIDIA today announced new enterprise AI solutions from NVIDIA AI Computing by HPE that accelerate the time to value for customers deploying generative, agentic and physical AI. NVIDIA AI Computing by HPE unveils new enterprise AI solutions with enhanced performance, power efficiency, security, and new capabilities for a full-stack, turnkey private cloud for AI. These solutions support enterprises of all sizes in effectively training, tuning, or inferencing their AI models.

"AI is delivering significant opportunity for enterprises, and requires a portfolio of streamlined and integrated solutions to support widespread adoption," said Antonio Neri, president and CEO of HPE. "To fully harness the power of AI, HPE and NVIDIA bring to market a comprehensive portfolio of AI solutions that accelerate the time to value for enterprises to enhance productivity and generate new revenue streams."

"AI is transforming every industry, and enterprises are racing to build AI factories to produce intelligence," said Jensen Huang, founder and CEO of NVIDIA. "NVIDIA and HPE are delivering the full-stack infrastructure companies need to develop, train, and deploy AI at scale—from generative and agentic AI to robotics and digital twins. This collaboration accelerates the AI-driven transformation of business and unlocks new levels of productivity and innovation."

HPE Private Cloud AI delivers a turnkey approach for agentic data and applications

HPE is expanding HPE Private Cloud AI with support for the new NVIDIA AI Data Platform. Together, HPE Private Cloud AI and the NVIDIA AI Data Platform will offer enterprises the fastest path to unlock the full value of their business data to fuel AI-driven actions. HPE Private Cloud AI offers a self-service cloud experience enabled by HPE GreenLake cloud. Building on the NVIDIA AI Data Platform, it transforms data into actionable intelligence through continuous data processing that leverages NVIDIA's accelerated computing, networking, AI software and enterprise storage—all of which exist today in HPE Private Cloud AI. Through continuous co-development between HPE and NVIDIA, HPE Private Cloud AI is uniquely designed to deliver the fastest deployment of blueprints and models that underpin the NVIDIA AI Data Platform, such as NVIDIA AI-Q Blueprints and NVIDIA NIM microservices for NVIDIA Llama Nemotron models with reasoning capabilities.

Additional updates for HPE Private Cloud AI include:
- New HPE Private Cloud AI developer system: A new developer system adds an instant AI development environment to the HPE Private Cloud AI portfolio. Powered by NVIDIA accelerated computing, it includes an integrated control node, end-to-end AI software and 32TB of integrated storage.
- Unified, seamless edge-to-cloud data access: HPE Data Fabric Software is the backbone of the HPE Private Cloud AI data lakehouse and a new unified data layer from HPE. HPE Data Fabric ensures AI models are consistently supplied with optimized, high-quality structured, unstructured and streaming data across hybrid cloud environments.
- Accelerated time to value with pre-validated NVIDIA Blueprints: HPE Private Cloud AI now supports rapid deployment of NVIDIA blueprints, enabling instant productivity from NVIDIA's extensive library of agentic and physical AI applications. With pre-validated blueprints including the Multimodal PDF Data Extraction Blueprint and Digital Twins Blueprint, HPE Private Cloud AI simplifies complex AI workloads, providing increased performance and faster time to value.

New AI-native, full-stack observability through HPE OpsRamp

HPE OpsRamp now offers GPU optimization capabilities, including observability of AI-native software stacks, delivering full-stack observability to manage the performance of training and inference workloads running on large NVIDIA accelerated computing clusters. The new GPU optimization capability is available through HPE Private Cloud AI and standalone for extending across large clusters. The new GPU optimization capabilities are also available as a new day 2 operational service for AI delivered by HPE. The offering combines HPE Complete Care Service with NVIDIA GPU optimization capabilities to enable IT operations to transform the way they manage, optimize and proactively troubleshoot AI workloads across hybrid deployments.

New agentic AI use cases and services
- Deploying Zora AI by Deloitte with HPE Private Cloud AI: Deloitte's Zora AI for Finance on HPE Private Cloud AI is a new joint solution that will be available to customers worldwide. HPE will be the first customer to deploy the agentic AI platform that reimagines static executive reporting to become a dynamic, on-demand and interactive experience. The specific use cases include financial statement analysis, scenario modeling, and competitive and market analysis.
- CrewAI joins the Unleash AI program to deliver multi-agent automation: CrewAI empowers enterprises to rapidly build agentic AI solutions to drive efficiency, adaptability, and smarter decision-making across teams. Combined with HPE Private Cloud AI, the solution can securely deploy and scale agent-driven automations tailored for specific business needs.
- HPE adds professional services for agentic AI: New services can identify, build and deploy agentic AI models across a variety of business processes to enhance the speed to business value by leveraging NVIDIA NIM microservices and NVIDIA NeMo, both part of the NVIDIA AI Enterprise software platform, with HPE Private Cloud AI.
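CrewAI, referenced above as part of the Unleash AI program, is an open-source Python framework for composing multi-agent workflows. The sketch below is a generic two-agent example of that pattern, not an HPE- or NVIDIA-specific integration; roles, goals and task text are illustrative, and it assumes an LLM backend is already configured in the environment.

```python
# Generic CrewAI sketch of a two-agent workflow (analyst -> writer).
# Illustrative only; assumes `pip install crewai` and an LLM configured via
# the environment (e.g. an OpenAI-compatible endpoint such as a NIM service).
from crewai import Agent, Task, Crew

analyst = Agent(
    role="Financial analyst",
    goal="Collect the key figures needed for a quarterly summary",
    backstory="You pull numbers from internal reports and flag anomalies.",
)

writer = Agent(
    role="Report writer",
    goal="Turn the analyst's findings into a short executive summary",
    backstory="You write concise, plain-language briefings for executives.",
)

analyse = Task(
    description="List the three most important revenue trends this quarter.",
    expected_output="Three bullet points with one figure each.",
    agent=analyst,
)

summarise = Task(
    description="Write a five-sentence executive summary of the findings.",
    expected_output="A short paragraph suitable for an email.",
    agent=writer,
)

crew = Crew(agents=[analyst, writer], tasks=[analyse, summarise])
result = crew.kickoff()   # runs the tasks in order and returns the final output
print(result)
```

In a Private Cloud AI context, the model behind these agents would typically be served on the platform itself, but that wiring is deployment-specific and omitted here.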
New HPE AI servers with NVIDIA Blackwell Ultra and Blackwell Architecture

NVIDIA AI Computing by HPE delivers the latest AI servers to support the full range of AI model training, fine-tuning, and inferencing with the NVIDIA Blackwell Ultra and NVIDIA Blackwell platforms. Each of these AI servers can be deployed with NVIDIA accelerated computing, networking and NVIDIA AI Enterprise software to ensure optimal performance, efficiency, reliability, and scalability for the next era of AI.
- NVIDIA GB300 NVL72 by HPE will support service providers and cutting-edge enterprises in deploying large, complex AI clusters capable of training trillion-parameter models, together with HPE liquid cooling expertise. NVIDIA GB300 NVL72 offers breakthrough performance with optimized compute, increased memory, and high-performance networking for AI reasoning, agentic AI, and video inference applications.
- HPE ProLiant Compute XD servers will support the new NVIDIA HGX B300 platform for customers looking to train, fine-tune and run large AI models for the most complex workloads, including agentic AI and test-time reasoning inference.
- HPE ProLiant Compute DL384b Gen12 with the NVIDIA GB200 Grace Blackwell NVL4 Superchip provides customers with revolutionary performance for converged HPC and AI workloads, including scientific computing, graph neural network (GNN) training, and AI inference applications.
- HPE ProLiant Compute DL380a Gen12 with the new NVIDIA RTX PRO 6000 Blackwell Server Edition is a PCIe-based data center solution that delivers breakthrough performance for a wide range of enterprise AI inferencing and visual computing workloads.

Full Lifecycle Security in the new HPE ProLiant Gen12 servers

HPE iLO safeguards every phase of the server lifecycle with industry-leading silicon root of trust. The latest HPE ProLiant Compute Gen12 portfolio sets a new standard for enterprise security with an enhanced and dedicated security processor, called secure enclave, that establishes an unbreakable chain of trust to protect against firmware attacks and creates full line-of-sight from the factory and throughout HPE's trusted supply chain. HPE iLO 7 delivers the first server with post-quantum cryptography that meets the requirements for FIPS 140-3 Level 3 certification, a high-level cryptographic security standard.

Modular and power efficient data centers for AI
- Five Decades of Liquid Cooling Expertise: For five decades and counting, HPE has been helping customers address the escalating power requirements and data center density dynamics of data-intensive workloads like AI and HPC. HPE has a long history of designing, building and managing complex liquid-cooled environments. This expertise has delivered eight of the top 15 supercomputers on the Green500 list, which ranks the world's most energy-efficient supercomputers.
- Modular, performance-optimized data center for AI and HPC workloads: HPE Data Center Services – AI Mod POD is a modular, performance-optimized data center for AI and HPC workloads, which can include NVIDIA accelerated compute. This modular and cost-effective data center supports up to 1.5MW per module and can be delivered with speed to drastically reduce time to market. HPE's AI Mod POD supports HPE's AI and HPC servers and HPE Private Cloud AI. It offers HPE's patented Adaptive Cascade Cooling technology, a single hybrid system that supports air cooling, 100% liquid cooling, and hybrid liquid cooling to address the energy demands of data-intensive workloads.