

HPE & NVIDIA unveil next-gen AI factory solutions for enterprise

Techday NZ

18 hours ago



HPE has announced a suite of new AI factory solutions developed in partnership with NVIDIA, aimed at accelerating the adoption of artificial intelligence technologies across enterprises, service providers, government entities and AI developers worldwide.

Expanding AI capabilities

The company is expanding its NVIDIA AI Computing by HPE portfolio to incorporate systems built on NVIDIA Blackwell GPUs. These include composable and modular solutions designed for diverse customers, such as sovereign bodies and model builders, alongside the next generation of HPE Private Cloud AI—a turnkey AI factory for enterprises. These offerings are intended to simplify the transition to AI-driven operations by removing the complexity involved in assembling AI-optimised technology infrastructure.

Antonio Neri, HPE's President and Chief Executive Officer, emphasised the important role that infrastructure plays in enabling AI adoption. In his words: "Generative, agentic and physical AI have the potential to transform global productivity and create lasting societal change, but AI is only as good as the infrastructure and data behind it."

"Organisations need the data, intelligence and vision to capture the AI opportunity and this makes getting the right IT foundation essential. HPE and NVIDIA are delivering the most comprehensive approach, joining industry-leading AI infrastructure and services to enable organisations to realise their ambitions and deliver sustainable business value."

NVIDIA Founder and CEO, Jensen Huang, commented on the partnership, stating: "We are entering a new industrial era — one defined by the ability to generate intelligence at scale. Together, HPE and NVIDIA are delivering full-stack AI factory infrastructure to drive this transformation, empowering enterprises to harness their data and accelerate innovation with unprecedented speed and precision."

Next-generation Private Cloud AI

The refreshed HPE Private Cloud AI incorporates NVIDIA Blackwell accelerated computing within the HPE ProLiant Compute Gen12 servers, which have demonstrated top performance in 23 AI benchmarking tests. New features include secure enclaves, post-quantum cryptography, and a trusted supply chain. The system is designed to provide seamless scalability across GPU generations and supports varied workloads, including agentic and physical AI use cases.

The Private Cloud AI offering now features federated architecture, air-gapped management for organisations needing enhanced data privacy, multi-tenancy for team collaboration, and access to the latest NVIDIA AI Blueprints. A "try and buy" programme allows potential customers to test the solution in Equinix's network of high-performance data centres prior to purchase.

Broader AI factory portfolio

HPE's new AI factory solutions integrate its established technologies, such as liquid cooling and HPE Morpheus Enterprise Software, into validated, end-to-end modular systems. These composable solutions are configured before delivery, reducing deployment time for customers. HPE OpsRamp Software is used to provide full-stack observability and is validated for use with the NVIDIA Enterprise AI Factory. Additional offers include scaled-up AI factories for service providers and model builders, enhanced solutions for sovereign governments with air-gapped management, and dedicated services supporting operational and data sovereignty.
Through adoption of the NVIDIA Enterprise AI Factory validated design, these HPE solutions can be deployed with the latest advancements in accelerated computing, Ethernet networking, and enterprise AI software, ensuring robust performance and scalability.

AI-ready infrastructure

The newly announced HPE Compute XD690 supports eight NVIDIA Blackwell Ultra GPUs and provides advanced systems management and alerting across extensive AI environments. The HPE Alletra Storage MP X10000, supporting Model Context Protocol (MCP), delivers AI-ready unstructured data to applications and supports the NVIDIA AI Data Platform reference design for efficient data handling.

Expanding use cases and partnerships

HPE's "Unleash AI" ecosystem has grown to include 26 new partners, supporting more than 75 use cases such as agentic AI, smart cities, manufacturing, data governance, video analytics, and cybersecurity. In collaboration with Accenture, HPE is also developing agentic AI solutions focused on financial services and procurement, using the Accenture AI Refinery platform on HPE Private Cloud AI. Accenture's solution is currently being used within HPE's own finance organisation, aimed at exploring applications in category and sourcing strategies, spend management, strategic relationship analysis and contract obligation management.

New service initiatives

HPE has introduced new services that cover all stages of AI factory development, including design, financing, deployment, education, management, and ongoing support. The company's financial services arm provides flexible payment schemes for AI projects, including reduced initial payments for Private Cloud AI and programmes that leverage existing technological assets to fund AI investments.

GreenLake Intelligence advances hybrid IT

Alongside AI factory developments, HPE unveiled GreenLake Intelligence—a framework that incorporates agentic AI capabilities for hybrid IT operations. GreenLake Intelligence aims to simplify hybrid IT management by delivering agentic AIOps across storage, networking, compute and cloud resources. The GreenLake Copilot will serve as an interface for managing IT environments with AI agents that operate with context and real-time reasoning.

Antonio Neri outlined the company's strategy for hybrid IT, saying: "HPE is reimagining hybrid IT as only we can do, catapulting organisations from the era of hybrid complexity to the era of agentic-AI-powered cloud operations. HPE's new vision for hybrid IT is fueled by agentic intelligence at every layer of infrastructure, so enterprises can realise their boldest ambitions and achieve previously impossible levels of IT operations performance and efficiency."

Agentic automation and sustainability

The new agentic mesh technology in HPE Aruba Networking Central uses network-specific reasoning agents to analyse and remediate network and security issues. The expanded OpsRamp operations copilot adds agentic automation for IT operations, enabling capabilities such as root-cause analysis, explainability, and capacity planning across infrastructure domains. HPE Alletra Storage MP X10000 is previewing native support for Model Context Protocol servers, further automating data management. Additionally, GreenLake Intelligence includes services for FinOps, sustainability forecasting, workload planning, and proactive infrastructure optimisation to address cost and carbon footprint concerns.
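Model Context Protocol (MCP), mentioned above in connection with the Alletra Storage MP X10000, is an open protocol that lets AI agents discover and call tools exposed by a server. As a minimal, hedged sketch of what talking to an MCP server generally looks like using the open-source MCP Python SDK (the server command and tool name below are hypothetical placeholders and do not represent HPE's implementation), a client might do the following:

```python
# Minimal sketch of an MCP client session using the open-source MCP Python SDK.
# The server command and tool name are hypothetical placeholders and do not
# represent HPE's implementation.
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

# Hypothetical MCP server exposing storage/data tools, launched as a subprocess.
server = StdioServerParameters(command="python", args=["storage_mcp_server.py"])

async def main() -> None:
    async with stdio_client(server) as (read_stream, write_stream):
        async with ClientSession(read_stream, write_stream) as session:
            await session.initialize()

            # Discover what the server offers, then call one tool.
            tools = await session.list_tools()
            print([tool.name for tool in tools.tools])

            result = await session.call_tool(
                "search_unstructured_data",          # hypothetical tool name
                arguments={"query": "sensor logs"},
            )
            print(result.content)

asyncio.run(main())
```

The same discover-and-call pattern applies regardless of which server sits on the other end, which is presumably what MCP support in a storage product is meant to enable for AI agents.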
Cloud operations and modernisation

The HPE CloudOps Software suite unifies automation, orchestration, governance, and cyber resiliency across multi-cloud and multi-vendor environments. HPE supports customers with enhanced services for adoption and provides tools such as CloudPhysics Plus for infrastructure and workload assessment. The new HPE Cloud Commit programme gives customers flexible purchasing models with potential discounts on long-term investments. HPE Financial Services supports customers with zero percent financing on select software solutions and savings for storage portfolio purchases, further offering lifecycle services for responsible, sustainable technology decommissioning.

Oracle And NVIDIA Collaborate On Agentic AI

Channel Post MEA

20-03-2025



Oracle and NVIDIA have announced an integration of NVIDIA accelerated computing and inference software with Oracle's AI infrastructure and generative AI services, to help organizations globally speed the creation of agentic AI applications. The new integration between Oracle Cloud Infrastructure (OCI) and the NVIDIA AI Enterprise software platform will make 160+ AI tools and 100+ NVIDIA NIM microservices natively available through the OCI Console. In addition, Oracle and NVIDIA are collaborating on the no-code deployment of both Oracle and NVIDIA AI Blueprints and on accelerating AI vector search in Oracle Database 23ai with the NVIDIA cuVS library.

"Oracle has become the platform of choice for both AI training and inferencing, and this partnership enhances our ability to help customers achieve greater innovation and business results," said Safra Catz, CEO of Oracle. "NVIDIA's offerings, paired with OCI's flexibility, scalability, performance and security, will speed AI adoption and help customers get more value from their data."

"Oracle and NVIDIA are perfect partners for the age of reasoning — an AI and accelerated computing company working with a key player in processing much of the world's enterprise data," said Jensen Huang, founder and CEO of NVIDIA. "Together, we help enterprises innovate with agentic AI to deliver amazing things for their customers and partners."

Purpose-Built Solutions to Meet Enterprise AI Needs

Reducing the time it takes to deploy reasoning models, NVIDIA AI Enterprise will be natively available through the OCI Console, enabling customers to quickly and easily access AI tools including NVIDIA NIM — a set of 100+ optimized, cloud-native inference microservices for leading AI models, including the latest NVIDIA Llama Nemotron models for advanced AI reasoning. NVIDIA AI Enterprise will be available as a deployment image for OCI bare-metal instances and Kubernetes clusters using OCI Kubernetes Engine. OCI Console customers benefit from direct billing and customer support through Oracle.

Organizations can deploy OCI's 150+ AI and cloud services with NVIDIA accelerated computing and NVIDIA AI Enterprise in the data center, the public cloud or at the edge. This offering provides an integrated AI stack to help address data privacy, sovereign AI and low-latency requirements.

Biotechnology company Soley Therapeutics is deploying OCI AI Infrastructure, NVIDIA AI Enterprise and NVIDIA Blackwell GPUs to build its AI drug discovery platform to unlock possible treatments for complex diseases by capturing, decoding and interpreting cellular language to forecast cell fate.

"We believe in the potential of AI in developing new solutions that can help deliver treatments for cancer and other complex diseases," said Yerem Yeghiazarians, cofounder and CEO of Soley Therapeutics. "The combination of OCI and NVIDIA delivers a full-stack AI solution, providing us the storage, compute, software tools and support necessary to innovate faster with petabytes of data in developing our AI drug discovery platform."

AI Deployment at Scale With Tailored Blueprints

OCI AI Blueprints provide no-code deployment recipes that enable customers to quickly run AI workloads without having to make decisions about the software stack or manually provision the infrastructure. The blueprints offer clear hardware recommendations for NVIDIA GPUs, NIM microservices and prepackaged observability tools, helping enterprises accelerate their AI projects from weeks to minutes.
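The NIM microservices referenced above expose an OpenAI-compatible HTTP API, so once an endpoint is running, whether stood up through the OCI Console or via a blueprint, it can be exercised with standard client libraries. The sketch below is illustrative only: the base URL, API key handling and model name are hypothetical placeholders, not details from the announcement.

```python
# Minimal sketch of calling a deployed NIM inference endpoint.
# NIM services expose an OpenAI-compatible API; the base_url, API key and
# model name below are hypothetical placeholders, not values from the article.
from openai import OpenAI

client = OpenAI(
    base_url="https://nim.example-oci-endpoint.com/v1",  # hypothetical endpoint
    api_key="YOUR_API_KEY",                              # supplied by your deployment
)

response = client.chat.completions.create(
    model="meta/llama-3.1-8b-instruct",  # example NIM model identifier
    messages=[
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Summarize why GPU-backed inference matters."},
    ],
    max_tokens=128,
)

print(response.choices[0].message.content)
```

Because the interface is OpenAI-compatible, existing client code can typically be repointed at a NIM endpoint by changing only the base URL and credentials.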
In addition, NVIDIA Blueprints provide developers with a unified experience across the NVIDIA stack, providing reference workflows for enterprise AI use cases. Using NVIDIA Blueprints, organizations can build and operationalize custom AI applications with NVIDIA AI Enterprise and NVIDIA Omniverse software, application programming interfaces and microservices. For example, developers can begin with an NVIDIA AI Blueprint for a customer service AI assistant and customize it for their own use.

To simplify the development, deployment and scale-out of advanced physical AI and simulation applications and workflows, the NVIDIA Omniverse platform, NVIDIA Isaac Sim development workstations and Omniverse Kit App Streaming are expected to be available on Oracle Cloud Infrastructure Marketplace later this year, preconfigured with bare-metal compute instances accelerated by NVIDIA L40S GPUs.

Pipefy, an AI-powered automation platform for business process management, uses an inference blueprint for document preprocessing and image processing. "We embraced OCI AI Blueprints to spin up NVIDIA GPU nodes and deploy multimodal large language models quickly for document- and image-processing use cases," said Gabriel Custodio, principal software engineer at Pipefy. "Using these prepackaged and verified blueprints, deploying our AI models on OCI is now fully automated and significantly faster."

Real-Time AI Inference With NVIDIA NIM in OCI Data Science

To further accelerate enterprise AI adoption and help enable quick AI deployments with minimal setup, data scientists can access pre-optimized NVIDIA NIM microservices directly in OCI Data Science. This supports real-time AI inference use cases without the complexity of managing infrastructure. To help maintain data security and compliance, the models run in the customer's OCI tenancy. Customers can purchase the models through a flexible, pay-as-you-go, hourly pricing model or apply their Oracle Universal Credits.

Organizations can use this integration to deploy inference endpoints with preconfigured, optimized NIM inference engines in minutes, rapidly accelerating time to value for use cases such as AI-powered assistants, real-time recommendation engines and copilots. In addition, this allows customers to start using the integration for smaller workloads and seamlessly scale to enterprise-wide deployments.

NVIDIA Accelerated Computing Platform Turbocharges AI Vector Search in Oracle Database 23ai

Oracle and NVIDIA are working together to accelerate the creation of vector embeddings and vector indexes — compute-intensive portions of AI Vector Search workloads in Oracle Database 23ai — using NVIDIA GPUs and NVIDIA cuVS. Organizations can enable vector embedding through bulk vectorization of large volumes of input data such as text, images and videos, as well as the fast creation and maintenance of vector indexes. With NVIDIA-accelerated AI Vector Search, Oracle Database customers can significantly improve the performance of their AI pipelines to help support high-volume AI vector workloads.

DeweyVision provides advanced computer vision and artificial intelligence capabilities to turn media into data, making it accessible, searchable, discoverable, retrievable and actionable. DeweyVision uses Oracle Database 23ai on Oracle Autonomous Database for its AI-powered, no-code warehousing tools.
These tools enable production professionals to connect their workflows and edit video footage quickly by cataloging footage in minutes and providing intuitive search capabilities.

"Oracle Database 23ai with AI Vector Search can significantly increase Dewey's search performance while increasing the scalability of the DeweyVision platform," said Majid Bemanian, CEO of DeweyVision. "Using NVIDIA GPUs to create the vector embeddings that we load into Oracle Database accelerates our platform's ingestion of new data, while Autonomous Database and the converged capabilities of Oracle Database 23ai will help reduce our operational costs as we grow and open new opportunities. We believe that the combination of DeweyVision, Oracle Database 23ai and NVIDIA GPUs running in OCI will help us achieve our goal of becoming Hollywood's data warehouse."

NVIDIA Blackwell on OCI Enables AI Anywhere

Oracle and NVIDIA continue to evolve AI infrastructure with new NVIDIA GPU types across OCI's public regions, government clouds, sovereign clouds, OCI Dedicated Region, Oracle Alloy, OCI Compute Cloud@Customer and OCI Roving Edge Devices. This includes NVIDIA Quantum-2 InfiniBand cluster network environments, NVIDIA Spectrum Ethernet switches and optimized NVIDIA NVLink and NVLink Switch functionality for some of the largest AI superclusters in the market.

OCI will offer NVIDIA GB200 NVL72 systems on OCI Supercluster — generally available soon with up to 131,072 NVIDIA GPUs — and is taking orders for one of the largest AI supercomputers in the cloud with NVIDIA Blackwell Ultra GPUs. OCI will be among the first cloud service providers to offer the next generation of the NVIDIA Blackwell accelerated computing platform. Built on the groundbreaking Blackwell architecture introduced a year ago, Blackwell Ultra includes the NVIDIA GB300 NVL72 rack-scale solution and the NVIDIA HGX B300 NVL16 system. The GB300 NVL72 delivers 1.5x more AI performance than the NVIDIA GB200 NVL72 and increases Blackwell's revenue opportunity by 50x for AI factories, compared with those built with NVIDIA Hopper.

SoundHound, a global leader in conversational intelligence, offers voice and conversational AI solutions, powering voice-related experiences in millions of products from global brands. Its voice AI platform runs on OCI, processing billions of queries annually, and uses NVIDIA GPUs to provide customers with fast and accurate voice services.

"SoundHound has developed a long-term relationship with OCI, and we believe our ongoing collaboration will play a key role in supporting future growth," said James Hom, chief product officer of SoundHound AI. "NVIDIA GPUs will greatly accelerate training for our next generation of voice AI."
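As a concrete illustration of the AI Vector Search workload discussed earlier, the following is a minimal sketch of a similarity query against Oracle Database 23ai using the python-oracledb driver. The connection details, table and column names, and the query embedding are hypothetical placeholders; building the vector embeddings and the vector index (the compute-intensive steps the cuVS collaboration targets) is assumed to have already happened.

```python
# Minimal sketch of an AI Vector Search similarity query in Oracle Database 23ai,
# using the python-oracledb driver. Table/column names, credentials and the query
# embedding are hypothetical placeholders; the vector index and embeddings are
# assumed to exist already on the database side.
import array
import oracledb

conn = oracledb.connect(user="demo", password="demo_pw", dsn="localhost/FREEPDB1")
cur = conn.cursor()

# A query embedding produced by whatever embedding model the application uses.
query_vec = array.array("f", [0.12, -0.03, 0.44, 0.08])  # toy 4-dimensional example

# Rank rows in a hypothetical MEDIA_ASSETS table by cosine distance to the query.
cur.execute(
    """
    SELECT asset_id, title
      FROM media_assets
     ORDER BY VECTOR_DISTANCE(embedding, :qv, COSINE)
     FETCH FIRST 5 ROWS ONLY
    """,
    qv=query_vec,
)

for asset_id, title in cur:
    print(asset_id, title)
```

A vector index over the embedding column would normally back this ORDER BY; creating and maintaining that index at scale is the compute-intensive portion the article says NVIDIA GPUs and cuVS are meant to accelerate.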
