
Oracle & NVIDIA expand OCI partnership with 160 AI tools
The collaboration brings NVIDIA AI Enterprise, a cloud-native software platform, natively to the Oracle Cloud Infrastructure (OCI) Console. Oracle customers can now use this platform across OCI's distributed cloud, including public regions, Government Clouds, and sovereign cloud solutions.
Platform access and capabilities
By integrating NVIDIA AI Enterprise directly through the OCI Console rather than a marketplace, Oracle allows customers to utilise their existing Universal Credits, streamlining transactions and support. This approach is designed to speed up deployment and help customers meet security, regulatory, and compliance requirements in enterprise AI processes.
Customers can now access over 160 AI tools focused on training and inference, including NVIDIA NIM microservices. These services aim to simplify the deployment of generative AI models and support a broad set of application-building and data management needs across various deployment scenarios.
"Oracle has become the platform of choice for AI training and inferencing, and our work with NVIDIA boosts our ability to support customers running some of the world's most demanding AI workloads," said Karan Batta, Senior Vice President, Oracle Cloud Infrastructure. "Combining NVIDIA's full-stack AI computing platform with OCI's performance, security, and deployment flexibility enables us to deliver AI capabilities at scale to help advance AI efforts globally."
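NIM microservices typically expose an OpenAI-compatible chat-completions endpoint once deployed. As a minimal sketch of what querying one might look like (the endpoint URL and model name below are placeholders for illustration, not details from this announcement):

```python
import json
import urllib.request

# Placeholder endpoint for a NIM container deployed on an OCI GPU instance;
# the real URL and model name depend on the customer's own deployment.
NIM_ENDPOINT = "http://localhost:8000/v1/chat/completions"


def build_chat_request(model: str, prompt: str, max_tokens: int = 256) -> dict:
    """Build an OpenAI-style chat-completions payload, the request format
    NIM microservices commonly accept."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }


def query_nim(payload: dict, endpoint: str = NIM_ENDPOINT) -> dict:
    """POST the payload to the NIM endpoint and return the parsed JSON reply."""
    req = urllib.request.Request(
        endpoint,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))


if __name__ == "__main__":
    payload = build_chat_request(
        "meta/llama-3.1-8b-instruct", "Summarise OCI in one sentence."
    )
    print(json.dumps(payload, indent=2))  # inspect the request before sending
```

Because the request format mirrors the OpenAI API, applications written against that interface can often be pointed at a NIM deployment with little more than an endpoint change.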
The partnership also makes NVIDIA GB200 NVL72 systems available on the OCI Supercluster, supporting up to 131,072 NVIDIA Blackwell GPUs. The new architecture provides a liquid-cooled infrastructure that targets large-scale AI training and inference requirements. Governments and enterprises can use these systems to build so-called AI factories, applying platforms such as the GB200 NVL72 to agentic AI tasks that rely on advanced reasoning models and efficiency improvements.
Developer access to advanced resources
Oracle has become one of the first major cloud providers to integrate with NVIDIA DGX Cloud Lepton, which links developers to a global marketplace of GPU compute. This integration offers developers access to OCI's high-performance GPU clusters for a range of needs, including AI training, inference, digital twin implementations, and parallel HPC applications.
Ian Buck, Vice President of Hyperscale and HPC at NVIDIA, said: "Developers need the latest AI infrastructure and software to rapidly build and launch innovative solutions. With OCI and NVIDIA, they get the performance and tools to bring ideas to life, wherever their work happens."
With this integration, developers can also select compute resources in specific regions, helping them meet strategic and sovereign AI goals while satisfying both long-term and on-demand capacity requirements.
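The region-selection idea can be illustrated with a toy filter. The catalogue structure, region names, and selection rule below are invented for the example; they are not DGX Cloud Lepton's actual API:

```python
from dataclasses import dataclass


@dataclass
class GpuOffer:
    """Hypothetical record describing GPU capacity available in one region."""
    region: str       # OCI region identifier
    sovereign: bool   # whether the region is a sovereign/government cloud
    gpu_type: str
    on_demand: bool   # on-demand vs. long-term reserved capacity


# Invented sample catalogue for illustration only.
CATALOGUE = [
    GpuOffer("eu-frankfurt-1", False, "H100", True),
    GpuOffer("eu-madrid-1", True, "H100", False),
    GpuOffer("us-ashburn-1", False, "GB200", True),
]


def pick_offers(catalogue, require_sovereign: bool, want_on_demand: bool):
    """Filter offers by data-residency and capacity requirements."""
    return [
        o for o in catalogue
        if o.sovereign == require_sovereign and o.on_demand == want_on_demand
    ]


if __name__ == "__main__":
    # A workload with sovereignty requirements and reserved capacity:
    for offer in pick_offers(CATALOGUE, require_sovereign=True, want_on_demand=False):
        print(offer.region, offer.gpu_type)
```

The point of the sketch is simply that placement becomes a queryable attribute: a team can express "sovereign region, reserved capacity" as a constraint rather than a manual procurement decision.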
Customer projects using joint capabilities
Enterprises in Europe and internationally are making use of the enhanced partnership between Oracle and NVIDIA. For example, Almawave, based in Italy, utilises OCI AI infrastructure and NVIDIA Hopper GPUs to run generative AI model training and inference for its Velvet family, which supports Italian alongside other European languages and is being deployed within Almawave's AIWave platform.
"Our commitment is to accelerate innovation by building a high-performing, transparent, and fully integrated Italian foundational AI in a European context—and we are only just getting started," said Valeria Sandei, Chief Executive Officer, Almawave. "Oracle and NVIDIA are valued partners for us in this effort, given our common vision around AI and the powerful infrastructure capabilities they bring to the development and operation of Velvet."
Danish health technology company Cerebriu is using OCI along with NVIDIA Hopper GPUs to build an AI-driven tool for clinical brain MRI analysis. Cerebriu's deep learning models, trained on thousands of multi-modal MRI images, aim to reduce the time required to interpret scans, potentially benefiting the clinical diagnosis of time-sensitive neurological conditions.
"AI plays an increasingly critical role in how we design and differentiate our products," said Marko Bauer, Machine Learning Researcher, Cerebriu. "OCI and NVIDIA offer AI capabilities that are critical to helping us advance our product strategy, giving us the computing resources we need to discover and develop new AI use cases quickly, cost-effectively, and at scale. Finding the optimal way of training our models has been a key focus for us. While we've experimented with other cloud platforms for AI training, OCI and NVIDIA have provided us the best cloud infrastructure availability and price performance."
With the expanded Oracle-NVIDIA partnership, customers can now choose from a wide variety of AI tools and infrastructure options within OCI, supporting both research and production environments for AI solution development.
Related Articles


Techday NZ
3 days ago
HPE expands ProLiant servers & AI cloud with new NVIDIA GPUs
HPE has announced a series of updates to its NVIDIA AI Computing by HPE portfolio, emphasising expanded server capabilities and deeper integration with NVIDIA AI Enterprise solutions.
New server models
The HPE ProLiant Compute range will soon include servers featuring the NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs, available in a 2U form factor. Two main configurations will be available: the HPE ProLiant DL385 Gen11 server, supporting up to two of the new GPUs in a 2U chassis, and the previously announced HPE ProLiant Compute DL380a Gen12 server, capable of using up to eight GPUs in a 4U form factor. According to HPE, the latter configuration will be shipping in September.
These servers are designed for broad enterprise applications such as generative and agentic AI, robotics and industrial AI, visual computing (including autonomous vehicles and quality control monitoring), simulation, 3D modelling, and digital twins.
The HPE ProLiant Compute Gen12 servers also include hardware features aimed at enhancing security and operational efficiency, such as the HPE Integrated Lights Out (iLO) 7 Silicon Root of Trust and a secure enclave for tamper-resistant protection and quantum-resistant firmware signing. HPE estimates that its Compute Ops Management, a cloud-native tool for managing server lifecycles, can decrease IT hours for server management by up to 75 percent and reduce downtime by approximately 4.8 hours per server each year.
HPE Private Cloud AI
HPE's Private Cloud AI offering, co-developed with NVIDIA, is being updated to support the latest NVIDIA GPU technologies and AI models. The next version will offer compatibility with NVIDIA RTX PRO 6000 GPUs on Gen12 servers and aims to provide seamless scalability across different GPU generations. Features will include air-gapped management for security and support for enterprise multi-tenancy.
The new release of HPE Private Cloud AI will integrate recent NVIDIA AI models, including the NVIDIA Nemotron models focused on agentic AI, the Cosmos Reason vision language model (VLM) for physical AI and robotics, and the NVIDIA Blueprint for Video Search and Summarization (VSS 2.4), intended for building video analytics AI agents that can process large volumes of video data. Customers will have access to these developments through the HPE AI Essentials platform, enabling quick deployment of NVIDIA NIM microservices and other AI tools.
Through the continued collaboration, HPE Private Cloud AI is designed to deliver an integrated solution that leverages NVIDIA's portfolio in AI accelerated computing, networking, and software. This enables businesses to address increasing demand for AI inferencing and to accelerate the development and deployment of AI systems while maintaining high security and control over enterprise data.
Collaboration and customer impact
"HPE is committed to empowering enterprises with the tools they need to succeed in the age of AI," said Cheri Williams, senior vice president and general manager for private cloud and flex solutions at HPE. "Our collaboration with NVIDIA continues to push the boundaries of innovation, delivering solutions that unlock the value of generative, agentic and physical AI while addressing the unique demands of enterprise workloads. With the combination of HPE ProLiant servers and expanded capabilities in HPE Private Cloud AI, we're enabling organizations to embrace the future of AI with confidence and agility."
Justin Boitano, vice president of enterprise AI at NVIDIA, commented, "Enterprises need flexible, efficient infrastructure to keep pace with the demands of modern AI. With NVIDIA RTX PRO 6000 Blackwell GPUs in HPE's 2U ProLiant servers, enterprises can accelerate virtually every workload on a single, unified, enterprise-ready platform."
Availability
According to HPE, the HPE ProLiant DL385 Gen11 and HPE ProLiant Compute DL380a Gen12 servers featuring the NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs are now orderable and are set for worldwide distribution beginning 2 September 2025. Support within HPE Private Cloud AI for the latest NVIDIA Nemotron models, Cosmos Reason, and the NVIDIA Blueprint for VSS 2.4 is scheduled for release in the second half of 2025. The next generation of HPE Private Cloud AI with the updated GPU capabilities is also expected to be available during this period.


Techday NZ
3 days ago
Oracle & Google Cloud boost AI with Gemini model access
Oracle and Google Cloud have expanded their partnership to provide Oracle customers with direct access to Google's Gemini AI models through the Oracle Cloud Infrastructure Generative AI service. The collaboration gives Oracle customers the ability to leverage Gemini 2.5 and its upcoming model family for enterprise-grade applications, including advanced coding, workflow automation, and domain-specific solutions such as MedLM for healthcare.
Expanded AI offerings
Through the integration, enterprises will have the opportunity to use Gemini's multimodal capabilities, enabling applications that can handle text, code, and industry-specific tasks. Oracle plans further integrations with Google Cloud's Vertex AI, which will make the entire Gemini model suite - including video, image, speech, and music generation - accessible within Oracle Fusion Cloud Applications across departments such as finance, HR, supply chain, sales, service, and marketing. Oracle customers will also be able to deploy Gemini models using their existing Oracle Universal Credits, potentially simplifying adoption and controlling costs.
Use cases and industry impact
Gemini models are designed to provide accuracy and performance for enterprise use cases, partly due to their grounding in up-to-date Google Search data, large context windows, and data privacy features. The models can be used for knowledge retrieval, productivity tools, advanced software development, and sector-specific solutions. Specialised industry models like MedLM for healthcare are among the offerings expected for future integration. The presence of these models within existing Oracle platforms aims to streamline the adoption of AI across industries, supporting teams in tasks that range from automating business processes to building AI-powered agents.
Customer access and integration
With the expanded partnership, Oracle states customers will have more flexibility and choice over the models they use.
As future integrations are developed, customers will be able to select from a range of Gemini models via Vertex AI, directly within Oracle's cloud applications ecosystem.
"Today, leading enterprises are using Gemini to power AI agents across a range of use cases and industries," said Thomas Kurian, CEO, Google Cloud. "Now, Oracle customers can access our leading models from within their Oracle environments, making it even easier for them to begin deploying powerful AI agents that can support developers, streamline data integration tasks, and much more."
Google's Gemini models have been cited for their enterprise suitability due to features such as encryption, privacy controls, and reasoning abilities. Clay Magouyrk, President, Oracle Cloud Infrastructure, stated, "Oracle has been intentional in offering model choice curated for the enterprise, spanning open and proprietary models. The availability of Gemini on OCI Generative AI service highlights our focus on delivering powerful, secure, and cost-effective AI solutions that help customers drive innovation and achieve their business goals."
Performance and scalability
Oracle continues to position its infrastructure as a foundation for running intensive AI workloads. According to the companies, Oracle Cloud Infrastructure offers specialised, cost-effective GPU instances suitable for applications in generative AI, natural language processing, computer vision, and recommender systems. The collaboration is described as a means for customers to apply generative and agentic AI to business needs, with a focus on meeting enterprise requirements for security, adaptability, and performance. Through this partnership, both companies aim to facilitate the deployment of multimodal and AI agent technologies in a broad range of enterprise scenarios.


Techday NZ
3 days ago
Oracle & Google Cloud partner to deliver Gemini AI on OCI
Oracle and Google Cloud have reached a new agreement to provide Google's Gemini artificial intelligence models through Oracle Cloud Infrastructure for enterprise customers. This collaboration means developers and businesses in the UK and elsewhere can now access Gemini's text, image, video and audio capabilities directly through Oracle's cloud platform. Oracle customers will be able to use their existing Oracle Cloud credits for Google AI services and integrate advanced AI capabilities into their existing workflows without needing to move between different platforms.
The launch follows Oracle's previously announced partnership with xAI and extends Oracle's approach of providing a selection of AI models to customers. The new partnership also offers Google Cloud a greater presence in the enterprise market, where Oracle's cloud services are widely used for business-critical applications and data.
Gemini models via Oracle
Oracle has stated it will make the full range of Google's Gemini models available through its Generative AI service, starting with the Gemini 2.5 model. Integration with Google's Vertex AI platform will expand support to include advanced models for video, image, speech, music generation, and specialised industry use cases such as MedLM. The companies plan to integrate Gemini with Oracle Fusion Cloud Applications in the future, giving customers the option to incorporate these models into workflows across finance, HR, supply chain, sales, service, and marketing processes. To facilitate adoption, Oracle customers can access Google's Gemini models using their existing Oracle Universal Credits.
"Today, leading enterprises are using Gemini to power AI agents across a range of use cases and industries. Now, Oracle customers can access our leading models from within their Oracle environments, making it even easier for them to begin deploying powerful AI agents that can support developers, streamline data integration tasks, and much more," said Thomas Kurian, CEO, Google Cloud.
Google Cloud highlights Gemini's capacity for grounding outputs in up-to-date Google Search data for greater response accuracy, as well as support for large context windows, robust encryption, and data privacy. According to the company, these features support reasoning abilities suitable for complex enterprise needs.
Model choice and security focus
Clay Magouyrk, President, Oracle Cloud Infrastructure, said, "Oracle has been intentional in offering model choice curated for the enterprise, spanning open and proprietary models. The availability of Gemini on OCI Generative AI service highlights our focus on delivering powerful, secure, and cost-effective AI solutions that help customers drive innovation and achieve their business goals."
Oracle's approach brings AI technologies closer to customer data with a focus on enterprise security, adaptability, and scalability. The availability of Google Cloud's Gemini models is set to support customers across different industries seeking to apply generative and agentic AI solutions to business scenarios for immediate results. Thousands of organisations already use Oracle Cloud Infrastructure's AI tools to handle demanding workloads, including applications for generative AI, natural language processing, computer vision, and recommendation systems. High-performance GPU instances on Oracle's infrastructure support these advanced AI applications.
This partnership builds on Oracle's wider AI strategy and expands the options for Oracle customers seeking to deploy generative AI without moving their data or applications to other platforms.
It reinforces Oracle's positioning for enterprises that wish to access both proprietary and open AI models with a single provider. The collaboration also extends the reach of Google Cloud's AI services into the Oracle enterprise ecosystem.