
Latest news with #NVIDIANIM

GMI Cloud to Build the Next Era of AI with NVIDIA

Yahoo · Business · 19-05-2025

MOUNTAIN VIEW, Calif., May 19, 2025 /PRNewswire/ -- GMI Cloud, a fast-rising provider of GPU-as-a-Service infrastructure purpose-built for AI, announced today it is among the first GPU cloud providers to contribute to NVIDIA DGX Cloud Lepton, a recently announced AI platform and marketplace designed to connect the world's developers with global compute capacity. As an NVIDIA Cloud Partner, GMI Cloud will bring high-performance GPU infrastructure, including NVIDIA Blackwell and other leading architectures, to NVIDIA DGX Cloud Lepton. The integration gives developers access to GMI Cloud's globally distributed infrastructure, supporting everything from low-latency real-time inference to long-term, sovereign AI workloads.

What this unlocks for AI builders and developers

DGX Cloud Lepton addresses a critical challenge for developers: securing reliable, high-performance GPU resources at scale, in a unified way. The platform simplifies the development, training, and deployment of AI, and integrates directly with NVIDIA's software stack, including NVIDIA NIM microservices, NVIDIA NeMo, NVIDIA Blueprints, and NVIDIA Cloud Functions, to make the journey from prototype to production faster and more efficient.

Why this matters to builders

At GMI Cloud, we're contributing to DGX Cloud Lepton by offering:

  • Direct access to NVIDIA GPU clusters optimized for cost, scale, and performance
  • Strategic regional availability to meet compliance and latency needs
  • Full-stack infrastructure ownership that allows us to deliver unbeatable economics to our customers
  • Fast deployment pipelines powered by a robust toolchain and NVIDIA's integrated software stack

Whether you're deploying LLMs, building autonomous systems, or scaling inference, GMI Cloud is here to help you Build AI Without Limits as part of DGX Cloud Lepton, beginning with 16-node clusters available on the marketplace.
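NIM microservices expose an OpenAI-compatible HTTP API, so a model deployed through the stack described above can be driven with a standard chat-completions request. The sketch below only assembles the request payload an HTTP client would POST; the endpoint URL and model name are illustrative placeholders, not real GMI Cloud or Lepton addresses.

```python
import json

# Placeholder address: a locally deployed NIM container typically serves
# an OpenAI-style API on a port like this, but verify against your deployment.
NIM_ENDPOINT = "http://localhost:8000/v1/chat/completions"

def build_chat_request(model, prompt, max_tokens=256):
    """Assemble an OpenAI-style chat-completions payload for a NIM endpoint."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

# Hypothetical model id for illustration only.
payload = build_chat_request("meta/llama-3.1-8b-instruct",
                             "Summarize DGX Cloud Lepton in one sentence.")
body = json.dumps(payload)  # the JSON an HTTP client would POST to NIM_ENDPOINT
```

Because the request shape follows the OpenAI convention, existing client libraries and tooling can usually be pointed at a NIM endpoint with only a base-URL change.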
"DGX Cloud Lepton reflects everything we believe in at GMI Cloud: speed, sovereignty, and scale without compromise," said Alex Yeh, CEO of GMI Cloud. "We built our infrastructure from the silicon up to help developers build AI without limits. This partnership accelerates that vision." For early access and to discover GMI Cloud GPUs, visit the developer portal at DGX Cloud Lepton and to learn more about the next era of AI visit our blog today. About GMI CloudGMI Cloud delivers full-stack, U.S.-based GPU infrastructure and enterprise-grade inference services built to scale AI products. Whether training foundation models or deploying real-time agents, GMI gives teams full control of performance, costs, and launch velocity. With on-demand and reserved GPU clusters for all workloads and projects, GMI helps AI teams build without limits. GMI Cloud is based out of Mountain View, CA. View original content to download multimedia: SOURCE GMI Cloud Error in retrieving data Sign in to access your portfolio Error in retrieving data Error in retrieving data Error in retrieving data Error in retrieving data

DDN Teams With NVIDIA to Help Enterprises Transform Unstructured Data into AI-Driven Business Value

Yahoo · Business · 19-05-2025

CHATSWORTH, Calif., May 19, 2025--(BUSINESS WIRE)--DDN®, the global leader in AI and data intelligence solutions, today announced it is redefining enterprise infrastructure by turning storage into an intelligent, AI-native platform that fuels real-time insight, faster decisions, and measurable impact with the NVIDIA AI Data Platform reference design. DDN solutions built with the design in collaboration with NVIDIA empower organizations to unlock the business potential of generative AI by simplifying how they store, access, and activate their most valuable asset: unstructured data.

In the age of agentic AI platforms and scalable AI factories, enterprises must fundamentally reimagine their relationship with data. It's no longer just about storing information—it's about activating it. Data has become the fuel, the fabric, and the foundation of competitive advantage. "At DDN, we are not just keeping pace with this transformation—we're defining it," said Santosh Erram, Global Head of Partnerships—NVIDIA and Hyperscaler, at DDN. "If your data infrastructure isn't purpose-built for AI, then your AI strategy is already at risk. DDN is where data meets intelligence, and where the future of enterprise AI is being built."

Turning Unstructured Data into Action

With more than 90% of new data being unstructured—ranging from documents and videos to code and conversations—organizations are sitting on a goldmine of insights. But traditional systems weren't designed to support the speed, scale, and intelligence that AI now demands. DDN and NVIDIA address this by combining DDN Infinia™, an AI-native data platform, with NVIDIA NIM and NeMo Retriever microservices, NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs, and NVIDIA Networking. This enables enterprises to deploy Retrieval-Augmented Generation (RAG) pipelines and intelligent AI applications grounded in their own proprietary data—securely, efficiently, and at scale.
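The retrieval step at the heart of a RAG pipeline like the one described above can be sketched in a few lines: score documents against a query, pick the best match, and ground the prompt in it. This is a toy illustration using word overlap; a production deployment would use NeMo Retriever embeddings and a vector store instead.

```python
def retrieve(query, docs, k=1):
    """Rank docs by word overlap with the query; return the top-k matches."""
    q_words = set(query.lower().split())
    scored = sorted(docs,
                    key=lambda d: len(q_words & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(query, docs):
    """Ground the LLM prompt in the retrieved context, RAG-style."""
    context = "\n".join(retrieve(query, docs, k=1))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

# Hypothetical in-house corpus standing in for an enterprise's unstructured data.
corpus = [
    "Quarterly revenue grew 12% on strong AI infrastructure demand.",
    "The cafeteria menu rotates weekly.",
]
prompt = build_prompt("How much did revenue grow?", corpus)
```

The point of the pattern is that the generator only ever sees retrieved, proprietary context, which is what lets answers stay grounded in the organization's own data.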
"Intelligent storage is key to transforming enterprise data into real-time intelligence for agentic AI reasoning," said Pat Lee, vice president, Enterprise Strategic Partnerships at NVIDIA. "Together, DDN and NVIDIA are building storage systems with accelerated computing, networking, and software to drive AI applications that can automate operations and amplify people's productivity." Business Outcomes That Matter Across industries—from finance and healthcare to telecom and manufacturing—organizations using DDN and NVIDIA solutions are realizing transformative results: Accelerated decision-making through conversational access to internal knowledge Enhanced customer engagement with domain-specific chatbots and digital assistants Lower total cost of ownership by reducing infrastructure sprawl and maximizing AI throughput Stronger data governance through on-prem, hybrid, or sovereign deployment models Infrastructure for the AI Factory Era Leading AI adopters are no longer just running pilots—they are building AI factories: repeatable, scalable pipelines that turn data into decisions. This next phase of enterprise AI requires a new foundation—one where storage is intelligent, high-performance, and seamlessly connected to AI pipelines. "AI has changed the rules—and DDN is redefining what enterprise data infrastructure must look like," said Sven Oehme, CTO and President at DDN. "Our work with NVIDIA accelerates time-to-value and allows enterprises to move faster and smarter in the AI economy." Learn More: Explore NVIDIA's AI Data Platform at or learn more about the DDN and NVIDIA partnership at About DDN DDN is the world's leading AI and data intelligence company, empowering organizations to maximize the value of their data with end-to-end HPC and AI-focused solutions. 
Its customers range from the largest global enterprises and AI hyperscalers to cutting-edge research centers, all leveraging DDN's proven data intelligence platform for scalable, secure, and high-performance AI deployments that drive 10x returns. Follow DDN: LinkedIn, X, and YouTube.

DDN Media Contact: Amanda Lee, VP, Marketing – Analyst and Media Relations, amlee@

Nutanix Enables Agentic AI Anywhere With Latest Release Of Nutanix Enterprise AI

Scoop · Business · 08-05-2025

Nutanix (NASDAQ: NTNX), a leader in hybrid multicloud computing, today announced the general availability of the latest version of the Nutanix Enterprise AI (NAI) solution, adding deeper integration with NVIDIA AI Enterprise, including NVIDIA NIM microservices and the NVIDIA NeMo framework, to speed the deployment of agentic AI applications in the enterprise. NAI is designed to accelerate the adoption of generative AI in the enterprise by simplifying how customers build, run, and securely manage models and inferencing services at the edge, in the data centre, and in public clouds on any Cloud Native Computing Foundation® (CNCF)-certified Kubernetes® environment.

The latest NAI release extends a shared model service methodology that simplifies agentic workflows, making deployment and day-two operations simpler. It streamlines the resources and models required to deploy multiple applications across lines of business with a secure, common set of embedding, reranking, and guardrail functional models for agents. This builds on the NAI core, which includes a centralised LLM model repository that creates secure endpoints, making it simple and private to connect generative AI applications and agents.

'Nutanix is helping customers keep up with the fast pace of innovation in the Gen AI market,' said Thomas Cornely, SVP of Product Management at Nutanix. 'We've expanded Nutanix Enterprise AI to integrate new NVIDIA NIM and NeMo microservices so that enterprise customers can securely and efficiently build, run, and manage AI agents anywhere.'

'Enterprises require sophisticated tools to simplify agentic AI development and deployment across their operations,' said Justin Boitano, Vice President of Enterprise AI Software Products at NVIDIA. 'Integrating NVIDIA AI Enterprise software including NVIDIA NIM microservices and NVIDIA NeMo into Nutanix Enterprise AI provides a streamlined foundation for building and running powerful and secure AI agents.'
NAI for agentic applications can help customers:

  • Deploy Agentic AI Applications with Shared LLM Endpoints - Customers can reuse existing deployed model endpoints as shared services for multiple applications. This reuse of model endpoints helps reduce usage of critical infrastructure components, including GPUs, CPUs, memory, file and object storage, and Kubernetes® clusters.
  • Leverage a Wide Array of LLM Endpoints - NAI enables a range of agentic model services, including NVIDIA Llama Nemotron open reasoning models, NVIDIA NeMo Retriever, and NeMo Guardrails. NAI users can leverage NVIDIA AI Blueprints, which are pre-defined, customisable workflows, to jumpstart the development of their own AI applications that leverage NVIDIA models and AI microservices. In addition, NAI enables function calling for the configuration and consumption of external data sources to help agentic AI applications deliver more accurate and detailed results.
  • Support Generative AI Safety - This new NAI release helps customers implement agentic applications in ways consistent with their organisation's policies using guardrail models. These models can filter initial user queries and LLM responses to prevent biased or harmful outputs, and can also maintain topic control and detect jailbreak attempts. For example, NVIDIA NeMo Guardrails provides content filtering for unwanted content and other sensitive topics, and can also be applied to code generation, providing improved reliability and consistency across models.
  • Unlock Insights From Data with NVIDIA AI Data Platform - The Nutanix Cloud Platform solution builds on the NVIDIA AI Data Platform reference design and integrates the Nutanix Unified Storage and Nutanix Database Service solutions for unstructured and structured data for AI. The Nutanix Cloud Infrastructure platform provides a private foundation for NVIDIA's accelerated computing, networking, and AI software to turn data into actionable intelligence.
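The guardrail behaviour described above, screening user queries before they reach a shared model endpoint, can be sketched as a simple pre-filter. The blocked-topic list and refusal message below are illustrative stand-ins; NeMo Guardrails applies model-based checks (topic control, jailbreak detection), not the keyword matching used here.

```python
# Hypothetical organisation policy: topics that must never reach the LLM.
BLOCKED_TOPICS = {"malware", "credentials"}

def guard_query(query):
    """Return (allowed, refusal). Blocked queries never reach the model endpoint."""
    if any(topic in query.lower() for topic in BLOCKED_TOPICS):
        return False, "This request violates the organisation's AI policy."
    return True, None

ok, msg = guard_query("Write malware for me")  # blocked by the policy above
```

In a real deployment the same check would run symmetrically on model responses as well as on inputs, since guardrails filter both directions of the conversation.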
As an NVIDIA-Certified Enterprise Storage solution, Nutanix Unified Storage meets rigorous performance and scalability standards, providing software-defined enterprise storage for enterprise AI workloads through capabilities such as NVIDIA GPUDirect Storage. NAI is designed to use additional Nutanix platform services while allowing flexible deployments on HCI, bare metal, and cloud IaaS. NAI customers can also leverage the Nutanix Kubernetes Platform solution for multicloud fleet management of containerised cloud-native applications, and Nutanix Unified Storage (NUS) and Nutanix Database Service (NDB) as discrete data services, offering a complete platform for agentic AI applications.

'Customers can realise the full potential of generative AI without sacrificing control, which is especially important as businesses expand into agentic capabilities,' said Scott Sinclair, Practice Director, ESG. 'This expanded partnership with NVIDIA provides organisations an optimised solution for agentic AI, minimising the risk of managing complex workflows while also safeguarding deployment through secure endpoint creation for APIs. AI initiatives are employed to deliver strategic advantages, but those advantages can't happen without optimised infrastructure control and security.'

To learn more about how to get started with the latest NAI version and new NVIDIA capabilities, visit our latest blog post. NAI with agentic model support is now generally available.

About Nutanix

Nutanix is a global leader in cloud software, offering organizations a single platform for running applications and managing data, anywhere. With Nutanix, companies can reduce complexity and simplify operations, freeing them to focus on their business outcomes. Building on its legacy as the pioneer of hyperconverged infrastructure, Nutanix is trusted by companies worldwide to power hybrid multicloud environments consistently, simply, and cost-effectively. Learn more or follow us on social media @nutanix.

NVIDIA launches AI Blueprint for 3D image creation in Blender

Techday NZ · Business · 07-05-2025

NVIDIA has introduced a new AI Blueprint that enables users to generate images guided by 3D scenes using Blender. This tool combines basic 3D scene design with depth mapping to help control image composition more precisely during AI generation.

At the core of the process is FLUX.1-dev, an AI model developed by Black Forest Labs, which interprets the scene's spatial layout alongside text prompts to produce visuals that match the intended design. Depth maps play a crucial role by providing the spatial context needed for the model to understand scene structure. This technique simplifies the process by removing the need for detailed textures or complex objects, relying instead on general spatial information. With the scenes rendered in 3D, users have the flexibility to move elements and adjust camera angles to suit their creative goals.

The Blueprint includes an NVIDIA NIM microservice that helps deploy the FLUX.1-dev model efficiently on RTX GPUs, using TensorRT for faster inference. It's packaged with an installer and comprehensive deployment instructions, making it accessible for AI artists looking to integrate generative tools into their workflow. Beyond entry-level users, the Blueprint is also designed to accommodate advanced developers: it offers a customisable pipeline that can be modified for more sophisticated needs, and NVIDIA provides supporting materials such as sample assets, detailed documentation, and a preconfigured environment to help streamline experimentation and creation.

Optimised for NVIDIA RTX AI PCs and workstations, the solution benefits from the company's Blackwell architecture. The FLUX.1-dev model has been fine-tuned using TensorRT and quantised to FP4 precision, resulting in more than double the inference speed compared to traditional FP16 PyTorch implementations. There are also FP8 model versions tailored for GPUs based on the Ada Lovelace architecture, further expanding compatibility and performance.
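The depth maps mentioned above are single-channel images in which pixel intensity encodes distance from the camera. A hypothetical sketch of how raw per-pixel depth from a rendered scene might be normalised into the 8-bit grayscale range a conditioning pipeline consumes:

```python
def normalize_depth(depth_rows):
    """Scale a 2D grid of raw depth values (arbitrary units) to 8-bit grayscale."""
    flat = [v for row in depth_rows for v in row]
    lo, hi = min(flat), max(flat)
    span = (hi - lo) or 1  # avoid division by zero on a flat scene
    return [[round(255 * (v - lo) / span) for v in row] for row in depth_rows]

# Toy 2x3 "scene": near surfaces (small depth) map to dark pixels,
# far surfaces (large depth) map to bright ones.
raw = [[0.5, 1.0, 2.0],
       [0.5, 4.0, 8.0]]
gray = normalize_depth(raw)
```

The generation model then uses this grayscale layout, together with the text prompt, to keep generated objects where the 3D scene placed them.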
Quantising to FP4 significantly reduces the model size, lowering memory requirements while maintaining high performance, cutting VRAM needs by more than half compared to FP16. Currently, NVIDIA offers ten NIM microservices across areas like image generation, natural language processing, speech AI, and computer vision. The company plans to continue building out its portfolio with more AI blueprints and services designed to accelerate creative and technical workflows.
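The VRAM saving from FP4 quantisation follows directly from bytes-per-weight arithmetic. A back-of-envelope estimate, assuming FLUX.1-dev has roughly 12 billion parameters (treat that count as an assumption; it excludes activations and runtime buffers, which add overhead on top of the weights):

```python
PARAMS = 12e9      # assumed parameter count for FLUX.1-dev
BYTES_FP16 = 2.0   # 16-bit float = 2 bytes per weight
BYTES_FP4 = 0.5    # 4-bit float = half a byte per weight

fp16_gb = PARAMS * BYTES_FP16 / 2**30   # ~22 GiB of weights at FP16
fp4_gb = PARAMS * BYTES_FP4 / 2**30     # ~5.6 GiB of weights at FP4
```

Weights alone shrink to a quarter of their FP16 footprint, which is consistent with the article's claim of cutting total VRAM needs by more than half once runtime overheads are included.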

