Latest news with #DGXCloudLepton
Yahoo
19-05-2025
- Business
- Yahoo
Nvidia (NVDA) Opens NVLink Fusion to Outside Chips in AI Push
Nvidia (NVDA, Financials) is loosening its grip. At Computex 2025, the chipmaker launched NVLink Fusion, a new effort to let outside CPUs and ASICs work with its GPUs, part of a broader bid to remain indispensable in AI infrastructure even when competitors build the rest.

NVLink used to be Nvidia-only. Not anymore. Fusion cracks open the ecosystem, making room for third-party processors to plug into Nvidia's platform. Early partners include MediaTek, Marvell Technology (MRVL, Financials), Synopsys (SNPS, Financials), Cadence Design Systems (CDNS, Financials), and Alchip. Customers like Qualcomm (QCOM, Financials) now have more room to mix and match.

That flexibility matters. Nvidia already dominates general-purpose AI chips, but rivals like Amazon and Google are pouring money into custom silicon. Analysts say Fusion helps Nvidia stay in the room even if it doesn't own all the furniture.

There's a tradeoff. Fusion could reduce demand for Nvidia's own CPUs, but it makes its GPUs harder to replace in hybrid systems. Broadcom (AVGO, Financials), Advanced Micro Devices (AMD, Financials), and Intel (INTC, Financials) are not part of this first wave.

Nvidia also introduced DGX Cloud Lepton, a kind of GPU marketplace to help AI developers find cloud horsepower. And it teased the GB300, its next Grace Blackwell system, coming in Q3 2025. The company is also going bigger in Taiwan, opening a new office and working with Foxconn on a local AI supercomputer.

If Fusion catches on, Nvidia could tighten its hold on the AI data center, one link at a time. This article first appeared on GuruFocus.
Yahoo
19-05-2025
- Business
- Yahoo
GMI Cloud to Build the Next Era of AI with NVIDIA
MOUNTAIN VIEW, Calif., May 19, 2025 /PRNewswire/ -- GMI Cloud, a fast-rising provider of GPU-as-a-Service infrastructure purpose-built for AI, announced today it is among the first GPU cloud providers to contribute to NVIDIA DGX Cloud Lepton, a recently announced AI platform and marketplace designed to connect the world's developers with global compute capacity. As an NVIDIA Cloud Partner, GMI Cloud will bring high-performance GPU infrastructure, including NVIDIA Blackwell and other leading architectures, to NVIDIA DGX Cloud Lepton. This integration gives developers access to GMI Cloud's globally distributed infrastructure, supporting everything from low-latency real-time inference to long-term, sovereign AI workloads.

What this unlocks for AI builders and developers
DGX Cloud Lepton addresses a critical challenge for developers: securing reliable, high-performance GPU resources at scale. It does so with a unified platform that simplifies the development, training, and deployment of AI. The platform integrates directly with NVIDIA's software stack, including NVIDIA NIM microservices, NVIDIA NeMo, NVIDIA Blueprints, and NVIDIA Cloud Functions, to make the journey from prototype to production faster and more efficient.

Why this matters to builders
At GMI Cloud, we're contributing to DGX Cloud Lepton by offering:
- Direct access to NVIDIA GPU clusters optimized for cost, scale, and performance
- Strategic regional availability to meet compliance and latency needs
- Full-stack infrastructure ownership that allows us to deliver unbeatable economics to our customers
- Fast deployment pipelines powered by a robust toolchain and NVIDIA's integrated software stack

Whether you're deploying LLMs, building autonomous systems, or scaling inference, GMI Cloud is here to help you Build AI Without Limits as part of DGX Cloud Lepton, beginning with 16-node clusters available on the marketplace.
"DGX Cloud Lepton reflects everything we believe in at GMI Cloud: speed, sovereignty, and scale without compromise," said Alex Yeh, CEO of GMI Cloud. "We built our infrastructure from the silicon up to help developers build AI without limits. This partnership accelerates that vision."

For early access and to discover GMI Cloud GPUs, visit the developer portal at DGX Cloud Lepton, and to learn more about the next era of AI, visit our blog today.

About GMI Cloud
GMI Cloud delivers full-stack, U.S.-based GPU infrastructure and enterprise-grade inference services built to scale AI products. Whether training foundation models or deploying real-time agents, GMI gives teams full control of performance, costs, and launch velocity. With on-demand and reserved GPU clusters for all workloads and projects, GMI helps AI teams build without limits. GMI Cloud is based out of Mountain View, CA.

SOURCE GMI Cloud


Mint
19-05-2025
- Business
- Mint
Nvidia pushes further into cloud with GPU marketplace
Nvidia is a relative newcomer to the cloud-computing game, but it's quickly gaining momentum. The semiconductor giant on Monday announced a service that makes its AI chips available on a variety of cloud platforms, widening access beyond the major cloud providers.

The service, called DGX Cloud Lepton, is designed to link artificial intelligence developers with Nvidia's network of cloud providers, which provide access to its graphics processing units, or GPUs. Some of Nvidia's cloud provider partners include CoreWeave, Lambda and Crusoe. "Nvidia DGX Cloud Lepton connects our network of global GPU cloud providers with AI developers," said Jensen Huang, chief executive of Nvidia, in a statement. The news was announced at the Computex conference in Taiwan. Leading cloud service providers are expected to also participate, Nvidia said.

The move makes its chips more widely accessible to developers of all kinds, not just those who have relationships with those tech giants, analysts say. "We saw that there was a lot of friction in the system for AI developers, whether they're researchers or in an enterprise, to find and access computing resources," said Alexis Bjorlin, Nvidia's vice president of the DGX Cloud unit. DGX Cloud Lepton is a one-stop AI platform with a marketplace of GPU cloud vendors that developers can pick from to train and use their AI models, Nvidia said.

Since the AI boom kicked off in late 2022, Nvidia's GPUs have been a hot commodity. Cloud providers have been racing to gobble up chips to support both their customers and their own internal AI efforts. But at any given time, cloud providers, including smaller players like CoreWeave, might have GPUs that aren't being used. That's where Lepton comes in, Bjorlin said, because it's a way for those providers to tell developers they have excess computing for AI.

"This is Nvidia's way to kind of be an aggregator of GPUs across clouds," said Ben Bajarin, chief executive and principal analyst of market-research firm Creative Strategies. Nvidia will be reaching developers directly, rather than going through its cloud-provider partners. That kind of direct outreach furthers Nvidia's aim of building its business with enterprises, and not just AI labs, said International Data Corp. analyst Mario Morales.

The other benefit of working directly with the chip giant is that developers can choose which AI cloud provider to work with, or choose to work with multiple cloud providers, Nvidia's Bjorlin said. "It is up to the developer to choose," Bjorlin added. "Nvidia is not intersecting in that pathway." Write to Belle Lin at
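The aggregator idea described above reduces to a simple matching problem: providers advertise spare GPU capacity by region, and developers query for providers that satisfy their region and capacity requirements. Below is a minimal illustrative sketch of that concept in Python. All names here (`Listing`, `match`, the provider names) are hypothetical, invented for illustration; this is not the DGX Cloud Lepton API.

```python
from dataclasses import dataclass

@dataclass
class Listing:
    provider: str   # a cloud partner advertising spare capacity (hypothetical name)
    region: str     # where the GPUs physically live, relevant for data sovereignty
    gpus_free: int  # currently unused GPUs the provider wants to offer

def match(listings, region, gpus_needed):
    """Return providers in `region` with enough spare GPUs, largest pool first."""
    candidates = [l for l in listings
                  if l.region == region and l.gpus_free >= gpus_needed]
    return sorted(candidates, key=lambda l: l.gpus_free, reverse=True)

listings = [
    Listing("provider-a", "us-west", 128),
    Listing("provider-b", "us-west", 16),
    Listing("provider-c", "eu-central", 64),
]

# A developer with regional constraints asks for 32 GPUs in us-west:
# only provider-a has enough spare capacity there.
print([l.provider for l in match(listings, "us-west", 32)])
```

The point of the sketch is the marketplace shape, not the implementation: providers surface otherwise idle inventory, and the developer (not the aggregator) picks among the matches, which mirrors Bjorlin's "it is up to the developer to choose" framing.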


Techday NZ
19-05-2025
- Business
- Techday NZ
NVIDIA launches DGX Cloud Lepton to link global GPU networks
NVIDIA has unveiled DGX Cloud Lepton, a platform aimed at linking developers with a worldwide network of cloud GPU providers through an AI compute marketplace.

The DGX Cloud Lepton platform brings together GPU resources from companies such as CoreWeave, Crusoe, Firmus, Foxconn, GMI Cloud, Lambda, Nebius, Nscale, SoftBank Corp and Yotta Data Services. These partners are making tens of thousands of GPUs available through the platform, including those based on the NVIDIA Blackwell architecture, in order to support increasing demand for generative and physical AI applications.

Developers using the platform are able to access GPU computing power in specific geographic regions for both on-demand and long-term requirements. This is aimed at supporting operational needs, such as strategic and sovereign AI initiatives which require compliance with regional data regulations. NVIDIA has also signalled that additional cloud service providers and GPU marketplaces are expected to join the DGX Cloud Lepton marketplace in the future.

Jensen Huang, Founder and Chief Executive Officer of NVIDIA, said, "NVIDIA DGX Cloud Lepton connects our network of global GPU cloud providers with AI developers. Together with our NCPs, we're building a planetary-scale AI factory."

The DGX Cloud Lepton platform is designed to tackle a consistent challenge in the AI sector: reliable access to sufficient, high-performance GPU resources. To address this, it combines access to cloud AI services and GPU capacity across the NVIDIA ecosystem. The system integrates with existing NVIDIA software, including NIM and NeMo microservices, NVIDIA Blueprints, and NVIDIA Cloud Functions. This aims to accelerate and streamline the development and deployment lifecycle for AI applications by providing unified tooling and interfaces.

For cloud providers, the DGX Cloud Lepton platform includes management software that supplies real-time GPU health diagnostics and automation for root-cause analysis. This is intended to reduce manual intervention and lower system downtime.

The platform is introduced with multiple touted benefits. In terms of productivity and flexibility, it delivers a single experience for development, training, and inference. Developers can buy GPU capacity straight from participating cloud providers or use their own clusters, offering greater autonomy over deployment. The platform also aims to allow frictionless deployment of AI applications across multi-cloud and hybrid settings, making it easier to handle inference, testing, and training on varying workloads.

Another highlight is the agility provided to users. The ability to quickly access GPU resources in select regions is designed to help developers meet requirements for data sovereignty and support low-latency workloads. According to NVIDIA, the platform offers enterprise-level reliability, performance, and security, helping partners deliver a uniform user experience.

NVIDIA also introduced Exemplar Clouds, a programme which assists cloud partners in improving security, usability, performance and resiliency. Exemplar Clouds utilise NVIDIA's reference hardware, software, operational tools, and benchmarking suite, DGX Cloud Benchmarking, to assist partners in optimising workload performance and evaluating cost-effectiveness. Yotta Data Services has been named as the first NVIDIA Cloud Partner in the Asia-Pacific region to participate in the Exemplar Cloud initiative.

Developers can register for early access to the DGX Cloud Lepton platform. NVIDIA stated that further details about the programme and related technology advances can be explored at its events in Taipei.