Latest news with #HeatExchanger


Time of India
7 days ago
- Business
- Time of India
Amazon has found its 'own way' to cool down Nvidia's AI graphics cards
Amazon's cloud division has reportedly developed its own hardware to cool next-generation Nvidia graphics cards for artificial intelligence (AI) workloads. The internal solution addresses the significant energy consumption and heat generation of Nvidia's GPUs, which are central to AI workloads. Dave Brown, Vice President of Compute and Machine Learning Services at Amazon Web Services (AWS), said in a YouTube video that commercially available cooling equipment was not suitable, and that building data centres with widespread liquid cooling would have taken too much time. This led Amazon to develop its own methods for managing the heat from these power-intensive Nvidia GPUs.

What the AWS VP said about the cooling hardware developed for Nvidia GPUs

Discussing the cooling equipment available for AI GPUs, Brown said: 'They would take up too much data centre floor space or increase water usage substantially. And while some of these solutions could work for lower volumes at other providers, there simply wouldn't be enough liquid-cooling capacity to support our scale.'

So, instead of relying on conventional solutions, Amazon engineers developed the In-Row Heat Exchanger (IRHX), a cooling system that can be integrated into both existing and future data centres. Traditional air-cooling methods had been sufficient for earlier Nvidia chip generations.

In a blog post, Brown also confirmed that AWS customers can now access the updated infrastructure through new P6e computing instances. These offerings support Nvidia's high-density computing architecture, particularly the GB200 NVL72, which consolidates 72 Nvidia Blackwell GPUs into a single rack for training and deploying large AI models. Similar Nvidia GB200 NVL72-based clusters were previously available via Microsoft and CoreWeave.
AWS, the leading global cloud infrastructure provider, continues to enhance its capabilities. Amazon has a history of developing its own infrastructure hardware, including custom chips for general computing and AI, along with in-house-designed storage servers and networking equipment. This approach reduces reliance on external vendors and can improve profitability: AWS posted its highest operating margin since at least 2014 during the first quarter, contributing significantly to Amazon's overall net income.

Microsoft, the second-largest cloud provider, has also moved into custom hardware. In 2023, it introduced a cooling system called Sidekicks, tailored for its Maia AI chips.

Yahoo
10-07-2025
- Business
- Yahoo
Vertiv stock falls after AWS unveils custom cooling technology
Vertiv Holdings (NYSE:VRT) stock fell 11% Thursday, a day after Amazon Web Services (NASDAQ:AMZN) introduced its own custom cooling hardware designed specifically for Nvidia's (NASDAQ:NVDA) high-performance AI graphics processing units.

AWS revealed its new In-Row Heat Exchanger (IRHX) cooling system on Wednesday, developed as an alternative to the industry-standard liquid-cooling solutions that companies like Vertiv provide. The custom solution allows AWS to accommodate Nvidia's heat-intensive GPU racks without major data center renovations.

The IRHX was designed to address the substantial thermal demands of Nvidia's latest Blackwell GPUs, which consume significant energy and generate considerable heat when running AI workloads. AWS Vice President Dave Brown explained that traditional cooling methods "would take up too much data center floor space or increase water usage substantially."

Bloomberg Intelligence analyst Mustafa Okur noted the potential impact on Vertiv: "Amazon Web Services rolling out its own server liquid-cooling system could weigh on Vertiv's future growth prospects. Around 10% of overall sales come from liquid cooling, we calculate, and AWS may be one of the largest customers."

AWS developed the cooling technology in partnership with Nvidia, taking just 11 months from design to production. The system combines liquid- and air-based components, circulating coolant to GPU chips through cold plates and removing the heat via fan-coil arrays.

The cooling innovation coincides with AWS launching new computing instances that give customers access to Nvidia's most powerful AI server configurations, supported by AWS's Nitro infrastructure platform for networking and system monitoring.
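The cold-plate-and-coolant design described above lends itself to a quick back-of-envelope check of why liquid cooling is needed at this density. The sketch below estimates the water flow a liquid-cooled rack would require using the standard heat-balance relation Q = ṁ·c_p·ΔT; the 120 kW rack power and 10 °C coolant temperature rise are illustrative assumptions, not figures reported in the article.

```python
def coolant_flow_rate(heat_load_w, delta_t_k=10.0, cp_j_per_kg_k=4186.0):
    """Mass flow of water (kg/s) needed to carry away heat_load_w
    with a temperature rise of delta_t_k across the cold plates.
    Uses Q = m_dot * c_p * delta_T, solved for m_dot."""
    return heat_load_w / (cp_j_per_kg_k * delta_t_k)

# Assumed nominal power for a dense 72-GPU rack -- a hypothetical figure
# for illustration, not a number from the article.
rack_power_w = 120_000

flow_kg_s = coolant_flow_rate(rack_power_w)
print(f"{flow_kg_s:.2f} kg/s")  # ~2.87 kg/s of water, roughly 172 L/min per rack
```

At these flow rates, air alone (with a heat capacity three orders of magnitude lower per unit volume) cannot keep up, which is consistent with Brown's point that conventional air cooling sufficed only for earlier, less power-dense GPU generations.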


CNBC
09-07-2025
- Business
- CNBC
Amazon Web Services is building equipment to cool Nvidia GPUs as AI boom accelerates
Amazon said Wednesday that its cloud division has developed hardware to cool down next-generation Nvidia graphics processing units that are used for artificial intelligence workloads.

Nvidia's GPUs, which have powered the generative AI boom, require massive amounts of energy. That means companies using the processors need additional equipment to cool them down.

Amazon considered erecting data centers that could accommodate widespread liquid cooling to make the most of these power-hungry Nvidia GPUs. But that process would have taken too long, and commercially available equipment wouldn't have worked, Dave Brown, vice president of compute and machine learning services at Amazon Web Services, said in a video posted to YouTube.

"They would take up too much data center floor space or increase water usage substantially," Brown said. "And while some of these solutions could work for lower volumes at other providers, there simply wouldn't be enough liquid-cooling capacity to support our scale."

Instead, Amazon engineers conceived of the In-Row Heat Exchanger, or IRHX, which can be plugged into existing and new data centers. More traditional air cooling was sufficient for previous generations of Nvidia chips.

Customers can now access the AWS service as computing instances that go by the name P6e, Brown wrote in a blog post. The new systems accompany Nvidia's design for dense computing power: Nvidia's GB200 NVL72 packs a single rack with 72 Nvidia Blackwell GPUs that are wired together to train and run large AI models. Computing clusters based on Nvidia's GB200 NVL72 were previously available through Microsoft or CoreWeave.

AWS is the world's largest supplier of cloud infrastructure. Amazon has rolled out its own infrastructure hardware in the past. The company has custom chips for general-purpose computing and for AI, and has designed its own storage servers and networking routers.
In running homegrown hardware, Amazon depends less on third-party suppliers, which can benefit the company's bottom line. In the first quarter, AWS delivered its widest operating margin since at least 2014, and the unit is responsible for most of Amazon's net income. Microsoft, the second-largest cloud provider, has followed Amazon's lead into custom hardware: in 2023, it designed its own systems, called Sidekicks, to cool the Maia AI chips it developed.