
Modular Cooling Leap Meets AI's Soaring Heat Challenge
LiquidStack has introduced the GigaModular Coolant Distribution Unit, its most advanced liquid‑cooling solution yet, engineered to deliver up to 10 megawatts of scalable, direct‑to‑chip cooling. The debut of this system at the Datacloud Global Congress in Cannes, France, signals a pivotal advance in thermal infrastructure necessary for high‑density AI and cloud‑computing workloads.
As data centre rack densities climb past 120 kW and approach projections of 600 kW by 2027, conventional air‑cooled systems are nearing their operational ceiling. GigaModular's design offers operators a 'pay‑as‑you‑grow' modular platform that begins at 2.5 MW and expands seamlessly to 10 MW. Its flexibility allows deployment in N, N+1 or N+2 redundancy modes, ensuring robustness and future‑proof scaling.
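The arithmetic behind "pay-as-you-grow" sizing can be illustrated with a small sketch. The module size, capacity ceiling, and function below are illustrative assumptions based on the figures quoted above, not LiquidStack's actual configurator: given a target cooling load and a redundancy mode, it computes how many 2.5 MW stages to deploy.

```python
import math

# Hypothetical sizing helper -- module size and redundancy logic are
# illustrative assumptions, not LiquidStack specifications.
MODULE_MW = 2.5   # each stage adds 2.5 MW of cooling capacity
MAX_MW = 10.0     # platform ceiling per the announcement

def modules_required(load_mw: float, redundancy: str = "N") -> int:
    """Return the number of 2.5 MW modules for a load and redundancy mode.

    'N'   -> just enough modules to carry the load
    'N+1' -> one spare module beyond N
    'N+2' -> two spare modules beyond N
    """
    if not 0 < load_mw <= MAX_MW:
        raise ValueError(f"load must be in (0, {MAX_MW}] MW")
    base = math.ceil(load_mw / MODULE_MW)            # the 'N' in N+x
    spares = {"N": 0, "N+1": 1, "N+2": 2}[redundancy]
    return base + spares

print(modules_required(6.0, "N"))    # 3 modules (7.5 MW installed)
print(modules_required(6.0, "N+1"))  # 4 modules
```

The point of the model is that capacity is added in discrete increments as load grows, rather than being oversized on day one.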
LiquidStack CEO Joe Capes emphasised the imperative, stating: 'AI will keep pushing thermal output to new extremes, and data centres need cooling systems that can be easily deployed, managed and scaled to match heat rejection demands as they rise… we designed it to be the only CDU our customers will ever need'. At the heart of the system are high‑efficiency IE5‑rated pumps and dual BPHx heat exchangers, integrated with instrumentation kits for centralised tracking of pressure, temperature and flow.
The modularity is further enhanced by a centralised control module separated from the pump module—an architectural choice aimed at reducing system complexity and reliability risks. Maintenance is simplified as well; the CDU is serviceable from the front, enabling placement flush against walls without sacrificing accessibility.
Available in both skid‑mounted units with pre‑installed rail and overhead piping, or as separate cabinets for on‑site integration, the system accommodates diverse deployment strategies. LiquidStack intends to begin quoting the GigaModular CDU by September 2025, with production at its Carrollton, Texas facility.
Industry analysts emphasise the significance of fully scalable, direct‑to‑chip liquid cooling in meeting AI infrastructure demands. RCR Wireless News highlights the solution's ability to 'future‑proof data centre thermal strategy' amid the transition to powerful GPUs such as Nvidia's upcoming B300 and GB300 lines. Similarly, Facilities Dive reports that the system could cut capital expenditure and floor space by roughly 25 per cent compared with traditional CDU arrangements, while operating in ambient temperatures of up to 122 °F (50 °C).
In recent major deployments, tech giants like Microsoft, Amazon and Meta have outlined plans for gigawatt‑scale data‑centre campuses fuelled by ultra‑dense racks drawing over 1 MW apiece. As server rack densities accelerate, cooling infrastructure must not only follow but anticipate demand. LiquidStack's GigaModular CDU addresses that by enabling modular real‑time expansion rather than upfront oversizing, a shift that can significantly reduce both capital and operational costs.
LiquidStack also continues to diversify its cooling offerings. Alongside single‑phase direct‑to‑chip solutions, the firm supports two‑phase immersion systems tailored for extreme‑density environments and retrofits. These complement its established MacroModular, MicroModular and DataTank offerings, all of which have been deployed for hyperscale, edge and co‑location environments.
The global demand for liquid‑cooling data‑centre infrastructure is projected to grow from approximately $5.17 billion in 2025 to $15.75 billion by 2030. This expansion is driven by the thermal output of AI workloads, energy‑efficiency regulations, and the push for sustainable, water‑efficient operations. Liquid‑cooling offers far higher thermal transfer efficiency than air cooling, dramatically reducing power usage effectiveness and embodied environmental cost.
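Those two projected endpoints imply an annual growth rate of roughly 25 per cent. A back-of-envelope check, using only the figures quoted above:

```python
# Implied compound annual growth rate (CAGR) from the projection above:
# ~$5.17bn in 2025 growing to ~$15.75bn in 2030, i.e. over five years.
start, end, years = 5.17, 15.75, 2030 - 2025
cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # roughly 25% per year
```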
While the GigaModular marks a significant technical milestone, its commercial rollout will be decisive. Lessons from other capital‑intensive systems suggest that long procurement cycles, site readiness, and integration challenges may slow adoption. LiquidStack's provision of pre‑configured skids aims to mitigate such integration hurdles, and its September 2025 quoting timeline aligns with projected first shipments; however, it remains to be seen how quickly hyperscalers and data‑centre operators can adopt the system at scale.
Several broader industry trends support strong uptake potential. Nvidia has forecast that rack power densities will reach 600 kW by 2027, while data‑centre developers are aggressively expanding capacity in response to AI‑compute demand. As sustainability pressure mounts, liquid‑cooling solutions gain favour since they drastically cut cooling‑related energy consumption and can facilitate heat reuse in co‑generation setups.
Risks remain—some operators may hesitate over the upfront investment, and supply‑chain constraints for pump modules or specialised control systems could affect delivery timelines. However, the pay‑as‑you‑grow modular model and forward‑thinking design appear to position LiquidStack favourably within evolving market dynamics.
The GigaModular also contributes to a renewed focus on automation and real‑time telemetry in data‑centre thermal management. Centralised instrumentation provides operators data‑driven insights, while scalable pump architectures decouple deployment capacity from physical footprint constraints—critical in dense, high‑performance computing environments.
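In practice, a centralised instrumentation layer of this kind reduces to comparing live readings of pressure, temperature and flow against an operating envelope. A minimal sketch follows; the field names and threshold values are illustrative assumptions, not GigaModular specifications:

```python
from dataclasses import dataclass

@dataclass
class CDUReading:
    """One snapshot of the three metrics the article mentions."""
    pressure_bar: float
    supply_temp_c: float
    flow_lpm: float  # litres per minute

# Illustrative operating envelope -- not actual GigaModular limits.
LIMITS = {
    "pressure_bar": (1.0, 6.0),
    "supply_temp_c": (15.0, 45.0),
    "flow_lpm": (100.0, 2000.0),
}

def check_reading(r: CDUReading) -> list[str]:
    """Return an alert string for each metric outside its envelope."""
    alerts = []
    for field, (lo, hi) in LIMITS.items():
        value = getattr(r, field)
        if not lo <= value <= hi:
            alerts.append(f"{field}={value} outside [{lo}, {hi}]")
    return alerts

# A supply temperature of 52 °C breaches the assumed 45 °C ceiling:
print(check_reading(CDUReading(pressure_bar=4.2, supply_temp_c=52.0, flow_lpm=850.0)))
```

A real deployment would stream such readings into a monitoring stack rather than polling a single struct, but the threshold comparison at the core is the same.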
LiquidStack's latest manufacturing footprint expansion in Texas underscores its plan to scale production. The company had earlier announced a second manufacturing facility there in March 2025, to support its growing direct‑to‑chip and immersion‑cooling roadmap.
In effect, LiquidStack is framing this launch not merely as a new cooling unit but as a strategic pivot in liquid‑cooling architecture: one that is standardised, modular and adaptable at hyperscale. As AI‑driven compute demand accelerates beyond petascale toward exascale, liquid‑cooling infrastructure must evolve in tandem.