
MiTAC Computing OCP-powered Direct Liquid Cooling Server Named a Finalist at Interop Tokyo 2025
Discover the next-gen C2820Z5, G8825Z5 and G4520G6 AI and HPC servers
CHIBA, Japan, June 12, 2025 /PRNewswire/ -- MiTAC Computing Technology Corporation, a leading server platform designer and manufacturer and a subsidiary of MiTAC Holdings Corporation (TSE:3706), will showcase how it is driving the future of sustainable data centers at Interop Tokyo 2025, one of Asia's largest tech events. Joining forces with partners Graid Technology, INFINITIX and UfiSpace, MiTAC will display its latest AI and HPC servers, liquid cooling solutions, and OCP server rack integration capabilities at Booth 7T28.
Advancing Sustainable Thermal Solutions
Liquid cooling has shifted from a niche option to a baseline requirement for new data centers, especially those supporting heavy AI and HPC workloads. The MiTAC D50DNP1MHCPLC and MiTAC C2820Z5, both featured at Booth 7T28, are standout examples of direct liquid-cooled servers that meet this demand. Reflecting MiTAC's leadership in advancing sustainable thermal solutions, the MiTAC C2820Z5 has been named a Best of Show Award finalist in the server category at Interop Tokyo 2025.
The MiTAC D50DNP1MHCPLC is a density-optimized, half-width 1U liquid-cooled compute module supporting two 4th or 5th Gen Intel Xeon Scalable or Intel Xeon CPU Max Series processors and 16 DDR5 DIMMs. Meanwhile, the MiTAC C2820Z5 is an OCP-powered, high-density 2OU 4-node dual-socket server that delivers high-performance computing while reducing server energy consumption and acoustic noise and improving power utilization efficiency.
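To illustrate why direct liquid cooling matters at the facility level, the following is a minimal sketch of the standard power usage effectiveness (PUE) calculation. The overhead figures are hypothetical placeholders for comparison only, not measured MiTAC data.

```python
# Illustrative PUE (power usage effectiveness) comparison.
# PUE = total facility power / IT equipment power; the ideal value is 1.0.
# The cooling-overhead numbers below are hypothetical examples.

def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """Return facility PUE for a given total draw and IT load."""
    return total_facility_kw / it_load_kw

it_load = 100.0                                   # kW of server load in both scenarios
air_cooled = pue(it_load + 45.0, it_load)         # e.g. CRAC/CRAH fans and chillers
liquid_cooled = pue(it_load + 15.0, it_load)      # e.g. coolant pumps and a dry cooler

print(f"air-cooled PUE:    {air_cooled:.2f}")     # ~1.45
print(f"liquid-cooled PUE: {liquid_cooled:.2f}")  # ~1.15
```

Lower overhead per watt of IT load is also what allows the denser, quieter operation described for the C2820Z5.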
Unveiling Next-gen AI and HPC servers
MiTAC Computing will also display its next-generation AI and HPC server platforms, such as the MiTAC G8825Z5, an 8U powerhouse featuring dual AMD EPYC™ 9005 Series processors and support for up to eight AMD Instinct™ MI325X GPUs, offering up to 6TB of DDR5-6400 memory, ideal for large-scale AI model training and scientific computing.
Also ready for data center deployment is the MiTAC G4520G6, featuring dual Intel Xeon 6700P series processors, eight high-performance GPUs, 32 DDR5-6400 RDIMM slots supporting up to 8TB of memory, and energy-efficient 80 PLUS Titanium-certified power supplies.
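As a rough sanity check on the quoted memory maximums, the sketch below works out the per-module capacity each configuration implies. The 32-slot count for the G4520G6 comes from the text; the 24-DIMM count for the G8825Z5 is an assumption based on dual 12-channel EPYC 9005 processors with one DIMM per channel.

```python
# Back-of-the-envelope memory math for the configurations quoted above
# (using 1 TB = 1024 GB for DIMM sizing).

def gb_per_dimm(total_tb: float, dimm_slots: int) -> float:
    """Capacity each module must provide to reach the quoted maximum."""
    return total_tb * 1024 / dimm_slots

print(f"G4520G6: {gb_per_dimm(8, 32):.0f} GB per RDIMM")  # 32 x 256 GB = 8 TB
print(f"G8825Z5: {gb_per_dimm(6, 24):.0f} GB per RDIMM")  # 24 x 256 GB = 6 TB (assumed 24 slots)
```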
Combining Hardware and Software Solutions
Leveraging the MiTAC G4520G6, INFINITIX will demonstrate its AI-Stack technology, a comprehensive AI infrastructure management platform whose GPU partitioning enables stable parallel multi-tasking on a single GPU as well as efficient cross-node computing integration across multiple GPUs.
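AI-Stack's actual interfaces are not described in this release, so the following is only a conceptual sketch of the idea behind GPU partitioning: carving a physical GPU into fractional shares and packing independent jobs onto them. All names and numbers are hypothetical and do not represent INFINITIX's implementation.

```python
# Toy allocator illustrating fractional GPU partitioning across a small cluster.
from dataclasses import dataclass, field

@dataclass
class Gpu:
    name: str
    free_fraction: float = 1.0          # share of the GPU still unallocated
    jobs: list = field(default_factory=list)

def place(job: str, need: float, gpus: list[Gpu]) -> str:
    """Assign a job to the first GPU with enough free capacity."""
    for gpu in gpus:
        if gpu.free_fraction + 1e-9 >= need:
            gpu.free_fraction -= need
            gpu.jobs.append((job, need))
            return f"{job} -> {gpu.name} ({need:.0%} of the GPU)"
    return f"{job} -> pending (no capacity)"

cluster = [Gpu("node0/gpu0"), Gpu("node0/gpu1")]
for job, share in [("inference-a", 0.25), ("inference-b", 0.25), ("training", 1.0)]:
    print(place(job, share, cluster))
```

The same bookkeeping extended across nodes is, in spirit, what cross-node integration of multiple GPUs requires.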
Leveraging the MiTAC B8261T85E24HR-2T high-performance storage server, Graid Technology will showcase its SupremeRAID™ SR-1010, the world's fastest GPU-accelerated NVMe/NVMe-oF RAID card, designed to eliminate bottlenecks and deliver top-tier performance for AI, ML, and HPC workloads.
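For a sense of the capacity math behind such an NVMe array, here is a minimal sketch of usable capacity at common RAID levels. The 24-drive count is assumed from the "E24" in the server model name and the drive size is an arbitrary example; this does not reflect SupremeRAID's internal algorithms.

```python
# Rough usable-capacity math for an NVMe array at common RAID levels.

def usable_tb(drives: int, drive_tb: float, level: str) -> float:
    """Capacity left after parity/mirror overhead for a single RAID group."""
    overhead = {"raid0": 0, "raid5": 1, "raid6": 2, "raid10": drives // 2}[level]
    return (drives - overhead) * drive_tb

drives, size_tb = 24, 7.68   # assumed bay count and an example drive size
for level in ("raid0", "raid5", "raid6", "raid10"):
    print(f"{level:6s}: {usable_tb(drives, size_tb, level):7.2f} TB usable")
```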
OCP Rack-Level Integration Capabilities
MiTAC Computing has played an active role in the Open Compute Project (OCP) since 2017. At Interop Tokyo 2025, MiTAC has partnered with UfiSpace to showcase its OCP rack-level integration capabilities. Alongside several MiTAC OCP servers, including the MiTAC LE2S01, MiTAC Capri v3 servers and the MiTAC C2810Z5, the display also features the UfiSpace S8901-54XC data center switch. The UfiSpace S8901-54XC is a 1RU OCP-compliant white-box switch built for high-performance data center environments, offering 48×25G SFP28 and 6×100G QSFP28 ports and powered by Broadcom's Trident3-X5 silicon. The S8901-54XC brings benefits such as reduced infrastructure and operational overhead and enhanced system reliability.
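A quick back-of-the-envelope on the quoted port counts gives the switch's aggregate front-panel bandwidth; this is simple arithmetic on the figures above, not a vendor specification.

```python
# Aggregate front-panel bandwidth implied by the quoted S8901-54XC port counts.
ports = {25: 48, 100: 6}  # Gbps speed -> port count (48x25G SFP28, 6x100G QSFP28)

total_ports = sum(ports.values())
total_gbps = sum(speed * count for speed, count in ports.items())

print(f"total ports: {total_ports}")                    # 54, hence the "-54" in the name
print(f"aggregate:   {total_gbps} Gbps per direction")  # 1,200 + 600 = 1,800 Gbps
```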
As part of the OCP ecosystem, UfiSpace and MiTAC Computing both enable greater interoperability and support the transition to open, disaggregated network architectures – the data centers of tomorrow.
Related Articles


Korea Herald, 6 days ago
MSI Showcases DC-MHS and MGX Server Platforms for Cloud-Scale and AI Infrastructure at OCP APAC 2025
TAIPEI, Aug. 5, 2025 /PRNewswire/ -- At OCP APAC 2025 (Booth S04), MSI, a global leader in high-performance server solutions, presents modular server platforms for modern data center needs. The lineup includes AMD DC-MHS servers for 21" ORv3 and 19" EIA racks, built for scalable and energy-efficient cloud infrastructure, and an NVIDIA MGX-based GPU server optimized for high-density AI workloads such as LLM training and inference. These platforms demonstrate how MSI is powering what's next in computing with open, modular, and workload-optimized infrastructure.

"Open and modular infrastructure is shaping the future of compute. With OCP-aligned and MGX-based platforms, MSI helps customers reduce complexity, accelerate scale-out, and prepare for the demands of cloud-native and AI-driven environments," said Danny Hsu, General Manager of MSI's Enterprise Platform Solutions.

AMD DC-MHS Platforms for Cloud Infrastructure

Powered by a single AMD EPYC™ 9005 processor and up to 12 DDR5 DIMM slots per node, MSI's DC-MHS open compute and core compute platforms deliver strong compute performance and high memory bandwidth to meet the demands of data-intensive and parallel workloads. Built on the modular OCP DC-MHS architecture and equipped with DC-SCM2 management modules, these systems offer cross-vendor interoperability, streamlined integration, and easier serviceability, ideal for modern, scalable infrastructure in hyperscale and cloud environments.

The CD281-S4051-X2 targets 21" ORv3 rack deployments with 48Vdc power, featuring a 2OU 2-node design and EVAC cooling that supports up to 500W TDP per node. With 12 E3.S PCIe 5.0 NVMe bays per node, it offers high-density, front-access storage for throughput-heavy applications.

The CD270-S4051-X4 fits into a standard 2U 4-node 19" EIA chassis, maximizing compute density for environments with limited rack space. Supporting up to 400W air-cooled or 500W liquid-cooled CPUs, and equipped with front-access U.2 NVMe bays, it's built for flexible deployment across general-purpose and scale-out workloads.

Built on the NVIDIA MGX modular architecture, the CG480-S5063 is optimized for large-scale AI workloads with a 2:8:5 CPU:GPU:NIC topology. It supports dual Intel® Xeon® 6 processors and up to eight 600W FHFL dual-width GPUs, including NVIDIA H200 NVL and RTX PRO 6000 Blackwell Server Edition. With 32 DDR5 DIMMs and 20 PCIe 5.0 E1.S bays, it delivers high compute density, fast storage, and modular scalability for next-gen AI infrastructure.


Korea Herald, July 24, 2025
DIGITIMES ASIA: Meta-backed OCP pushes open alternative to closed AI hardware ecosystem
TAIPEI, July 24, 2025 /PRNewswire/ -- As surging demand threatens to fragment the supply chain into costly proprietary silos, the Open Compute Project, a nonprofit organization that has standardized data center hardware for tech giants like Meta and Microsoft, has positioned itself as the antidote to a looming crisis in AI infrastructure, with CEO George Tchaparian pointing out that open hardware standards will prevent market fragmentation as AI infrastructure demands explode.

Tchaparian warned that AI's unique requirements are pushing the industry toward dangerous specialization. Without coordinated standards, he argued, the race to build ever-larger AI clusters could splinter the market and drive up costs for everyone. "AI workloads are different, much more so than other virtualized and cloud-native applications," Tchaparian said in a recent interview. "In the pursuit of greater performance, infrastructure specialization for some workload categories is burgeoning, but risks fracturing the supply chain into silos."

The stakes are enormous. Current projections suggest AI and high-performance computing buildouts between 2024 and 2028 will push data center power consumption to "dangerously high levels," according to Tchaparian. Annual carbon emissions from these facilities are expected to grow exponentially, creating what he calls a concerning impact on humanity.

The Chiplet Gambit

OCP's answer lies in what Tchaparian calls the "Open Chiplet Economy," a standardized marketplace where semiconductor components can be mixed and matched like Lego blocks. The organization has launched a dedicated marketplace section featuring over 25 chiplet suppliers, aiming to create the industry's first truly interoperable silicon ecosystem. "The next inflection point for the silicon supply chain is open," Tchaparian said. "Developing an open stand-alone chiplet silicon supply chain will require a rethink of the supply chain."

The approach mirrors OCP's successful standardization of server hardware over the past decade, which helped hyperscalers slash costs and accelerate innovation. Now, with AI clusters demanding unprecedented compute density, including proposed 1-megawatt racks, the organization is betting that open standards can prevent the kind of vendor lock-in that has historically plagued enterprise technology.

The strategy carries particular significance for the Asia-Pacific region, where Taiwan's semiconductor manufacturing dominance intersects with surging AI infrastructure demand. OCP counts over 130 APAC members and is preparing for its 2025 summit in Taipei, designed to address regional supply chain challenges. "In APAC, a manufacturing hub, we're engaging through collaborations with local organizations such as ITRI and IOWN," Tchaparian said, referring to Taiwan's Industrial Technology Research Institute and Japan's Innovative Optical and Wireless Network initiative.

The timing appears strategic. As geopolitical tensions reshape global technology supply chains, OCP's open approach offers an alternative to the proprietary ecosystems that have dominated AI development. Major players, including Nvidia, Intel, and AMD, are participating in the organization's "Open Systems for AI" initiative, launched in January 2024.

Beyond cost and compatibility, Tchaparian frames open standards as essential for addressing AI's environmental impact. OCP has established formal partnerships with organizations like iMasons to develop standardized methods for reporting carbon emissions in IT equipment production, information intended to influence purchasing decisions. "We must reduce the environmental impact of today's computational infrastructure," he said. The organization is even funding research into reducing concrete's carbon footprint, a major source of emissions in data center construction.

For companies navigating this transformation, Tchaparian offers straightforward advice: "Eighty percent of success is just showing up." Even without formal membership, organizations can participate in OCP's collaborative development process, gaining access to cutting-edge designs and influence over future standards.

The test will come as AI infrastructure demands continue their exponential growth. If OCP succeeds in creating truly interoperable standards, it could prevent the kind of market fragmentation that has historically driven up technology costs. If it fails, the AI boom may leave the industry more fractured, and more expensive, than ever.

The 2025 OCP APAC Summit will be held in Taipei, featuring discussions on next-generation data center infrastructure and emerging technologies.

