
Latest news with #NVIDIAMGX

From Rack Integration to AI and Cloud Systems: MSI Debuts Full-Spectrum Server Portfolio at COMPUTEX 2025

Yahoo

20-05-2025

  • Business
  • Yahoo

From Rack Integration to AI and Cloud Systems: MSI Debuts Full-Spectrum Server Portfolio at COMPUTEX 2025

TAIPEI, May 19, 2025 /PRNewswire/ -- MSI, a global leader in high-performance server solutions, returns to COMPUTEX 2025 (Booth #J0506) with its most comprehensive lineup yet. Showcasing rack-level integration, modular cloud infrastructure, AI-optimized GPU systems, and enterprise server platforms, MSI presents fully integrated EIA, OCP ORv3, and NVIDIA MGX racks, DC-MHS-based Core Compute servers, and the new NVIDIA DGX Station. Together, these systems underscore MSI's growing capability to deliver deployment-ready, workload-tuned infrastructure across hyperscale, cloud, and enterprise environments.

"The future of data infrastructure is modular, open, and workload-optimized," said Danny Hsu, General Manager of MSI's Enterprise Platform Solutions. "At COMPUTEX 2025, we're showing how MSI is evolving into a full-stack server provider, delivering integrated platforms that help our customers scale AI, cloud, and enterprise deployments with greater efficiency and flexibility."

Full-Rack Integration from Cloud to AI Data Centers

MSI demonstrates its rack-level integration expertise with fully configured EIA 19", OCP ORv3 21", and NVIDIA MGX-powered AI racks, engineered to power modern infrastructure from cloud-native compute to AI-optimized deployments. Pre-integrated and thermally optimized, each rack is deployment-ready and tuned for specific workloads. Together, they highlight MSI's capability to deliver complete, workload-optimized infrastructure from design to deployment.

The EIA rack delivers dense compute for private cloud and virtualization environments, integrating core infrastructure in a standard 19" format. The OCP ORv3 rack features a 21" open chassis, enabling higher compute and storage density, efficient 48V power delivery, and OpenBMC-compatible management, ideal for hyperscale and software-defined data centers. The enterprise AI rack with NVIDIA MGX, built on the NVIDIA Enterprise Reference Architecture, enables scalable GPU infrastructure for AI and HPC. Featuring modular units and high-throughput networking powered by NVIDIA Spectrum-X, it supports multi-node scalable unit deployments optimized for large-scale training, inference, and hybrid workloads.

Core Compute and Open Compute Servers for Modular Cloud Infrastructure

MSI expands its Core Compute lineup with six DC-MHS servers powered by AMD EPYC 9005 Series and Intel Xeon 6 processors in 2U4N and 2U2N configurations. Designed for scalable cloud deployments, the portfolio includes high-density nodes with liquid or air cooling and compact systems optimized for power and space efficiency. With support for OCP DC-SCM, PCIe 5.0, and DDR5 DRAM, these servers enable modular, cross-platform integration and simplified management across private, hybrid, and edge cloud environments.

To further enhance Open Compute deployment flexibility, MSI introduces the CD281-S4051-X2, a 2OU 2-Node ORv3 Open Compute server based on the DC-MHS architecture. Optimized for hyperscale cloud infrastructure, it supports a single AMD EPYC 9005 processor per node, offers high storage density with twelve E3.S NVMe bays per node, and integrates efficient 48V power delivery and OpenBMC-compatible management, making it ideal for software-defined and power-conscious cloud environments.

AMD EPYC 9005 Series Processor-Based Platforms for Dense Virtualization and Scale-Out Workloads

CD270-S4051-X4 (Liquid Cooling): A liquid-cooled 2U 4-Node server supporting up to 500W TDP. Each node features 12 DDR5 DIMM slots and 2 U.2 NVMe drive bays, ideal for dense compute in thermally constrained cloud deployments.

CD270-S4051-X4 (Air Cooling): This air-cooled 2U 4-Node system supports up to 400W TDP and delivers energy-efficient compute, with 12 DDR5 DIMM slots and 3 U.2 NVMe bays per node. Designed for virtualization, container hosting, and private cloud clusters.

CD270-S4051-X2: A 2U 2-Node server optimized for space efficiency and compute density. Each node includes 12 DDR5 DIMM slots and 6 U.2 NVMe bays, making it suitable for general-purpose virtualization and edge cloud nodes.

Intel Xeon 6 Processor-Based Platforms for Containerized and General-Purpose Cloud Services

CD270-S3061-X4: A 2U 4-Node Intel Xeon 6700/6500 server supporting 16 DDR5 DIMM slots and 3 U.2 NVMe bays per node. Ideal for containerized services and mixed cloud workloads requiring balanced compute density.

CD270-S3061-X2: This compact 2U 2-Node Intel Xeon 6700/6500 system features 16 DDR5 DIMM slots and 6 U.2 NVMe bays per node, delivering strong compute and storage capabilities for core infrastructure and scalable cloud services.

CD270-S3071-X2: A 2U 2-Node Intel Xeon 6900 system designed for I/O-heavy workloads, with 12 DDR5 DIMM slots and 6 U.2 bays per node. Suitable for storage-centric and data-intensive applications in the cloud.

AI Platforms with NVIDIA MGX and DGX Station for AI Deployment

MSI presents a comprehensive lineup of AI-ready platforms, including NVIDIA MGX-based servers and the DGX Station built on NVIDIA Grace and Blackwell architecture. The MGX lineup spans 4U and 2U form factors optimized for high-density AI training and inference, while the DGX Station delivers datacenter-class performance in a desktop chassis for on-premises model development and edge AI deployment.

AI Platforms with NVIDIA MGX

CG480-S5063 (Intel) / CG480-S6053 (AMD): The 4U MGX GPU server is available in two CPU configurations, the CG480-S5063 with dual Intel Xeon 6700/6500 processors and the CG480-S6053 with dual AMD EPYC 9005 Series processors, offering flexibility across CPU ecosystems. Both systems support up to 8 FHFL dual-width PCIe 5.0 GPUs in air-cooled datacenter environments, making them ideal for deep learning training, generative AI, and high-throughput inference. The Intel-based CG480-S5063 features 32 DDR5 DIMM slots and supports up to 20 front E1.S NVMe bays, ideal for memory- and I/O-intensive deep learning pipelines, including large-scale LLM workloads, NVIDIA OVX, and digital twin simulations.

CG290-S3063: A compact 2U MGX server powered by a single Intel Xeon 6700/6500 processor, supporting 16 DDR5 DIMM slots and 4 FHFL dual-width GPU slots. Designed for edge inferencing and lightweight AI training, it suits space-constrained deployments where inference latency and power efficiency are key.

DGX Station

The CT60-S8060 is a high-performance AI station built on the NVIDIA GB300 Grace Blackwell Ultra Desktop Superchip, delivering up to 20 PFLOPS of AI performance and 784GB of unified memory. It also features the NVIDIA ConnectX-8 SuperNIC, enabling up to 800Gb/s networking for high-speed data transfer and multi-node scaling. Designed for on-prem model training and inferencing, the system supports multi-user workloads and can operate as a standalone AI workstation or a centralized compute resource for R&D teams.

SOURCE MSI
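As a rough illustration (not part of the release), the per-node figures quoted above can be rolled up to per-chassis totals for the 2U4N and 2U2N Core Compute systems. The sketch below is a minimal Python example; the NodeConfig type and LINEUP list are our own illustrative names, and the chassis-level sums are simple arithmetic derived from the release rather than vendor-published specifications.

```python
# Illustrative sketch: per-chassis totals for MSI's Core Compute lineup,
# using the per-node figures quoted in the release above.
# The chassis-level sums are derived arithmetic, not vendor-published specs.

from dataclasses import dataclass


@dataclass
class NodeConfig:
    model: str
    nodes_per_chassis: int    # 4 for a 2U4N system, 2 for a 2U2N system
    dimm_slots_per_node: int
    nvme_bays_per_node: int


LINEUP = [
    NodeConfig("CD270-S4051-X4 (liquid)", 4, 12, 2),
    NodeConfig("CD270-S4051-X4 (air)",    4, 12, 3),
    NodeConfig("CD270-S4051-X2",          2, 12, 6),
    NodeConfig("CD270-S3061-X4",          4, 16, 3),
    NodeConfig("CD270-S3061-X2",          2, 16, 6),
    NodeConfig("CD270-S3071-X2",          2, 12, 6),
]

for cfg in LINEUP:
    total_dimms = cfg.nodes_per_chassis * cfg.dimm_slots_per_node
    total_nvme = cfg.nodes_per_chassis * cfg.nvme_bays_per_node
    print(f"{cfg.model:26s} {total_dimms:3d} DIMM slots, "
          f"{total_nvme:2d} NVMe bays per 2U chassis")
```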

MSI launches scalable AI server solutions with NVIDIA technology

Techday NZ

19-05-2025

  • Business
  • Techday NZ

MSI launches scalable AI server solutions with NVIDIA technology

MSI has introduced new AI server solutions using NVIDIA MGX and NVIDIA DGX Station reference architectures designed to support the expanding requirements of enterprise, HPC, and accelerated computing workloads. The company's new server platforms feature modular and scalable building blocks aimed at addressing increasing AI demands in both enterprise and cloud data centre environments.

Danny Hsu, General Manager of Enterprise Platform Solutions at MSI, said, "AI adoption is transforming enterprise data centers as organizations move quickly to integrate advanced AI capabilities. With the explosive growth of generative AI and increasingly diverse workloads, traditional servers can no longer keep pace. MSI's AI solutions, built on the NVIDIA MGX and NVIDIA DGX Station reference architectures, deliver the scalability, flexibility, and performance enterprises need to future-proof their infrastructure and accelerate their AI innovation."

One of the main highlights is a rack solution based on the NVIDIA Enterprise Reference Architecture, comprising a four-node scalable unit constructed on the MSI AI server utilising NVIDIA MGX. Each server in this solution contains eight NVIDIA H200 NVL GPUs, further enhanced by the NVIDIA Spectrum-X networking platform to enable scalable AI workloads. This modular setup provides the capability to expand to a maximum of 32 server systems, meaning up to 256 NVIDIA H200 NVL GPUs can be supported within a single deployment. MSI states that this architecture is optimised for multi-node AI and hybrid applications and is designed to support complex computational tasks expected in the latest data centre operations. It is built to accommodate a range of use cases, including those leveraging large language models and other demanding AI workloads.

The AI server platforms have been constructed using the NVIDIA MGX modular architecture, establishing a foundation for accelerated computing in AI, HPC, and NVIDIA Omniverse contexts. The MSI 4U AI server provides configuration options using either Intel or AMD CPUs, aimed at large-scale AI projects such as deep learning training and model fine-tuning. The CG480-S5063 platform features dual Intel Xeon 6 processors and eight full-height, full-length dual-width GPU slots that support the NVIDIA H200 NVL and NVIDIA RTX PRO 6000 Blackwell Server Edition, with power capacities up to 600W. It offers 32 DDR5 DIMM slots and twenty PCIe 5.0 E1.S NVMe bays for high memory bandwidth and rapid data access, with its modular design supporting both storage needs and scalability.

Another server, the CG290-S3063, is a 2U AI platform also constructed on the NVIDIA MGX architecture. It includes a single-socket Intel Xeon 6 processor, 16 DDR5 DIMM slots, and four GPU slots with up to 600W capacity. The CG290-S3063 incorporates PCIe 5.0 expansion, four rear 2.5-inch NVMe bays, and two M.2 NVMe slots to provide support for various AI tasks, from smaller-scale inference to extensive AI training workloads.

MSI's server platforms have been designed for deployment within enterprise-grade AI environments, offering support for the NVIDIA Enterprise AI Factory validated design. This structure provides enterprises with guidance in developing, deploying, and managing AI, including agentic AI and physical AI, as well as high-performance computing tasks on the NVIDIA Blackwell platform using their own infrastructure. The validated design combines accelerated computing, networking, storage, and software components for faster deployment and risk mitigation in AI factory roll-outs.

MSI is also presenting the AI Station CT60-S8060, a workstation built on the NVIDIA DGX Station reference, with components designed to enable data centre-grade AI performance from a desktop environment. This includes the NVIDIA GB300 Grace Blackwell Ultra Desktop Superchip and up to 784GB of coherent memory, intended to boost large-scale training and inference. The solution is targeted at teams requiring a high-performance desktop AI development environment and integrates the NVIDIA AI Enterprise software stack for system capability management.
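The scaling figures above reduce to simple multiplication: 8 GPUs per MGX server, 4 servers per scalable unit, and a quoted maximum of 32 servers per deployment. A minimal Python sketch of that arithmetic, using only the numbers stated in the article (the function name is our own):

```python
# Minimal sketch of the scaling arithmetic described in the article:
# 8 H200 NVL GPUs per MGX server, 4 servers per scalable unit,
# and a maximum of 32 servers in a single deployment.

GPUS_PER_SERVER = 8
SERVERS_PER_SCALABLE_UNIT = 4
MAX_SERVERS = 32


def gpus_for(servers: int) -> int:
    """Total GPUs for a given number of MGX servers."""
    return servers * GPUS_PER_SERVER


print(gpus_for(SERVERS_PER_SCALABLE_UNIT))  # 32 GPUs in one 4-server scalable unit
print(gpus_for(MAX_SERVERS))                # 256 GPUs at the 32-server maximum
```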

MiTAC Computing Launches the Latest Scale-out AI Server G4527G6 by NVIDIA MGX at COMPUTEX 2025

Korea Herald

19-05-2025

  • Business
  • Korea Herald

MiTAC Computing Launches the Latest Scale-out AI Server G4527G6 by NVIDIA MGX at COMPUTEX 2025

Featuring Next-Gen NVIDIA SuperNIC

TAIPEI, May 19, 2025 /PRNewswire/ -- MiTAC Computing Technology Corporation, a leading server platform designer and manufacturer and a subsidiary of MiTAC Holdings Corporation (TSE:3706), will present its latest innovations in AI infrastructure at COMPUTEX 2025. At booth M1110, MiTAC Computing will display its next-level AI server platform, the MiTAC G4527G6, fully optimized for the NVIDIA MGX architecture, which supports NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs and the NVIDIA H200 NVL platform to address the evolving demands of enterprise AI workloads.

Next-Gen AI with High-Performance Computing

With the increasing adoption of generative AI and accelerated computing, MiTAC Computing introduces the latest NVIDIA MGX-based server solution, the MiTAC G4527G6, designed to support complex AI and high-performance computing (HPC) workloads. Built on Intel® Xeon® 6 processors, the G4527G6 accommodates up to eight NVIDIA GPUs, 8TB of DDR5-6400 memory, sixteen hot-swappable E1.S drives, and an NVIDIA BlueField-3 DPU for efficient north-south connectivity. Crucially, it integrates four next-generation NVIDIA ConnectX-8 SuperNICs, delivering up to 800 gigabits per second (Gb/s) of NVIDIA InfiniBand and Ethernet networking, significantly enhancing system performance for AI factories and cloud data center environments.

As a key part of NVIDIA's AI networking portfolio, the NVIDIA ConnectX-8 SuperNIC delivers robust and scalable connectivity with advanced congestion control and In-Network Computing via NVIDIA SHARP, optimizing throughput for training, inference, and trillion-parameter AI workloads in sustainable, GPU-dense environments.

Powering the NVIDIA Enterprise AI Factory with Scalable Infrastructure

As data centers become the modern computers of the world, MiTAC Computing stands alongside NVIDIA in building enterprise AI factories with an on-premises, full-stack platform optimized for next-gen enterprise AI. MiTAC Computing's G4527G6 AI server is a standout example built on the modular NVIDIA MGX architecture, delivering over 100 customizable configurations to accelerate AI factories. The MiTAC G4527G6 RTX PRO Blackwell server integrates NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs, part of the new NVIDIA Enterprise AI Factory validated design, or NVIDIA H200 NVL GPUs, which deliver up to 1.8X faster LLM inference and 1.3X improved HPC performance over the previous generation. This robust configuration is designed to support a wide range of AI-enabled enterprise applications, agentic and physical AI workflows, autonomous decision-making, and real-time data analysis, laying the foundation for the intelligent enterprises of tomorrow.

Join MiTAC Computing at COMPUTEX 2025 – Booth M1110

Preview our COMPUTEX 2025 new launches:

About MiTAC Computing Technology Corporation

MiTAC Computing Technology Corp., a subsidiary of MiTAC Holdings, delivers comprehensive, energy-efficient server solutions backed by industry expertise dating back to the 1990s. Specializing in AI, HPC, cloud, and edge computing, MiTAC Computing employs rigorous methods to ensure uncompromising quality not just at the barebone level but, more importantly, at the system and rack levels, where true performance and integration matter most. This commitment to quality at every level sets MiTAC Computing apart from others in the industry. The company provides tailored platforms for hyperscale data centers, HPC, and AI applications, guaranteeing optimal performance and scalability. With a global presence and end-to-end capabilities, from R&D and manufacturing to global support, MiTAC Computing offers flexible, high-quality solutions designed to meet unique business needs. Leveraging the latest advancements in AI and liquid cooling, along with the recent integration of Intel DSG and TYAN server products, MiTAC Computing stands out for its innovation, efficiency, and reliability, empowering businesses to tackle future challenges.
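As a back-of-the-envelope illustration (not from the release), the networking figures quoted above can be combined as follows. This Python sketch assumes the 800 Gb/s figure applies to each of the four ConnectX-8 SuperNICs rather than to the system as a whole, which is our reading of the release and not stated explicitly there; the variable names are our own.

```python
# Back-of-the-envelope sketch for the G4527G6 networking figures quoted above.
# Assumption (ours, not explicit in the release): 800 Gb/s is the per-SuperNIC
# rate, and the system is populated with its maximum of eight GPUs.

SUPERNICS = 4
GBPS_PER_SUPERNIC = 800   # assumed per-NIC line rate
GPUS = 8                  # maximum GPU count quoted for the system

aggregate_gbps = SUPERNICS * GBPS_PER_SUPERNIC
aggregate_gbytes_per_s = aggregate_gbps / 8   # convert bits to bytes
gbps_per_gpu = aggregate_gbps / GPUS

print(f"Aggregate fabric bandwidth: {aggregate_gbps} Gb/s "
      f"(~{aggregate_gbytes_per_s:.0f} GB/s)")
print(f"Per-GPU share at {GPUS} GPUs: {gbps_per_gpu:.0f} Gb/s")
```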


Compal to Unveil Next-Generation AI-HPC NVIDIA MGX-Based Servers at GTC 2025

Associated Press

14-03-2025

  • Business
  • Associated Press

Compal to Unveil Next-Generation AI-HPC NVIDIA MGX-Based Servers at GTC 2025

SAN JOSE, Calif., March 14, 2025 /PRNewswire/ -- Compal Electronics (Compal), a global leader in IT and computing solutions, is set to participate in NVIDIA GTC 2025, where it will showcase its latest advancements in AI-HPC GPU server solutions. Taking place on March 18, 2025, at the San Jose Convention Center, GTC 2025 is the premier platform for breakthrough AI, high-performance computing, and enterprise innovations.

At this highly anticipated event, Compal will unveil several high-performance GPU servers focused on optimizing AI training, accelerating data center operations, and supporting large-scale HPC applications. Attendees will have the opportunity to preview GPU servers based on the NVIDIA MGX architecture, along with other high-density computing solutions designed for next-generation computing environments. In addition, Compal has already started shipping servers equipped with the NVIDIA GH200 Grace Hopper™ Superchip, with expectations for larger shipment volumes in the future, reflecting strong market demand and confidence.

Compal stated, 'GTC 2025 is the ideal platform to showcase our latest AI-HPC innovations, and we are honored to collaborate with industry-leading partners, including power solutions expert AcBel Polytech Inc., chassis leader Chenbro, software development pioneer Infinitix, globally renowned memory manufacturer Samsung Electronics, and ZutaCore®, a leader in zero-emissions data center cooling with HyperCool® technology.' Additionally, Compal's partners will present keynote speeches at the Compal booth, sharing insights into cutting-edge technologies and innovative solutions, as well as exploring future trends in high-performance computing.

Expanding AI and 5G Integration

Leveraging its deep expertise in 5G RF, the PHY layer, and Open Radio Access Network (O-RAN), Compal continues to innovate in wireless communication technology. Recently, Compal launched the AI server SX220-1N, which integrates NVIDIA Aerial with 5G base stations to create an AI RAN solution. By utilizing AI technology, this solution enhances network intelligence and efficiency, enabling cognitive automated testing for 5G systems, improving spectrum utilization, and expanding coverage, ultimately reducing operational costs.

As AI applications continue to drive industry transformation, Compal is dedicated to building cutting-edge GPU server architectures to help businesses efficiently handle compute-intensive workloads. Attendees at GTC 2025 will have the opportunity to explore Compal's latest breakthroughs and gain insights into next-generation AI infrastructure. In addition to debuting at GTC 2025, Compal will also showcase its advanced GPU servers at the CloudFest event in Europe, highlighting its global innovation in AI and HPC solutions. For European audiences who are unable to attend GTC 2025, CloudFest offers an alternative opportunity to explore Compal's latest AI and HPC server technologies.

About Compal

Founded in 1984, Compal is a leading manufacturer in the notebook and smart device industry, creating brand value in collaboration with various sectors. Its groundbreaking product designs have received numerous international awards. In 2024, Compal was recognized by CommonWealth Magazine as one of Taiwan's top 6 manufacturers and has consistently ranked among the Forbes Global 2000 and Fortune Global 500 companies. In recent years, Compal has actively developed emerging businesses, including cloud servers, automotive electronics, and smart medical, leveraging its integrated hardware and software R&D and manufacturing capabilities to create relevant solutions. For more information on Compal servers, please visit
