
Latest news with #DannyHsu

From Rack Integration to AI and Cloud Systems: MSI Debuts Full-Spectrum Server Portfolio at COMPUTEX 2025

Yahoo

20-05-2025



TAIPEI, May 19, 2025 /PRNewswire/ -- MSI, a global leader in high-performance server solutions, returns to COMPUTEX 2025 (Booth #J0506) with its most comprehensive lineup yet. Showcasing rack-level integration, modular cloud infrastructure, AI-optimized GPU systems, and enterprise server platforms, MSI presents fully integrated EIA, OCP ORv3, and NVIDIA MGX racks, DC-MHS-based Core Compute servers, and the new NVIDIA DGX Station. Together, these systems underscore MSI's growing capability to deliver deployment-ready, workload-tuned infrastructure across hyperscale, cloud, and enterprise environments.

"The future of data infrastructure is modular, open, and workload-optimized," said Danny Hsu, General Manager of MSI's Enterprise Platform Solutions. "At COMPUTEX 2025, we're showing how MSI is evolving into a full-stack server provider, delivering integrated platforms that help our customers scale AI, cloud, and enterprise deployments with greater efficiency and flexibility."

Full-Rack Integration from Cloud to AI Data Centers

MSI demonstrates its rack-level integration expertise with fully configured EIA 19", OCP ORv3 21", and NVIDIA MGX-powered AI racks, engineered to power modern infrastructure, from cloud-native compute to AI-optimized deployments. Pre-integrated and thermally optimized, each rack is deployment-ready and tuned for specific workloads. Together, they highlight MSI's capability to deliver complete, workload-optimized infrastructure from design to deployment.

The EIA rack delivers dense compute for private cloud and virtualization environments, integrating core infrastructure in a standard 19" format. The OCP ORv3 rack features a 21" open chassis, enabling higher compute and storage density, efficient 48V power delivery, and OpenBMC-compatible management, ideal for hyperscale and software-defined data centers.
The enterprise AI rack with NVIDIA MGX, built on the NVIDIA Enterprise Reference Architecture, enables scalable GPU infrastructure for AI and HPC. Featuring modular units and high-throughput networking powered by NVIDIA Spectrum™-X, it supports multi-node scalable unit deployments optimized for large-scale training, inference, and hybrid workloads.

Core Compute and Open Compute Servers for Modular Cloud Infrastructure

MSI expands its Core Compute lineup with six DC-MHS servers powered by AMD EPYC 9005 Series and Intel Xeon 6 processors in 2U4N and 2U2N configurations. Designed for scalable cloud deployments, the portfolio includes high-density nodes with liquid or air cooling and compact systems optimized for power and space efficiency. With support for OCP DC-SCM, PCIe 5.0, and DDR5 DRAM, these servers enable modular, cross-platform integration and simplified management across private, hybrid, and edge cloud environments.

To further enhance Open Compute deployment flexibility, MSI introduces the CD281-S4051-X2, a 2OU 2-Node ORv3 Open Compute server based on DC-MHS architecture. Optimized for hyperscale cloud infrastructure, it supports a single AMD EPYC 9005 processor per node, offers high storage density with twelve E3.S NVMe bays per node, and integrates efficient 48V power delivery and OpenBMC-compatible management, making it ideal for software-defined and power-conscious cloud environments.

AMD EPYC 9005 Series Processor-Based Platform for Dense Virtualization and Scale-Out Workloads

CD270-S4051-X4 (Liquid Cooling)
A liquid-cooled 2U 4-Node server supporting up to 500W TDP. Each node features 12 DDR5 DIMM slots and 2 U.2 NVMe drive bays, ideal for dense compute in thermally constrained cloud deployments.

CD270-S4051-X4 (Air Cooling)
This air-cooled 2U 4-Node system supports up to 400W TDP and delivers energy-efficient compute, with 12 DDR5 DIMM slots and 3 U.2 NVMe bays per node. Designed for virtualization, container hosting, and private cloud clusters.
CD270-S4051-X2
A 2U 2-Node server optimized for space efficiency and compute density. Each node includes 12 DDR5 DIMM slots and 6 U.2 NVMe bays, making it suitable for general-purpose virtualization and edge cloud nodes.

Intel Xeon 6 Processor-Based Platform for Containerized and General-Purpose Cloud Services

CD270-S3061-X4
A 2U 4-Node Intel Xeon 6700/6500 server supporting 16 DDR5 DIMM slots and 3 U.2 NVMe bays per node. Ideal for containerized services and mixed cloud workloads requiring balanced compute density.

CD270-S3061-X2
This compact 2U 2-Node Intel Xeon 6700/6500 system features 16 DDR5 DIMM slots and 6 U.2 NVMe bays per node, delivering strong compute and storage capabilities for core infrastructure and scalable cloud services.

CD270-S3071-X2
A 2U 2-Node Intel Xeon 6900 system designed for I/O-heavy workloads, with 12 DDR5 DIMM slots and 6 U.2 bays per node. Suitable for storage-centric and data-intensive applications in the cloud.

AI Platforms with NVIDIA MGX & DGX Station for AI Deployment

MSI presents a comprehensive lineup of AI-ready platforms, including NVIDIA MGX-based servers and the DGX Station built on NVIDIA Grace and Blackwell architecture. The MGX lineup spans 4U and 2U form factors optimized for high-density AI training and inference, while the DGX Station delivers datacenter-class performance in a desktop chassis for on-premises model development and edge AI deployment.

AI Platforms with NVIDIA MGX

CG480-S5063 (Intel) / CG480-S6053 (AMD)
The 4U MGX GPU server is available in two CPU configurations: the CG480-S5063 with dual Intel Xeon 6700/6500 processors, and the CG480-S6053 with dual AMD EPYC 9005 Series processors, offering flexibility across CPU ecosystems.
Both systems support up to 8 FHFL dual-width PCIe 5.0 GPUs in air-cooled datacenter environments, making them ideal for deep learning training, generative AI, and high-throughput inference. The Intel-based CG480-S5063 features 32 DDR5 DIMM slots and supports up to 20 front E1.S NVMe bays, ideal for memory- and I/O-intensive deep learning pipelines, including large-scale LLM workloads, NVIDIA OVX™, and digital twin simulations.

CG290-S3063
A compact 2U MGX server powered by a single Intel Xeon 6700/6500 processor, supporting 16 DDR5 DIMM slots and 4 FHFL dual-width GPU slots. Designed for edge inferencing and lightweight AI training, it suits space-constrained deployments where inference latency and power efficiency are key.

DGX Station

The CT60-S8060 is a high-performance AI station built on the NVIDIA GB300 Grace Blackwell Ultra Desktop Superchip, delivering up to 20 PFLOPS of AI performance and 784GB of unified memory. It also features the NVIDIA ConnectX-8 SuperNIC, enabling up to 800Gb/s networking for high-speed data transfer and multi-node scaling. Designed for on-prem model training and inferencing, the system supports multi-user workloads and can operate as a standalone AI workstation or a centralized compute resource for R&D teams.

SOURCE MSI

MSI launches scalable AI server solutions with NVIDIA technology

Techday NZ

19-05-2025



MSI has introduced new AI server solutions using NVIDIA MGX and NVIDIA DGX Station reference architectures designed to support the expanding requirements of enterprise, HPC, and accelerated computing workloads. The company's new server platforms feature modular and scalable building blocks aimed at addressing increasing AI demands in both enterprise and cloud data centre environments.

Danny Hsu, General Manager of Enterprise Platform Solutions at MSI, said, "AI adoption is transforming enterprise data centers as organizations move quickly to integrate advanced AI capabilities. With the explosive growth of generative AI and increasingly diverse workloads, traditional servers can no longer keep pace. MSI's AI solutions, built on the NVIDIA MGX and NVIDIA DGX Station reference architectures, deliver the scalability, flexibility, and performance enterprises need to future-proof their infrastructure and accelerate their AI innovation."

One of the main highlights is a rack solution based on the NVIDIA Enterprise Reference Architecture, comprising a four-node scalable unit constructed on the MSI AI server utilising NVIDIA MGX. Each server in this solution contains eight NVIDIA H200 NVL GPUs, further enhanced by the NVIDIA Spectrum-X networking platform to enable scalable AI workloads. This modular setup provides the capability to expand to a maximum of 32 server systems, meaning up to 256 NVIDIA H200 NVL GPUs can be supported within a single deployment.

MSI states that this architecture is optimised for multi-node AI and hybrid applications and is designed to support complex computational tasks expected in the latest data centre operations. It is built to accommodate a range of use cases, including those leveraging large language models and other demanding AI workloads. The AI server platforms have been constructed using the NVIDIA MGX modular architecture, establishing a foundation for accelerated computing in AI, HPC, and NVIDIA Omniverse contexts.
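The scaling arithmetic above (a four-server scalable unit, eight H200 NVL GPUs per server, expandable to 32 servers) can be sketched as a small calculation. This is an illustrative example only, not an MSI or NVIDIA tool; the function name and limits are taken directly from the figures quoted in the article.

```python
# Capacity arithmetic for the scalable-unit deployment described above.
# Figures from the article: 4 servers per scalable unit, 8 NVIDIA H200 NVL
# GPUs per server, up to 32 servers in a single deployment.

SERVERS_PER_UNIT = 4
GPUS_PER_SERVER = 8
MAX_SERVERS = 32


def deployment_gpus(servers: int) -> int:
    """Return the total GPU count for a given number of MGX servers."""
    if not 0 < servers <= MAX_SERVERS:
        raise ValueError(f"deployment supports 1..{MAX_SERVERS} servers")
    return servers * GPUS_PER_SERVER


# One scalable unit: 4 servers x 8 GPUs
print(deployment_gpus(SERVERS_PER_UNIT))  # 32
# Full build-out: 32 servers x 8 GPUs
print(deployment_gpus(MAX_SERVERS))       # 256
```

This matches the article's stated maximum of 256 H200 NVL GPUs per deployment, reached at eight scalable units of four servers each.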
The MSI 4U AI server provides configuration options using either Intel or AMD CPUs, aimed at large-scale AI projects such as deep learning training and model fine-tuning. The CG480-S5063 platform features dual Intel Xeon 6 processors and eight full-height, full-length dual-width GPU slots that support NVIDIA H200 NVL and NVIDIA RTX PRO 6000 Blackwell Server Edition, with power capacities up to 600W. It offers 32 DDR5 DIMM slots and twenty PCIe 5.0 E1.S NVMe bays for high memory bandwidth and rapid data access, with its modular design supporting both storage needs and scalability.

Another server, the CG290-S3063, is a 2U AI platform also constructed on NVIDIA MGX architecture. It includes a single-socket Intel Xeon 6 processor, 16 DDR5 DIMM slots, and four GPU slots with up to 600W capacity. The CG290-S3063 incorporates PCIe 5.0 expansion, four rear 2.5-inch NVMe bays, and two M.2 NVMe slots to provide support for various AI tasks, from smaller-scale inference to extensive AI training workloads.

MSI's server platforms have been designed for deployment within enterprise-grade AI environments, offering support for the NVIDIA Enterprise AI Factory validated design. This structure provides enterprises with guidance in developing, deploying, and managing AI, including agentic AI and physical AI, as well as high-performance computing tasks on the NVIDIA Blackwell platform using their own infrastructure. The validated design combines accelerated computing, networking, storage, and software components for faster deployment and risk mitigation in AI factory roll-outs.

MSI is also presenting the AI Station CT60-S8060, a workstation built on the NVIDIA DGX Station reference, with components designed to enable data centre-grade AI performance from a desktop environment. This includes the NVIDIA GB300 Grace Blackwell Ultra Desktop Superchip and up to 784GB of coherent memory, intended to boost large-scale training and inference.
The solution is targeted at teams requiring a high-performance desktop AI development environment and integrates the NVIDIA AI Enterprise software stack for system capability management.
