
Latest news with #UltraAcceleratorLink

Panmnesia Introduces Today's and Tomorrow's AI Infrastructure, Including a Supercluster Architecture That Integrates NVLink, UALink, and HBM via CXL

Business Wire

18-07-2025

  • Business
  • Business Wire


DAEJEON, South Korea--(BUSINESS WIRE)--Panmnesia has released a technical report titled 'Compute Can't Handle the Truth: Why Communication Tax Prioritizes Memory and Interconnects in Modern AI Infrastructure.' In the report, Panmnesia outlines trends in modern AI models, the limitations of current AI infrastructure in handling them, and how emerging memory and interconnect technologies—including Compute Express Link (CXL), NVLink, Ultra Accelerator Link (UALink), and High Bandwidth Memory (HBM)—can be leveraged to improve AI infrastructure. Panmnesia aims to address these challenges by building flexible, scalable, and communication-efficient architectures from diverse interconnect technologies, rather than relying on fixed GPU-based configurations.

Panmnesia's CEO, Dr. Myoungsoo Jung, explained, 'This technical report was written to more clearly and accessibly share the ideas on AI infrastructure that we presented during a keynote last August. We aimed to explain AI and large language models (LLMs) in a way that even readers without deep technical backgrounds could understand. We also explored how AI infrastructure may evolve in the future, considering the unique characteristics of AI services.' He added, 'We hope this report proves helpful to those interested in the field.'

Overview of the Technical Report

Panmnesia's technical report is divided into three main parts:

  • Trends in AI and Modern Data Center Architectures for AI Workloads
  • CXL Composable Architectures: Improving Data Center Architecture Using CXL and Acceleration Case Studies
  • Beyond CXL: Optimizing AI Resource Connectivity in Data Center via Hybrid Link Architectures (CXL-over-XLink Supercluster)

1. Trends in AI and Modern Data Center Architectures for AI Workloads [1]

AI applications based on sequence models—such as chatbots, image generation, and video processing—are now widely integrated into everyday life. The report begins with an overview of sequence models, their underlying mechanisms, and the evolution from recurrent neural networks (RNNs) to large language models (LLMs). It then explains how current AI infrastructures handle these models and discusses their limitations. In particular, Panmnesia identifies two major challenges in modern AI infrastructures: (1) communication overhead during synchronization and (2) low resource utilization resulting from rigid, GPU-centric architectures.

2. CXL Composable Architectures: Improving Data Center Architecture Using CXL and Acceleration Case Studies [2]

To address these challenges, Panmnesia proposes a solution built on CXL, an emerging interconnect technology. The report explains CXL's core concepts and features, emphasizing how it can minimize unnecessary communication through automatic cache coherence management and enable flexible resource expansion—together addressing the key shortcomings of conventional AI infrastructure. Panmnesia also introduces its CXL 3.0-compliant real-system prototype, developed using its core technologies, including CXL IPs and CXL Switches. The report then shows how this prototype has been applied to accelerate real-world AI applications—such as retrieval-augmented generation (RAG) and deep learning recommendation models (DLRM)—demonstrating the practicality and effectiveness of CXL-based infrastructure.

3. Beyond CXL: Optimizing AI Resource Connectivity in Data Center via Hybrid Link Architectures (CXL-over-XLink Supercluster) [3]

The report is not limited to CXL alone. Panmnesia goes further, proposing methods to build more advanced AI infrastructure by integrating diverse interconnect technologies alongside CXL. At the core of this approach is the CXL-over-XLink supercluster architecture, which uses CXL to enhance scalability, compatibility, and communication efficiency across clusters connected via accelerator-centric interconnects—collectively referred to as XLink—including UALink, NVLink, and NVLink Fusion. The report explains how integrating these interconnect technologies yields an architecture that combines the advantages of each, and concludes with a discussion of the practical application of emerging technologies such as HBM and silicon photonics.

Conclusion

With the release of this technical report, Panmnesia reinforces its leadership in next-generation interconnect technologies such as CXL and UALink. In parallel, the company continues to participate actively in consortia related to AI infrastructure, including the CXL Consortium, UALink Consortium, PCI-SIG, and the Open Compute Project. Recently, Panmnesia also unveiled its 'link solution' product lineup, designed to realize its vision for next-generation AI infrastructure and further strengthen its brand identity. Dr. Myoungsoo Jung, CEO of Panmnesia, stated, 'We will continue to lead efforts to build better AI infrastructure by developing diverse link solutions and sharing our insights openly.'

The full technical report on AI infrastructure is available on Panmnesia's website.

[1] This corresponds to Sections 2 and 3 of the technical report.
[2] This corresponds to Sections 4 and 5 of the technical report.
[3] This corresponds to Section 6 of the technical report.
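The "communication tax" the report emphasizes can be made concrete with a back-of-the-envelope model. The sketch below uses the standard per-GPU traffic formula for a ring all-reduce (a general result, not a figure from Panmnesia's report); the model size and link bandwidth are illustrative assumptions only.

```python
def ring_allreduce_bytes_per_gpu(grad_bytes: float, n_gpus: int) -> float:
    """Bytes each GPU sends for one ring all-reduce of its gradients.

    Standard result: the reduce-scatter and all-gather phases together
    move 2 * (n - 1) / n times the gradient buffer per GPU.
    """
    return 2 * (n_gpus - 1) / n_gpus * grad_bytes


# Illustrative assumptions: a 70B-parameter model in fp16 (~140 GB of
# gradients) synchronized over a 900 GB/s accelerator-class link.
GRAD_BYTES = 70e9 * 2
LINK_BYTES_PER_SEC = 900e9

for n in (8, 64, 512):
    sent = ring_allreduce_bytes_per_gpu(GRAD_BYTES, n)
    print(f"{n:4d} GPUs: {sent / 1e9:6.1f} GB sent per GPU, "
          f"best case ~{sent / LINK_BYTES_PER_SEC:.2f} s per sync")
```

Note that per-GPU traffic approaches twice the gradient size no matter how many accelerators participate, so synchronization time does not shrink as clusters grow; this is why faster, coherence-aware interconnects, rather than more raw compute, are the lever the report argues for.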


Marvell Unveils UALink to Boost AI Accelerator Connectivity for 5G Growth

Yahoo

24-06-2025

  • Business
  • Yahoo


Marvell Technology, Inc. (NASDAQ:MRVL) is on our list of the 10 best 5G stocks to invest in according to analysts. On June 11, Marvell unveiled its custom Ultra Accelerator Link (UALink) scale-up solution, intended to improve the interconnectivity of AI accelerators and switches. The company claims that, in a single deployment, the custom UALink solution enables scale-up interconnects for hundreds or thousands of AI accelerators.

The announcement comes as scaling AI infrastructure while preserving performance becomes a growing challenge for hyperscalers. Marvell says its custom UALink product addresses these issues with an open-standards-based approach that permits direct, low-latency communication between accelerators. Forrest Norrod, an AMD executive, endorsed the development, stating: 'We are excited to see UALink custom solutions from Marvell, which are essential to the future of AI.'

Marvell has made a name for itself as a leader in the creation of data processing units (DPUs). In 2021, the company made major acquisitions, including Inphi and Innovium, to increase its global footprint in important industries including 5G, cloud computing, and other business solutions. While we acknowledge the potential of MRVL as an investment, we believe certain AI stocks offer greater upside potential and carry less downside risk. If you're looking for an extremely undervalued AI stock that also stands to benefit significantly from Trump-era tariffs and the onshoring trend, see our free report on the best short-term AI stock.

Disclosure: None.

Marvell's AI Bet: Will NVLink and UALink Drive Custom Chip Wins?

Yahoo

20-06-2025

  • Business
  • Yahoo


Marvell Technology MRVL is enhancing its role in artificial intelligence (AI) infrastructure by expanding its custom chip capabilities. Marvell continues to integrate its custom compute platform with new components that improve performance, scalability, and integration across large-scale deployments. In the first quarter of fiscal 2026, Marvell reported record Data Center revenues of $1.44 billion, up 76% year over year, with growth driven by the rapid scaling of custom AI silicon. To support continued momentum, Marvell recently announced multiple strategic additions to its custom silicon portfolio.

In May 2025, Marvell partnered with NVIDIA to offer NVIDIA's NVLink Fusion technology to customers deploying Marvell's custom cloud platform silicon. This enables custom XPUs to connect with NVIDIA's rack-scale hardware architecture. Marvell noted that its custom silicon with NVIDIA NVLink Fusion offers customers greater flexibility and options in developing next-generation AI infrastructure. The announcement reflects that MRVL's custom chips are gaining credibility and traction, even among companies like NVIDIA.

The same month, Marvell introduced a new multi-die packaging solution built on its proprietary interposer technology. The solution is already in production for a customer-specific XPU program. The platform enables more efficient die-to-die interconnect, lowers power consumption, enhances yield, and lowers product cost.

This month, Marvell introduced a third addition to its custom platform: the Ultra Accelerator Link (UALink) scale-up solution. The solution delivers an open-standards-based scale-up interconnect platform with high compute utilization and low latency. UALink is paired with Marvell's custom silicon capabilities, allowing compute vendors to build solutions, including custom accelerators with UALink controllers and custom switches, that deliver optimal performance at rack scale. Together, these additions support Marvell's push to enable full rack-level custom infrastructure.
Moreover, with new components entering production, Marvell is positioned to play a crucial role in powering the next generation of large-scale AI systems.

Advanced Micro Devices AMD is advancing its rack-level AI solutions through its acquisition of ZT Systems. The acquisition enables AMD to reduce deployment time for hyperscalers by combining AMD's CPUs, GPUs, and networking components, and to accelerate time to market for its OEM and ODM partners.

Broadcom AVGO is aggressively scaling its AI networking portfolio. In the second quarter of fiscal 2025, AVGO's AI networking revenues jumped 170% year over year and now comprise 40% of its total AI semiconductor revenues. Broadcom also introduced the Tomahawk 6 switch, with 102.4 terabits per second of switch capacity, designed to let AI clusters of over 100,000 accelerators be deployed in two tiers. This enables better performance in training next-generation frontier models through lower latency, higher bandwidth, and lower power.

Shares of Marvell Technology have plunged 31.9% year to date, against the Electronics - Semiconductors industry's growth of 6.4%. From a valuation standpoint, Marvell trades at a forward price-to-sales ratio of 7.36X, below the industry average of 8.15X. The Zacks Consensus Estimate for MRVL's fiscal 2026 and fiscal 2027 earnings implies year-over-year growth of 77.71% and 27.73%, respectively; the estimates for fiscal 2026 and fiscal 2027 have been revised upward in the past 30 days and seven days, respectively. MRVL currently carries a Zacks Rank #3 (Hold).
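For readers unfamiliar with the valuation metric above, forward price-to-sales is simply market capitalization divided by expected next-twelve-month revenue. A minimal sketch, using hypothetical figures chosen only to reproduce a 7.36X ratio (these are not Marvell's actual market cap or revenue forecast):

```python
def forward_price_to_sales(market_cap: float, forward_revenue: float) -> float:
    """Forward P/S: what the market pays today per dollar of expected
    next-twelve-month sales."""
    return market_cap / forward_revenue


# Hypothetical: a $58.88B market cap against $8B of forecast revenue.
ratio = forward_price_to_sales(58.88e9, 8e9)
print(f"forward P/S = {ratio:.2f}X")
```

A lower ratio than the industry average, as cited above for MRVL, means investors are paying less per dollar of expected sales than they are for the average peer.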
This article originally published on Zacks Investment Research.
