Latest news with #CXL


Business Wire
18-07-2025
- Business
- Business Wire
Panmnesia Introduces Today's and Tomorrow's AI Infrastructure, Including a Supercluster Architecture That Integrates NVLink, UALink, and HBM via CXL
DAEJEON, South Korea--(BUSINESS WIRE)--Panmnesia has released a technical report titled "Compute Can't Handle the Truth: Why Communication Tax Prioritizes Memory and Interconnects in Modern AI Infrastructure." In the report, Panmnesia outlines the trends in modern AI models, the limitations of current AI infrastructure in handling them, and how emerging memory and interconnect technologies—including Compute Express Link (CXL), NVLink, Ultra Accelerator Link (UALink), and High Bandwidth Memory (HBM)—can be leveraged to improve AI infrastructure. Panmnesia aims to address the current challenges in AI infrastructure by building flexible, scalable, and communication-efficient architectures from diverse interconnect technologies instead of fixed GPU-based configurations.

Panmnesia's CEO, Dr. Myoungsoo Jung, explained, "This technical report was written to more clearly and accessibly share the ideas on AI infrastructure that we presented during a keynote last August. We aimed to explain AI and large language models (LLMs) in a way that even readers without deep technical backgrounds could understand. We also explored how AI infrastructure may evolve in the future, considering the unique characteristics of AI services." He added, "We hope this report proves helpful to those interested in the field."

Overview of the Technical Report

Panmnesia's technical report is divided into three main parts:

- Trends in AI and Modern Data Center Architectures for AI Workloads
- CXL Composable Architectures: Improving Data Center Architecture Using CXL and Acceleration Case Studies
- Beyond CXL: Optimizing AI Resource Connectivity in Data Center via Hybrid Link Architectures (CXL-over-XLink Supercluster)

1. Trends in AI and Modern Data Center Architectures for AI Workloads [1]

AI applications based on sequence models—such as chatbots, image generation, and video processing—are now widely integrated into everyday life. The report begins with an overview of sequence models, their underlying mechanisms, and the evolution from recurrent neural networks (RNNs) to large language models (LLMs). It then explains how current AI infrastructures handle these models and discusses their limitations. In particular, Panmnesia identifies two major challenges in modern AI infrastructures: (1) communication overhead during synchronization and (2) low resource utilization resulting from rigid, GPU-centric architectures.

2. CXL Composable Architectures: Improving Data Center Architecture Using CXL and Acceleration Case Studies [2]

To address these challenges, Panmnesia proposes a solution built on CXL, an emerging interconnect technology. The report explains CXL's core concepts and features, emphasizing how it can minimize unnecessary communication through automatic cache coherence management and enable flexible resource expansion—ultimately addressing key challenges of conventional AI infrastructure. Panmnesia also introduces its CXL 3.0-compliant real-system prototype, developed using its core technologies, including CXL IPs and CXL switches. The report then shows how this prototype has been applied to accelerate real-world AI applications—such as retrieval-augmented generation (RAG) and deep learning recommendation models (DLRM)—demonstrating the practicality and effectiveness of CXL-based infrastructure.

3. Beyond CXL: Optimizing AI Resource Connectivity in Data Center via Hybrid Link Architectures (CXL-over-XLink Supercluster) [3]

The report is not limited to CXL alone. Panmnesia goes further, proposing methods to build more advanced AI infrastructure by integrating diverse interconnect technologies alongside CXL. At the core of this approach is the CXL-over-XLink supercluster architecture, which uses CXL to enhance scalability, compatibility, and communication efficiency across clusters connected via accelerator-centric interconnects—collectively referred to as XLink—including UALink, NVLink, and NVLink Fusion. The report explains how integrating these interconnect technologies enables an architecture that combines the advantages of each, and concludes with a discussion of the practical application of emerging technologies such as HBM and silicon photonics.

Conclusion

With the release of this technical report, Panmnesia reinforces its leadership in next-generation interconnect technologies such as CXL and UALink. In parallel, the company continues to participate actively in various consortia related to AI infrastructure, including the CXL Consortium, UALink Consortium, PCI-SIG, and the Open Compute Project. Recently, Panmnesia also unveiled its "link solution" product lineup, designed to realize its vision for next-generation AI infrastructure and further strengthen its brand identity. Dr. Jung stated, "We will continue to lead efforts to build better AI infrastructure by developing diverse link solutions and sharing our insights openly." The full technical report on AI infrastructure is available on Panmnesia's website.

[1] This corresponds to Sections 2 and 3 of the technical report.
[2] This corresponds to Sections 4 and 5 of the technical report.
[3] This corresponds to Section 6 of the technical report.
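As an illustrative aside, not drawn from the report itself: the synchronization "communication tax" the report identifies can be sketched with the standard cost model for ring all-reduce, the collective most commonly used to synchronize gradients across GPUs. The 7B-parameter workload below is a hypothetical example chosen only for scale.

```python
# Standard ring all-reduce cost model (illustrative sketch, not from the
# report): each of N devices transfers 2 * (N - 1) / N times the gradient
# size per synchronization step, so per-GPU traffic approaches 2x the
# gradient size as the cluster grows.

def ring_allreduce_bytes_per_gpu(num_gpus: int, gradient_bytes: float) -> float:
    """Bytes each GPU sends (and receives) in one ring all-reduce."""
    return 2 * (num_gpus - 1) / num_gpus * gradient_bytes

# Hypothetical workload: a 7B-parameter model with FP16 gradients (~14 GB).
GRADIENT_BYTES = 14e9

for n in (8, 64, 512):
    moved_gb = ring_allreduce_bytes_per_gpu(n, GRADIENT_BYTES) / 1e9
    print(f"{n:4d} GPUs: {moved_gb:.2f} GB per GPU per synchronization step")
```

The per-GPU volume is nearly constant with cluster size, but every training step pays it across the interconnect, which is why the report argues that memory and interconnects, not compute, bound modern AI infrastructure.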


Yahoo
27-06-2025
- Business
- Yahoo
Astera Labs or Broadcom: Which Stock Leads in AI Infrastructure Now?
In the high-speed connectivity semiconductor market, Astera Labs ALAB and Broadcom AVGO are two names rapidly gaining prominence, particularly in PCIe and CXL retimers that power AI and cloud infrastructure. As AI workloads continue to increase, both companies are well-positioned to benefit. So, which connectivity chip stock is the better buy right now? Let's find out.

Why Astera Labs is a Compelling Buy Right Now

First-to-Market Advantage in PCIe 6 and Strategic Product Ramp in 2025: Astera Labs has established a first-mover edge in PCIe 6.0 connectivity, with Aries 6 retimers and smart gearboxes already shipping and Scorpio switches likely in volume production as projected earlier. Leo CXL controllers are also in pre-production sampling with hyperscalers. These products are qualified for next-generation GPU-based AI racks, and management expects volume ramps in the second half of 2025, signaling a key inflection point in silicon content per rack.

Rack-Scale Strategy and UALink Create Long-Term Growth Tailwinds: Astera Labs is expanding its TAM through a rack-scale connectivity strategy that combines high-value hardware with differentiated software. Its Scorpio X- and P-series switches and COSMOS suite enable deeper system integration, telemetry, and fleet-level optimization, driving higher ASPs and software-based lock-in. UALink 1.0, an open, high-speed interconnect, marks a structural shift. Astera plans to sample UALink-compatible products in 2026, targeting a multibillion-dollar opportunity by 2029 that aligns with its portfolio and leadership in AI infrastructure.

Strong Operating Leverage Despite R&D Investment Surge: With Astera aggressively ramping up investment, first-quarter adjusted R&D spending rose 14.2% year over year. The company delivered an adjusted operating margin of 33.7%, demonstrating tight cost control and clear operating leverage on higher product volumes. Furthermore, gross margin was 74.9%, reflecting stable ASPs and a favorable mix, even amid pre-production shipments. With $925 million in cash and positive operating cash flow, Astera is also capitalized to pursue sustained innovation without near-term dilution risk.

Broadcom's Key Progress in the Semiconductor Space

AI Networking Momentum With Ethernet-Based Portfolio: AI networking revenues grew more than 170% year over year and now account for 40% of Broadcom's AI-related revenues. The company's Ethernet-first approach, leveraging Tomahawk switches, Jericho routers, and NICs, has made it the preferred vendor for scale-out and scale-up AI clusters among hyperscalers. The newly launched Tomahawk 6, boasting 102.4 Tbps of switching capacity, allows hyperscalers to flatten AI networks into just two tiers, reducing latency, power, and cost.

XPU Custom Silicon Gaining Critical Mass: Broadcom is scaling up its custom AI accelerators (XPUs) for at least three major hyperscaler customers, each expected to deploy around 1 million clusters by 2027. With rising demand for inference, some deployments may begin as early as late 2026. This growing XPU business strengthens customer relationships and adds a high-margin revenue stream.

High-Margin Software Growth and Strong Capital Returns: Broadcom's infrastructure software revenues grew 25% year over year in the second quarter of fiscal 2025, fueled by strong adoption of its VMware Cloud Foundation (VCF) platform post-VMware acquisition, now used by 87% of its top 10,000 customers. This high-margin, recurring software business complements its chip segment and supports private cloud and AI workloads. Broadcom also delivered $10 billion in EBITDA (a 67% margin), $6.4 billion in free cash flow, and returned $7 billion to shareholders, highlighting its capital-efficient model and strong value creation.

ALAB, AVGO Outperform Sector and Benchmark in Three Months

Over the past three months, Astera Labs and Broadcom shares have rallied 61.2% and 60.1%, respectively, significantly outpacing the broader sector's 18.6% gain and the S&P 500's 9.5% rise.

Average Target Prices for ALAB and AVGO Suggest Upside

Based on short-term price targets offered by 14 analysts, ALAB's average price target represents an increase of 10.4% from the last closing price of $89.63. Based on short-term price targets offered by 30 analysts, AVGO's average price target represents an improvement of 10.8% from the last closing price of $264.65.

Buy ALAB Now

While both Astera Labs and Broadcom are well-positioned to ride the AI infrastructure wave, ALAB presents a more focused trajectory. With a first-mover lead in PCIe 6.0 and CXL connectivity, likely volume ramp-ups in the second half of 2025, and early bets on UALink, Astera is set for an inflection in revenues and margin expansion. In contrast, Broadcom's non-AI chip segments remain a headwind, with its fiscal second-quarter non-AI semiconductor revenues down 5% year over year and expected to stay flat. Given ALAB's Zacks Rank #2 (Buy) versus AVGO's Zacks Rank #3 (Hold), Astera Labs currently stands out as the relatively stronger pick for investors seeking targeted exposure to next-gen AI connectivity.

This article was originally published on Zacks Investment Research.
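For readers who want the implied dollar figures: the closing prices and upside percentages quoted above pin down the average analyst targets arithmetically. A back-of-envelope sketch (the derived targets are computed here, not quoted from the article):

```python
# Derive the implied average analyst targets from the last closing prices
# and upside percentages quoted in the article.

def implied_target(last_close: float, upside_pct: float) -> float:
    """Price implied by a percentage upside from the last close."""
    return last_close * (1 + upside_pct / 100)

alab_target = implied_target(89.63, 10.4)    # ALAB: 14-analyst average
avgo_target = implied_target(264.65, 10.8)   # AVGO: 30-analyst average

print(f"ALAB implied average target: ${alab_target:.2f}")  # ~ $98.95
print(f"AVGO implied average target: ${avgo_target:.2f}")  # ~ $293.23
```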


Yahoo
24-06-2025
- Business
- Yahoo
Primemas Announces Customer Samples Milestone of World's First CXL 3.0 SoC
Working with Micron and their CXL AVL program to accelerate commercialization of next-generation memory solutions for data centers and AI infrastructure

SANTA CLARA, Calif., and SEOUL, South Korea, June 24, 2025--(BUSINESS WIRE)--Primemas Inc., a fabless semiconductor company specializing in chiplet-based SoC solutions through its Hublet® architecture, today announced the availability of customer samples of the world's first Compute Express Link (CXL) 3.0 memory controller. Primemas has been delivering engineering samples and development boards to select strategic customers and partners, who have played a key role in validating the performance and capabilities of Hublet® compared to alternative CXL controllers. Building on this successful early engagement, Primemas is now pleased to announce that Hublet® product samples are ready for shipment to memory vendors, customers, and ecosystem partners.

While conventional CXL memory expansion controllers are limited by fixed form factors and capped DRAM capacities, Primemas leverages cutting-edge chiplet technology to deliver unmatched scalability and modularity. At the core of this innovation is the Hublet®—a versatile building block that enables a wide variety of configurations. Primemas customers are finding innovative ways to leverage this modularity:

- A 1x1 single Hublet® delivers compact E3.S products supporting up to 512GB of DRAM;
- A 2x2 Hublet® configuration can support PCIe add-in-card or CEM products with up to 2TB of DRAM; and
- For hyperscale environments, a 4x4 Hublet® configuration powers a 1U rack memory appliance capable of an impressive 8TB of DRAM.

"We are very encouraged by the excellent feedback from our initial partners, who leveraged Hublet® to address the challenges posed by rapidly growing workloads," said Jay Kim, EVP and Head of Business Development at Primemas. "We're excited to take the next major step toward commercialization through our collaboration with Micron and their CXL AVL program."
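The capacity figures in the configurations above follow a simple pattern: an m x n Hublet® grid scales DRAM capacity by m * n relative to a single Hublet®, with 512GB per Hublet® inferred from the 1x1 figure. A quick sanity check of the article's numbers:

```python
# Capacity scaling across Hublet(R) grid configurations: an m x n grid
# provides m * n times the single-Hublet capacity (512 GB, per the 1x1
# E3.S figure quoted above).

GB_PER_HUBLET = 512

def hublet_capacity_gb(rows: int, cols: int) -> int:
    """Maximum DRAM capacity (GB) of an m x n Hublet grid."""
    return rows * cols * GB_PER_HUBLET

for rows, cols, form_factor in [(1, 1, "E3.S"),
                                (2, 2, "PCIe add-in card / CEM"),
                                (4, 4, "1U rack memory appliance")]:
    gb = hublet_capacity_gb(rows, cols)
    print(f"{rows}x{cols} ({form_factor}): {gb} GB = {gb / 1024:g} TB")
```

The 2x2 and 4x4 outputs (2048 GB and 8192 GB) match the 2TB and 8TB figures quoted in the announcement.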
The CXL ASIC Validation Lab (AVL) program was established by Micron to help bring next-generation CXL controllers to market and achieve maximum reliability and compatibility with its advanced DRAM modules. There are numerous challenges to delivering stable, reliable memory read and write operations while optimizing performance and power efficiency in CXL controllers. Through this joint effort, the two companies aim to deliver the world's first CXL 3.0 controller with high quality and reliability, along with the latest high-capacity 128GB RDIMM modules.

"With the rapid adoption of AI, and the corresponding increase in memory-intensive workloads, CXL-based solutions are driving innovations to transform traditional compute platforms," said Luis Ancajas, Director of CXL Business Development at Micron. "As an industry leader in data center memory solutions, we are excited to collaborate with innovators like Primemas to validate and accelerate next-generation solutions like the Hublet® SoC through our AVL program and help bring these transformative solutions to market to unlock new levels of performance, scalability and efficiency for the data center."

This joint effort demonstrates the shared commitment of Primemas and Micron to innovation and quality in the semiconductor industry and further strengthens Primemas' position as a leader in scalable, high-performance chiplet-based SoC solutions for CXL, AI, and data analytics applications.

About Primemas

Primemas is a fabless semiconductor company delivering pre-built SoC hub chiplets (Hublet®) to streamline development and manufacturing—reducing the cost and time associated with custom design and production. The Hublet® platform provides scalable I/O, control, and compute functionality, supporting markets such as CXL, AI, and data analytics. Primemas is headquartered in Santa Clara, California, with an R&D center in Seoul, South Korea.