ALAB Stock Shines as AI Infrastructure Ties With NVIDIA Deepen
Astera Labs ALAB is gaining market share as a critical enabler of next-generation AI and cloud infrastructure, driven by its expanding portfolio of high-performance connectivity solutions. In the first quarter of 2025, revenues surged 144% year over year on strong demand across its Aries, Taurus, Leo and Scorpio product lines.
Historically known for its Aries PCIe retimers supporting NVIDIA AI servers, Astera Labs has rapidly expanded its portfolio into full-rack solutions, including Scorpio Fabric Switches, Aries 6 Retimers and Smart Gearboxes, Taurus Ethernet modules and Leo CXL controllers, designed to address both scale-up (intra-server, accelerator cluster) and scale-out (inter-server, data center-wide) connectivity challenges. The company's COSMOS software suite complements this hardware, enabling advanced diagnostics, fleet observability and performance optimization.
Additionally, the company has expanded its strategic collaboration with NVIDIA NVDA to support the NVLink Fusion ecosystem for Blackwell-based MGX platforms and has stepped up as a promoter member and board participant in the UALink Consortium, helping to advance open, high-performance interconnect standards for accelerator-rich AI clusters.
AMD and QCOM — Two Other Prominent Players Working in This Niche
Advanced Micro Devices AMD: The company's expanding portfolio of EPYC CPUs and Instinct GPUs is aligned with the PCIe 6.0 standard, which is becoming foundational in next-generation AI infrastructure. As data center and AI rack architectures migrate toward PCIe 6.0 to meet rising bandwidth and latency demands, AMD's adoption of this standard is indirectly fueling demand for high-speed interconnect solutions like those offered by Astera Labs.
Qualcomm QCOM: Qualcomm is making a bold move into the AI data center connectivity space with its recent $2.4 billion acquisition of Alphawave, a company specializing in high-speed connectivity IP. This strategic acquisition signals Qualcomm's intention to become a more prominent player in AI infrastructure, a domain where Astera Labs is already deeply entrenched. Additionally, Qualcomm is now partnering with NVIDIA in the NVLink Fusion initiative, placing it in direct competition with companies like Astera Labs that are focused on enabling high-bandwidth, low-latency interconnects for AI racks.
ALAB's Price Performance and Valuation
Astera Labs has rallied 32.9% in the past three months, compared with the industry's 15.8% growth and the sector's 12.3% rise. The S&P 500 index, meanwhile, has improved 7.9% over the same period.
Share Price Comparison: ALAB
Image Source: Zacks Investment Research
Astera Labs is presently trading at a forward 12-month price-to-sales ratio of 19.02X, below its one-year median of 19.95X. However, the stock remains overvalued compared with the industry.
Image Source: Zacks Investment Research
ALAB currently carries a Zacks Rank #2 (Buy).
You can see the complete list of today's Zacks #1 Rank (Strong Buy) stocks here.
Want the latest recommendations from Zacks Investment Research? Today, you can download 7 Best Stocks for the Next 30 Days. Click to get this free report
QUALCOMM Incorporated (QCOM) : Free Stock Analysis Report
Advanced Micro Devices, Inc. (AMD) : Free Stock Analysis Report
NVIDIA Corporation (NVDA) : Free Stock Analysis Report
Astera Labs, Inc. (ALAB) : Free Stock Analysis Report
This article originally published on Zacks Investment Research (zacks.com).