
Latest news with #GiladShainer

Lumentum Expands U.S. Manufacturing for AI-Driven Co-Packaged Optics

Business Wire

07-08-2025



SAN JOSE, Calif.--(BUSINESS WIRE)--Lumentum Holdings Inc. ("Lumentum"), a global leader in optical and photonic technology, today announced a significant milestone in its commitment to innovation and U.S.-based manufacturing. The company will fund a major capacity expansion of its U.S.-based semiconductor facility. This initiative is expected to generate additional high-skilled engineering and manufacturing jobs, further strengthening the U.S. position in the global AI supply chain.

Lumentum is a primary industry supplier of ultra-high-power (UHP) lasers, essential components in Co-Packaged Optics (CPO) platforms. The UHP laser, an ultra-reliable indium phosphide product, is designed and manufactured at Lumentum's Rose Orchard Way semiconductor facility in San Jose, California. Backed by decades of experience in high-power telecom lasers, the UHP laser supports low-power, highly resilient optical networking systems that are foundational to modern AI data centers.

'This investment is a testament to our leadership in laser and photonic technologies,' said Michael Hurlston, president and CEO of Lumentum. 'Our commitment to expanding domestic manufacturing not only supports a robust AI infrastructure supply chain but also reinforces America's role in global technology leadership.'

Lumentum has long been at the forefront of photonic innovation, advancing telecommunications, data centers, and industrial applications. With a foundation rooted in laser and optical technologies, the company enables the world's most advanced systems, meeting the evolving demands of artificial intelligence, machine learning, and real-time data processing in an era of rapid technological change.

Lumentum is working with NVIDIA on advanced networking technologies for AI infrastructure. 'With AI transforming every industry, the demand for high-performance, energy-efficient optical interconnects is growing rapidly,' said Gilad Shainer, senior vice president, Networking, NVIDIA. 'NVIDIA is working closely with industry innovators like Lumentum to deliver improved power efficiency and network resiliency for the AI factories of the future.'

About Lumentum

Lumentum (NASDAQ: LITE) is a market-leading designer and manufacturer of innovative optical and photonic products enabling optical networking and laser applications worldwide. Lumentum optical components and subsystems are part of virtually every type of telecom, enterprise, and data center network. Lumentum lasers enable advanced manufacturing techniques and diverse applications, including next-generation 3D sensing capabilities. Lumentum is headquartered in San Jose, California, with R&D, manufacturing, and sales offices worldwide. For more information, visit and follow Lumentum on Bluesky, Facebook, Instagram, LinkedIn, X, and YouTube.

Category: Financial

Nvidia's 'most underappreciated' business is taking off like a 'rocket ship'

Yahoo

06-08-2025



When Nvidia (NVDA) reports its second quarter earnings on Aug. 27, investors will focus squarely on the company's data center results. After all, that's where the chip giant realizes revenue on the sale of its high-powered AI processors. But the Data Center segment includes more than just chip sales. It also accounts for some of Nvidia's most important, though often overlooked, offerings: its networking technologies. Composed of its NVLink, InfiniBand, and Ethernet solutions, Nvidia's networking products are what allow its chips to communicate with each other, let servers talk to each other inside massive data centers, and ultimately ensure end users can connect to it all to run AI applications.

'The most important part in building a supercomputer is the infrastructure. The most important part is how you connect those computing engines together to form that larger unit of computing,' explained Gilad Shainer, senior vice president of networking at Nvidia.

That also translates into some big sales. Nvidia's networking sales accounted for $12.9 billion of its $115.1 billion in data center revenue in its prior fiscal year. That might not seem impressive when you consider that chip sales brought in $102.1 billion, but it eclipses the $11.3 billion that Nvidia's second-largest segment, Gaming, took in for the year. In Q1, networking made up $4.9 billion of Nvidia's $39.1 billion in data center revenue. And it'll continue to grow as customers continue to build out their AI capacity, whether that's at research universities or massive data centers.

'It is the most underappreciated part of Nvidia's business, by orders of magnitude,' Deepwater Asset Management managing partner Gene Munster told Yahoo Finance. 'Basically, networking doesn't get the attention because it's 11% of revenue. But it's growing like a rocket ship.'
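The segment figures above can be sanity-checked with quick arithmetic. A minimal sketch, using only the dollar figures quoted in the article (USD billions; the article does not specify which denominator Munster's "11% of revenue" refers to):

```python
# Full-fiscal-year figures as quoted in the article (USD billions)
networking = 12.9         # networking sales within the Data Center segment
data_center_total = 115.1 # total Data Center revenue
chip_sales = 102.1        # chip sales within the Data Center segment
gaming = 11.3             # Gaming, Nvidia's second-largest segment

# Networking as a fraction of Data Center revenue
share = networking / data_center_total
print(f"Networking share of data center revenue: {share:.1%}")

# Networking alone out-earns the entire Gaming segment
print(f"Networking exceeds Gaming: {networking > gaming}")
```

The same check on the Q1 figures ($4.9 billion of $39.1 billion) gives a similar share, consistent with networking holding its proportion as data center revenue grows.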
Connecting thousands of chips

When it comes to the AI explosion, Nvidia senior vice president of networking Kevin Deierling says the company has to work across three different types of networks. The first is its NVLink technology, which connects GPUs to each other within a server or multiple servers inside of a tall, cabinet-like server rack, allowing them to communicate and boost overall performance. Then there's InfiniBand, which connects multiple server nodes across data centers to form what is essentially a massive AI computer. Then there's the front-end network for storage and system management, which uses Ethernet connectivity.

'Those three networks are all required to build a giant AI-scale, or even a moderately sized enterprise-scale, AI computer,' Deierling explained.

The purpose of all of these various connections isn't just to help chips and servers communicate, though. They're also meant to allow them to do so as fast as possible. If you're trying to run a series of servers as a single unit of computing, they need to talk to each other in the blink of an eye. A lack of data going to GPUs slows the entire operation, delaying other processes and impacting the overall efficiency of an entire data center.

'[Nvidia is a] very different business without networking,' Munster explained. 'The output that the people who are buying all the Nvidia chips [are] desiring wouldn't happen if it wasn't for their networking.'

And as companies continue to develop larger AI models and autonomous and semi-autonomous agentic AI capabilities that can perform tasks for users, making sure those GPUs work in lockstep with each other becomes increasingly important. That's especially true as inferencing (running AI models) requires more powerful data center systems.

Inferencing powers up

The AI industry is in the midst of a broad reordering around the idea of inferencing.
At the onset of the AI explosion, the thinking was that training AI models would require hugely powerful AI computers and that actually running them would be somewhat less power-intensive. That led to some trepidation on Wall Street earlier this year, when DeepSeek claimed that it trained its AI models on Nvidia chips below the top of the line. The thinking at the time was that if companies could train and run their AI models on underpowered chips, then there was no need for Nvidia's pricey high-powered systems. But that narrative quickly flipped as chip companies pointed out that those same AI models benefit from running on powerful AI computers, allowing them to reason over more information more quickly than they would while running on less-advanced systems.

'I think there's still a misperception that inferencing is trivial and easy,' Deierling said. 'It turns out that it's starting to look more and more like training as we get to [an] agentic workflow. So all of these networks are important. Having them together, tightly coupled to the CPU, the GPU, and the DPU [data processing unit], all of that is vitally important to make inferencing a good experience.'

Nvidia's rivals are, however, circling. AMD is looking to grab more market share from the company, and cloud giants like Amazon, Google, and Microsoft continue to develop their own AI chips. Industry groups also have their own competing networking technologies, including UALink, which is meant to go head-to-head with NVLink, explained Forrester analyst Alvin Nguyen. But for now, Nvidia continues to lead the pack. And as tech giants, researchers, and enterprises continue to battle over Nvidia's chips, the company's networking business is all but guaranteed to keep growing as well.

Email Daniel Howley at dhowley@ Follow him on X/Twitter at @DanielHowley.

Coherent Recognized as One of NVIDIA's Ecosystem Innovation Collaborators for Co-Packaged Optics at GTC

Yahoo

19-03-2025



PITTSBURGH, March 18, 2025 (GLOBE NEWSWIRE) -- Coherent Corp. (NYSE: COHR), a global leader in photonics, announced its collaboration with NVIDIA on silicon photonics networking switches using co-packaged optics (CPO). This ecosystem, announced at GTC, will allow AI factories to connect millions of GPUs.

'We are pleased to be NVIDIA's collaborator on this new transceiver form factor,' said Jim Anderson, CEO of Coherent. 'We expect that CPO will further accelerate the expansion of optical networking in the datacenter.'

'AI factories are growing and networking infrastructure must evolve to keep pace,' said Gilad Shainer, senior vice president of Networking at NVIDIA. 'NVIDIA's collaboration with innovators, such as Coherent, on silicon photonics will propel the next generation of AI.'

'As datacenter networking speeds and bandwidth continue to rapidly increase, the importance of optical networking in datacenter architectures will continue to grow. Co-packaging technology is expected to add to the expansion of the datacenter optical networking market for years to come,' said Pat Moorhead, Founder and CEO of Moor Insights & Strategy. 'As a market leader in optical networking, Coherent is well-positioned to take advantage of this trend.'

Coherent will showcase its comprehensive range of optical networking technologies at OFC 2025, April 1-3 in San Francisco, Calif., including CPOs, and CPO-enabling lasers and components.

About Coherent

Coherent empowers market innovators to define the future through breakthrough technologies, from materials to systems. We deliver innovations that resonate with our customers in diversified applications for the industrial, communications, electronics, and instrumentation markets. Coherent has research and development, manufacturing, sales, service, and distribution facilities worldwide. For more information, please visit us at

Media Contact
innovations@
