Latest news with #SpirentCommunications


Time of India
3 days ago
- Business
- Time of India
Keysight to divest Spirent units to VIAVI as part of US consent agreement
Electronic equipment maker Keysight Technologies on Tuesday said it will divest Spirent Communications' high-speed ethernet and other business lines to VIAVI Solutions as part of a consent agreement with the U.S. Department of Justice. On Monday, the DOJ required Keysight to divest three of Spirent's businesses, its high-speed ethernet, network security and channel emulation lines, to address antitrust concerns tied to its acquisition of the British telecommunications firm, a purchase originally valued at $1.5 billion.


Axios
3 days ago
- Business
- Axios
Trump changes the antitrust formula
Keysight Technologies will receive U.S. antitrust approval for its $1.5 billion purchase of British telecom testing firm Spirent Communications after agreeing to divest three of Spirent's businesses. Why it matters: Remedies appear to be back, after a Biden-era antitrust regime that focused more on injunctions. That's true both at the DOJ, which handled the Keysight/Spirent deal, and at the FTC, which last week applied structural remedies in approving the $35 billion merger of Synopsys and Ansys. "If competitive concerns are discrete and a robust carve-out is feasible, settlement is once again a realistic path to closing," antitrust attorney John Ceccio wrote on LinkedIn.

Yahoo
4 days ago
- Business
- Yahoo
U.S. to require asset divestitures for Keysight purchase of Spirent
(Reuters) - The U.S. government on Monday said it will require asset divestitures before allowing Keysight Technologies to complete its all-cash purchase of Spirent Communications. In court filings, the U.S. Department of Justice said the proposed merger of two of the largest global providers of high-speed ethernet testing, network security testing and radio frequency channel emulators might otherwise substantially lessen competition and harm customers, violating antitrust law. Keysight, based in Santa Rosa, California, agreed in March 2024 to buy U.K.-based Spirent for 1.16 billion British pounds (now about $1.56 billion).


Forbes
31-03-2025
- Business
- Forbes
How AI Data Centers Are Shaping The Future Of Ethernet
Eric Updyke, CEO at Spirent Communications.

The high-speed ethernet (HSE) industry is undergoing an unprecedented boom, and we only need two letters to explain why: AI. Millions of graphics processing units (GPUs) and other accelerators are being deployed for AI infrastructures. Data center capital investment is climbing to half a trillion dollars. Shipments of 400G and 800G ethernet ports are exploding, exceeding even the most optimistic analyst projections.

We're witnessing a paradigm shift that will transform the ethernet ecosystem. But how exactly will this transformation play out? Which technologies will dominate data center infrastructures in the coming years, and why? What do companies building out AI clusters know today that they didn't before? And why is testing for these infrastructures proving so difficult—to the point that doing it effectively is becoming a strategic advantage?

In 2023, we worked with stakeholders across the HSE ecosystem—hyperscalers, service providers, enterprises, network equipment manufacturers (NEMs) and others—totaling 340 engagements worldwide. Through this work, we've learned a great deal about where the industry is headed. The bottom line? This is ethernet's most consequential evolution in decades.

According to Dell'Oro proprietary research, the HSE market will grow from 70 million ports shipped in 2023 to more than 240 million between 2024 and 2026. This growth will be driven from the top, as hyperscalers demand faster, more efficient networks to support exponential growth in cloud and AI traffic. But demand is surging across the rest of the ecosystem as well.

Why are operators choosing high-speed ethernet for AI clusters? This hasn't always been the case. Historically, some used ethernet, while others favored lossless InfiniBand (especially for large model-training clusters) and others used their own proprietary connectivity. Increasingly, though, ethernet is taking the lead for these networks. It's projected that ethernet port shipments will overtake InfiniBand by 2028.

Port speed evolution depends on which 'AI network' you mean. Front-end infrastructures that ingest training data, for example, will largely continue using 400G ethernet through 2025. For back-end AI training and inferencing networks, however—the networks connecting all those specialized GPUs—the future starts now. According to Dell'Oro Group, the majority of switch ports deployed in AI back-end networks will be 800G ethernet by 2025 and 1.6-terabit ethernet by 2027.

To understand why AI is having such a profound effect on data centers and the HSE market, we need to appreciate just how extreme the demands are that these workloads place on data center networks. According to Dell'Oro Group, multiple large AI models already process trillions of dense parameters, and that number is increasing tenfold every year. To meet this explosive demand, data center operators are deploying GPUs and other accelerators (xPUs) as quickly as possible, scaling to thousands, even tens of thousands of distributed nodes. And they're building separate, scalable back-end ethernet networks to connect them, increasingly via spine-leaf architectures using the RDMA over converged ethernet version 2 (RoCEv2) protocol.

These back-end networks demand extreme scalability and bandwidth approaching 1 Tbps per xPU, but that's just the start. They must support thousands of synchronized jobs in parallel, bursty east-west traffic patterns and data- and compute-intensive workloads.
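To make the scale of these back-end fabrics concrete, here is a minimal sizing sketch. It assumes a non-oversubscribed two-tier leaf-spine fabric with one 800G port per accelerator and an illustrative radix-64 switch; none of these specific parameters come from the article.

```python
# Back-of-the-envelope sizing for a non-blocking two-tier leaf-spine
# back-end fabric (illustrative assumptions: radix-64 switches, one
# 800G port per accelerator; these figures are not from the article).

def size_leaf_spine(num_xpus: int, radix: int = 64) -> dict:
    """Rough switch and port counts for a 1:1 (non-oversubscribed) fabric."""
    down_per_leaf = radix // 2              # xPU-facing ports on each leaf
    spines = radix // 2                     # one uplink from each leaf to each spine
    leaves = -(-num_xpus // down_per_leaf)  # ceiling division
    if leaves > radix:
        raise ValueError("too many leaves for this radix; a third tier is needed")
    leaf_spine_links = leaves * spines
    return {
        "leaves": leaves,
        "spines": spines,
        "xpu_facing_ports": num_xpus,
        "fabric_ports": 2 * leaf_spine_links,  # both ends of every leaf-spine link
    }

# Example: a 2,048-accelerator cluster already consumes more than 6,000 800G switch ports.
print(size_leaf_spine(2048))
# -> {'leaves': 64, 'spines': 32, 'xpu_facing_ports': 2048, 'fabric_ports': 4096}
```

Even at this modest scale, the fabric uses roughly three switch ports per accelerator before any front-end, storage or management networking is counted, which is one way to see why 800G port shipments keep outrunning analyst projections.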
Critically, these back-end networks must deliver extremely low network latency with zero packet loss to optimize job completion times, since even a single delayed flow can impede all nodes in the cluster.

Why are packet loss and latency so deadly for AI workloads? It's a function of the massive investments operators are making to build these infrastructures. Look at it this way: When an AI cluster reaches a scale of thousands of distributed xPUs, the back-end network effectively becomes the computer. If it's not operating efficiently, those delays translate to serious costs. A 1% packet loss rate, for example, can degrade performance by 30% or more. If you spend $1 billion to build an AI infrastructure, and your xPUs are sitting idle a third of the time, that equates to hundreds of millions in lost value over the life of that investment (a rough worked example of this arithmetic appears at the end of this article). It's why network performance and efficiency are so critical to AI—and ultimately, to every data center, service provider and enterprise network running AI workloads. It's among the biggest lessons customers have learned in this fast-evolving space.

To meet this challenge, ethernet itself must evolve. Indeed, multiple lossless ethernet efforts are now under way, most notably the ultra-ethernet transport (UET) specification, which optimizes congestion control and RDMA over ethernet for AI workloads. But operators must also be able to conduct exhaustive performance testing and validation for planned network designs—ideally before deployment.

This is more difficult than it might seem. Previously, the only real way to test AI fabrics was with actual AI traffic running on full-scale server farms. Effectively, you needed an AI data center to test an AI data center—an exorbitantly expensive proposition, if it was possible at all.

Fortunately, the state of the art is evolving here, too. Today, a new generation of AI network testing solutions can help organizations thoroughly stress-test these fabrics and identify potential bottlenecks before they deploy. These innovations are helping the AI leaders—and soon, other parts of the ecosystem—continually test and verify both planned and existing AI infrastructures. They give organizations a means to optimize network performance at a much lower cost, so that everyone—including stakeholders across the ethernet ecosystem—can benefit from the AI revolution.
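As referenced above, here is a minimal sketch of the idle-capacity arithmetic. The $1 billion capital figure and the roughly 30% degradation at a 1% packet loss rate come from the article itself; the straight-line mapping of idle time to stranded capital is an illustrative simplification, not a published model.

```python
# A minimal sketch of the cost arithmetic described in the article.
# The $1B capex and ~30% degradation figures are taken from the text;
# treating stranded value as capex * idle_fraction is a simplification.

def stranded_capex(capex_usd: float, idle_fraction: float) -> float:
    """Capital effectively sitting idle when accelerators are under-utilized."""
    return capex_usd * idle_fraction

cluster_capex = 1_000_000_000   # $1B AI infrastructure build-out
degradation = 0.30              # ~30% performance loss at a 1% packet loss rate

print(f"Stranded value: ${stranded_capex(cluster_capex, degradation):,.0f}")
# -> Stranded value: $300,000,000 -- the "hundreds of millions" the article cites
```

The point is proportion rather than precision: when a third of a billion-dollar cluster is effectively idle, even small reductions in packet loss and tail latency justify substantial investment in pre-deployment testing.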