
Latest news with #NeuroEdge

Millennium M2000 Steals The Show At Cadence Live 2025

Forbes

a day ago


Millennium M2000 Steals The Show At Cadence Live 2025

After last year's Cadence Live event, it was going to be hard to match the energy and star power of CEO guests like Nvidia's Jensen Huang and Qualcomm's Cristiano Amon. This year, Cadence brought back Huang, but instead of Amon, Cadence had Intel's Lip-Bu Tan. It was a homecoming for Tan, who helped turn around Cadence as its CEO for over a decade. He is now CEO of Intel and trying to turn around that company; he also recently had Cadence on stage at the Intel Foundry Direct Connect event to talk about their partnership. (Note: Cadence is an advisory client of my firm, Moor Insights & Strategy.)

At the event, Cadence announced a slew of new products from across the range of the company's offerings. As Cadence and many other tech companies embrace a system-level approach to solving some of the world's most complex design problems, we've seen a shift towards more integrated solutions. This is certainly true for Cadence, whether that solution is a software product, an IP design or a hardware platform. When I spoke with Cadence's CEO Anirudh Devgan, it was clear that the company wants to expand its IP capabilities and enhance its engagement with customers on building out its portfolio.

In line with this approach, Cadence announced its new Tensilica NeuroEdge 130 AI Co-Processor, which is specifically targeted at agentic AI applications. This processor is designed to complement the NPU by allowing it to offload some simpler tasks. Cadence says the co-processor provides a 30% area savings and a 20% reduction in power compared to Tensilica's Vision DSPs. The new co-processor can work with any NPU, whether it's based on Cadence's Neo NPU IP or comes from a third party.

In addition to the new IP, Cadence announced the new Millennium M2000 Supercomputer with Nvidia Blackwell GPUs.
This built on last year's launch of the Millennium M1, which combined Cadence's Enterprise Multiphysics platform with advanced computational fluid dynamics capabilities, accelerated by both CPUs and GPUs. On stage at Cadence Live, Huang talked about how excited he was for the Millennium M2000, so much so that he publicly proclaimed a purchase order for ten M2000s right then.

I also learned from the Cadence and Nvidia teams that the M2000 will come in two flavors, one with HGX B200 clusters and one that uses RTX Pro 6000 Blackwell GPUs. The difference is that the RTX Pro is more of a blend of visualization and computation, while the HGX B200 is purely for computation. This means that each customer can buy the right M2000 based on its specific use case and whether visualization is key to its needs.

The impressive thing about the M2000 is that it helps to address the three core areas of Cadence's business: silicon design, system design and drug discovery. Cadence claims that a single M2000 replaces 10,000 CPUs for silicon design, 80,000 CPUs for airframe simulations and 150,000 CPUs for molecular discovery. Cadence customer MediaTek even said that the M2000 enabled previously impossible simulations for dynamic IR, while customer Supermicro said the M2000 delivered 8x faster thermal simulations for its server designs.

The use case that really stuck with me was aerospace, specifically the idea that a single M2000 can simulate a multibillion-cell airframe in under 24 hours, something that would normally take a Top 500 supercomputer eight days. The Millennium M2000 is enabling aerospace startups like Boom Aero to take complex CFD simulations from one week down to two days. Many startups in the defense and aerospace industry cannot afford to build massive supercomputers to compete with prime contractors such as General Atomics, Northrop Grumman or Lockheed Martin.
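As a quick sanity check, the speedups implied by those aerospace claims work out cleanly. This is illustrative arithmetic only, using just the figures quoted above:

```python
# Back-of-the-envelope check of the speedups quoted above.
# All inputs are the figures cited in the article; nothing else is assumed.

HOURS_PER_DAY = 24

# Airframe CFD: a Top 500 supercomputer takes 8 days; the M2000, under 24 hours.
supercomputer_hours = 8 * HOURS_PER_DAY   # 192 hours
m2000_hours = 24
airframe_speedup = supercomputer_hours / m2000_hours  # 8.0

# Boom Aero's CFD runs: one week down to two days.
boom_speedup = 7 / 2  # 3.5

print(f"Airframe speedup: {airframe_speedup:.0f}x, Boom speedup: {boom_speedup:.1f}x")
```

So "under 24 hours" versus eight days is at least an 8x speedup, and Boom's one-week-to-two-days improvement is 3.5x.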
They must be agile, yet also still have access to the simulations necessary to test their ideas and iterate quickly. I believe that having access to a solution like the M2000 could give smaller American aerospace and defense companies an opportunity to compete without needing to stand up entire datacenters.

During his keynote, Devgan spoke about AI more times than I could count. He recapped the company's various AI platforms, which include JedAI with its various agentic AIs. But what I really liked was his visual representation of how a silicon agent might utilize creation and execution agents that in turn control other subsystem agents to deliver different levels of automation. Cadence has created an autonomous system-level ranking, much like the car industry has for autonomous driving, but instead of driving a car, it's building a chip. He spoke about how Cadence expects that AI-driven chip design will likely reach an inflection point this year, with the majority of advanced chips being designed using AI-enabled tools. He also talked about how agentic AI will increase the number of chip designs and further shorten time-to-market.

In addition to its broader AI approach, Cadence also announced the Cerebrus AI Studio, one of the agentic AIs specifically targeted at helping implement digital designs. Cerebrus AI Studio uses agentic AI to improve PPA optimization, delivering gains of 10% in power, 1% in density and 30% in timing. Samsung used it to achieve a 4x improvement in productivity for one project and an 11% improvement to PPA in AI subsystems, while STMicroelectronics used it to optimize multiple SoCs in one platform. This is an area where Cadence has had room for improvement compared to Synopsys, so I'm glad to see the company forging ahead to improve this part of its overall offerings.
I believe that Cadence is doing a good job of continuing to expand its capabilities and evolve into a systems solution company, attacking both bigger and smaller problems at the same time. I get really excited about the possibilities of something like the Millennium M2000 supercomputer, especially when I think about its potential applications in science, aerospace and computing.

(Image: Cadence's levels of AI-automated chip design. Anshel Sag)

That said, Cadence is also still very much an EDA tools provider, and it's good to see the company leaning into that with more AI-enabled tools and a more complete agentic approach to chip design. To be most successful, the company must continue to balance its expansion into solving ultra-difficult scientific and engineering challenges using its supercomputers with maintaining its stronghold in EDA for chip makers. Cadence continues to be one of the most important companies in the world for enabling chip design; based on its latest announcements, its momentum shows no signs of slowing.

Speeding AI With Co-Processors

Forbes

07-05-2025


Speeding AI With Co-Processors

(Image: An artist's conception of a high-speed chip. Cadence Design)

Most chips today are built from a combination of customized logic blocks that deliver some special sauce and off-the-shelf blocks for commonplace technologies such as I/O, memory controllers and the like. But one needed function has been missing: an AI co-processor.

In AI, the special sauce has been the circuits that do the heavy lifting of parallel matrix operations. However, other types of operations used in AI do not lend themselves well to such matrix and tensor silicon. These scalar and vector operators, used for computing activations and averages, are typically calculated on a CPU or on a digital signal processor (DSP) built to speed vector operations. Designers of custom AI chips often couple a neural processing unit (NPU) with a DSP block from companies like Cadence or Synopsys to accelerate scalar and vector calculations. However, these DSPs also include many features that are irrelevant to AI. Consequently, designers are spending money and power on unneeded features. (Both Cadence and Synopsys are clients of Cambrian-AI Research.)

Large companies that design custom chips address this by building in their own AI co-processor. Nvidia's Jetson Orin uses a vector engine called the PVA, Intel's Gaudi uses its own vector processor within its TPCs, and Qualcomm's Snapdragon has a vector engine within the Hexagon accelerator, as does the Google TPU.

(Image: AI co-processors work alongside AI matrix engines in many accelerators today. Cadence Design)

But what if you are an automotive, TV or edge infrastructure company designing your own AI ASIC for a specific application? Until now, you had to either design your own co-processor or license a DSP block and use only part of it for your AI needs.

The New AI Co-Processor Building Block

Cadence Design has now introduced an AI co-processor, called the Tensilica NeuroEdge, which delivers roughly the same performance as a DSP but consumes 30% less die area (cost) on an SoC.
Since NeuroEdge was derived from the Cadence Vision DSP platform, it is fully supported by an existing, robust software stack and development environment.

(Image: An AI SoC can have CPUs, AI blocks like GPUs, vision processors, NPUs and now AI co-processors to accelerate the entire AI workload. Cadence Design)

The new co-processor can be used with any NPU, is scalable, and helps circuit design teams get to market faster with a fully tested, configurable block. Designers will combine CPUs from Arm or RISC-V, NPUs from EDA firms like Synopsys and Cadence, and now the "AICP" from Cadence, all off-the-shelf designs and chiplets.

(Image: The NeuroEdge AI Co-Processor. Cadence Design)

The AICP was born from the Vision DSP and is configurable to meet a wide range of compute needs. NeuroEdge supports up to 512 8x8 MACs with FP16, FP32 and BF16 support. It connects with the rest of the SoC using AXI or Cadence's HBDO (High-Bandwidth Interface). Cadence has high hopes for NeuroEdge in the automotive market, and the design is ready for ISO 26262 FuSa certification.

(Image: An architectural overview of the AI Co-Processor. Cadence Design)

NeuroEdge fully supports the NeuroWeave AI compiler toolchain for fast development, with a TVM-based front end.

(Image: The software stack for development of AI applications using the AI Co-Processor. Cadence Design)

Our Takeaway

With the rapid proliferation of AI processing in physical AI applications such as autonomous vehicles, robotics, drones, industrial automation and healthcare, NPUs are assuming a more critical role. Today, NPUs handle the bulk of the computationally intensive AI/ML workloads, but many non-MAC layers, including pre- and post-processing tasks, are better offloaded. Current CPU, GPU and DSP solutions require tradeoffs, and the industry needs a low-power, high-performance solution that is optimized for co-processing and allows future-proofing for rapidly evolving AI processing needs. Cadence is the first to take that step.
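The division of labor described above can be sketched in a few lines. This is a hypothetical illustration of the partitioning idea, not Cadence's software stack or API; the layer names and routing rule are invented for the example:

```python
# Hypothetical sketch of NPU / AI co-processor workload partitioning.
# Layer names and the routing rule are illustrative, not Cadence's API.

MAC_HEAVY = {"conv", "matmul", "attention"}  # parallel matrix/tensor ops

def route(layer_type: str) -> str:
    """Send MAC-dominated layers to the NPU; scalar/vector layers
    (activations, normalization, pooling) to the AI co-processor."""
    return "NPU" if layer_type in MAC_HEAVY else "co-processor"

# A toy inference pipeline: the NPU keeps the matrix math, while the
# co-processor absorbs the non-MAC pre- and post-processing layers.
pipeline = ["conv", "relu", "conv", "pooling", "matmul", "softmax"]
placement = [(layer, route(layer)) for layer in pipeline]
```

In this sketch, `conv` and `matmul` land on the NPU while `relu`, `pooling` and `softmax` are offloaded, which is the split the article describes: the NPU stays busy with matrix work instead of stalling on scalar and vector layers.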
Disclosures: This article expresses the opinions of the author and is not to be taken as advice to purchase from or invest in the companies mentioned. My firm, Cambrian-AI Research, is fortunate to have many semiconductor firms as our clients, including Baya Systems, BrainChip, Cadence, Cerebras Systems, D-Matrix, Esperanto, Flex, Groq, IBM, Intel, Micron, NVIDIA, Qualcomm, Graphcore, Synopsys, Tenstorrent, Ventana Microsystems, and scores of investors. I have no investment positions in any of the companies mentioned in this article. For more information, please visit our website at

Cognizant Technology Solutions Corp (CTSH) Q4 2024 Earnings Call Highlights: Strong Revenue ...

Yahoo

06-02-2025


Cognizant Technology Solutions Corp (CTSH) Q4 2024 Earnings Call Highlights: Strong Revenue ...

  • Revenue: $5.1 billion, up 6.7% year over year in constant currency for Q4 2024
  • Full-Year Revenue: $19.7 billion, up 1.9% year over year in constant currency
  • Adjusted Operating Margin: 15.7% for Q4 2024; 15.3% for the full year
  • Health Sciences Revenue Growth: over 10% year over year
  • Financial Services Revenue Growth: approximately 3% year over year
  • Free Cash Flow: $837 million for Q4 2024; $1.8 billion for the full year
  • Cash and Short-Term Investments: $2.2 billion at year-end
  • Net Cash: $1.3 billion at year-end
  • Bookings Growth: 11% year over year for Q4 2024
  • Full-Year 2025 EPS Guidance: $4.90 to $5.06, representing 3% to 7% growth
  • Expected 2025 Revenue Growth: 2.6% to 5.1%, or 3.5% to 6% in constant currency
  • Expected 2025 Adjusted Operating Margin: 15.5% to 15.7%
  • Capital Returned to Shareholders in 2024: $1.2 billion through share repurchases and dividends

Release Date: February 05, 2025

For the complete transcript of the earnings call, please refer to the full earnings call transcript.

Cognizant Technology Solutions Corp (NASDAQ:CTSH) reported strong year-over-year revenue growth of 6.7% in constant currency for Q4 2024, driven by large deal signings and improved organic growth. The company successfully completed its NextGen program, which contributed to an improved adjusted operating margin of 15.7% for the quarter. Cognizant Technology Solutions Corp (NASDAQ:CTSH) expanded its AI capabilities significantly, introducing new platforms like Flowsource, Neuro Edge, and Neuro Cybersecurity, enhancing its service offerings. The company achieved a historic high in client satisfaction scores (NPS) in 2024, indicating strong customer relationships and service quality. Strategic acquisitions, such as Thirdera and Belcan, have strengthened Cognizant Technology Solutions Corp (NASDAQ:CTSH)'s market position and contributed to its growth in new end markets.
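The headline growth figure implies a prior-year comparison that is easy to reconstruct. This is simple illustrative arithmetic on the numbers quoted above, ignoring currency effects beyond the stated constant-currency rate:

```python
# Implied prior-year Q4 revenue from the quoted constant-currency growth rate.
q4_2024_revenue_b = 5.1   # $ billions, as reported
growth_cc = 0.067         # 6.7% year over year, constant currency

q4_2023_revenue_b = q4_2024_revenue_b / (1 + growth_cc)
print(f"Implied Q4 2023 revenue: ${q4_2023_revenue_b:.2f}B")  # about $4.78B
```

In other words, the 6.7% constant-currency growth implies a Q4 2023 base of roughly $4.78 billion.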
Despite the positive revenue growth, the adjusted operating margin for the year declined by 40 basis points year over year, primarily due to the impact of acquisitions and increased compensation costs. The Products and Resources segment faced pressure due to a cautious discretionary environment across end markets like automotive, aerospace, and manufacturing. Communication, Media, and Technology segments experienced consistent pressure as clients focused on cost optimization, impacting discretionary spending. The company anticipates a modest sequential margin decline in Q1 2025 due to typical seasonality, which may affect short-term profitability. Cognizant Technology Solutions Corp (NASDAQ:CTSH) faces challenges in maintaining momentum in smaller deals, which are crucial for immediate revenue realization.

Q: Can you discuss the momentum in large deal bookings and your outlook for 2025?

A: Ravi Singisetti, CEO, highlighted that Cognizant signed 29 large deals in 2024, up from 17 in 2023. The company has a strong book-to-bill ratio of 1.4 and is seeing a return of smaller deals, which are monetized within the same year. This momentum is expected to continue into 2025, with a broad spread across industries and geographies, making it resilient and sustainable.

Q: How does the growth outlook for 2025 compare to 2024, particularly across different industries?

A: Ravi Singisetti, CEO, stated that growth is expected to be broad-based across industries. Healthcare showed over 10% year-over-year growth, and Financial Services is seeing a return of discretionary spending. The company is also seeing growth in communications, technology, and manufacturing, bolstered by the Belcan acquisition. The Americas remain a principal market, but international markets are starting to contribute more significantly.

Q: What are Cognizant's plans for AI investments and how do they impact the IT services industry?
A: Ravi Singisetti, CEO, explained that Cognizant is investing in AI to enhance productivity and innovation. The company has 1,200 AI projects underway, focusing on cloud migration, data modernization, and agentification. Cognizant's AI platforms are helping clients accelerate their digital transformation, and the company sees AI as a force multiplier that will unlock new service pools and expand addressable spend.

Q: Can you elaborate on the margin performance and expectations for 2025?

A: Jatin Dalal, CFO, noted that the fourth-quarter adjusted operating margin was 15.7%, driven by the NextGen program and operational improvements. The company expects margins to expand by 20 to 40 basis points in 2025, supported by AI-led productivity, continued operational rigor, and successful execution of large deals.

Q: How is Cognizant approaching hiring and talent management in 2025?

A: Jatin Dalal, CFO, mentioned that Cognizant will continue to hire as needed to support growth, with an expected increase in headcount from the first quarter of 2025. The company is well-positioned in terms of talent availability across geographies and plans to leverage its capabilities to meet demand.

This article first appeared on GuruFocus.
