
Latest news with #Gaudi3

Intel unveils new Arc Pro GPUs & Gaudi 3 accelerators for AI

Techday NZ

21-05-2025

  • Business
  • Techday NZ

Intel unveils new Arc Pro GPUs & Gaudi 3 accelerators for AI

Intel has introduced new GPUs for professionals and AI developers and announced wider availability for its Gaudi 3 AI accelerators. The company unveiled the Intel Arc Pro B60 and Arc Pro B50 GPUs, expanding the Arc Pro product family for AI inference and professional workstation use. These new graphics cards feature Intel's Xe2 architecture, combining Xe Matrix Extensions (XMX) AI cores and advanced ray tracing units. According to Intel, these specifications are aimed at creators, engineers and AI developers who require high memory capacity and scalable performance for demanding workloads.

The Arc Pro B60 and Arc Pro B50 GPUs are designed to address both architecture, engineering and construction (AEC) workloads and AI inference tasks. The new GPUs offer 24GB and 16GB of memory respectively and allow for multi-GPU configurations. Intel stated that the Arc Pro B-Series is built to offer stability and performance through independent software vendor (ISV) certifications and supports consumer and professional drivers on Windows. On Linux platforms, the Arc Pro B-Series GPUs incorporate a containerised software stack that simplifies AI deployment. Intel indicated that this stack will be enhanced over time with further features and optimisations, allowing for broader compatibility and performance improvements. The combination of higher memory capacity and targeted software support is aimed at delivering a scalable, cost-efficient computing platform for professionals working with AI and advanced workstation applications.

Intel also revealed a new platform code-named Project Battlematrix, a workstation-class Xeon-based system supporting up to eight Arc Pro B60 24GB GPUs. This configuration provides up to 192GB of video RAM and is designed to support medium-sized AI models with up to 150 billion parameters. Intel asserts that this solution is intended to reduce typical friction points experienced by AI developers working on sizeable model deployments.

Vivian Lien, Vice President and General Manager of Client Graphics at Intel, commented: "The Intel Arc Pro B-Series showcases Intel's commitment in GPU technology and ecosystem partnerships. With Xe2 architecture's advanced capabilities and a growing software ecosystem, the new Arc Pro GPUs deliver accessibility and scalability to small and medium-sized businesses that have been looking for targeted solutions."

Intel plans to begin sampling the Arc Pro B60 GPU through add-in board partners such as ASRock, Gunnir, Lanner, Maxsun, Onix, Senao and Sparkle starting from June 2025. The Arc Pro B50 GPU will be available from Intel-authorised resellers from July 2025.

In addition to the GPU announcements, Intel introduced expanded deployment options for its Gaudi 3 AI accelerators. These will be available both as PCIe cards and in rack-scale system reference designs, with the PCIe cards designed to provide scalable AI inferencing within existing data centre infrastructure. Intel stated that these cards will allow organisations from small businesses to large enterprises to run a range of models, from Llama 3.1 8B to larger deployments such as full-scale Llama 4 Scout or Maverick models, thanks to various scalable configurations. The Gaudi 3 PCIe cards are scheduled for availability in the second half of 2025.

The rack-scale reference designs for Gaudi 3 support up to 64 accelerators per rack, with 8.2TB of high-bandwidth memory. The system employs an open, modular architecture designed to help organisations avoid vendor lock-in and simplify maintenance, and uses liquid cooling to manage thermal performance and lower total cost of ownership. For cloud service providers, Intel noted that Gaudi 3's rack-scale approach supports both custom and Open Compute Project (OCP) infrastructures, offering flexibility and support for large-scale model training and real-time inference.

The announcements coincide with Intel's 40th anniversary of operations in Taiwan. Lip-Bu Tan, Intel's Chief Executive Officer, said: "For the past 40 years, the power of our partnership with the Taiwan ecosystem has fueled innovation that has changed our world for the better. This week, we are renewing our commitment to our partners as we work to build a new Intel for the future. Together, we will create great products that delight our customers and capitalize on the exciting opportunities ahead."

Intel also announced the general availability of Intel AI Assistant Builder, a lightweight open framework for developing and deploying AI agents on Intel-based systems. The software, now available on GitHub, is aimed at allowing developers and partners to quickly build and roll out purpose-built AI agents, with recent examples incorporated by Acer and ASUS. The AI Assistant Builder is designed for use on local AI-capable PCs and, according to Intel, can be customised for organisational requirements and direct customer deployments.

Intel stated that it will present its latest processor and graphics technologies at Computex 2025 in Taipei, including the new Arc Pro and Gaudi 3 products.
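
The headline capacity figures above follow from simple arithmetic. The back-of-envelope sketch below shows how the 192GB and roughly 8.2TB totals arise and why a model of around 150 billion parameters can plausibly fit in an eight-GPU Project Battlematrix system. The 128GB-per-Gaudi-3 HBM figure and the one-byte-per-parameter (8-bit quantised) weight size are assumptions made for illustration, not specifications quoted in the article.

```python
# Back-of-envelope memory arithmetic for the configurations described above.
# The per-accelerator HBM capacity and the 8-bit weight size are assumptions
# for illustration, not figures taken from Intel's announcement.

arc_pro_b60_vram_gb = 24
gpus_per_battlematrix = 8
pooled_vram_gb = arc_pro_b60_vram_gb * gpus_per_battlematrix
print(pooled_vram_gb)                      # 192 GB of pooled video RAM

gaudi3_hbm_gb = 128                        # assumed HBM per Gaudi 3 accelerator
accelerators_per_rack = 64
rack_hbm_tb = accelerators_per_rack * gaudi3_hbm_gb / 1000
print(rack_hbm_tb)                         # ~8.2 TB of HBM per rack

params = 150e9                             # "up to 150 billion parameters"
bytes_per_param = 1                        # 8-bit quantised weights (assumed)
weights_gb = params * bytes_per_param / 1e9
print(weights_gb)                          # ~150 GB of weights, leaving headroom
                                           # for activations and KV cache in 192 GB
```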

Intel launches new AI-centric GPUs at Computex 2025

Hindustan Times

20-05-2025

  • Business
  • Hindustan Times

Intel launches new AI-centric GPUs at Computex 2025

Intel, at the ongoing Computex 2025 event, has unveiled a new range of GPUs primarily intended for AI and workstation use cases. These include new Intel Arc Pro GPUs as well as the Intel Gaudi 3 AI accelerators. These are professional-grade GPUs, designed mainly for AI workloads and workstation environments. Alongside this, Intel has also launched its AI Assistant Builder, an open-source framework for building and running AI agents on Intel-based AI PCs; it has been released on GitHub in a beta capacity. Here's what you need to know about the latest announcements.

Intel has launched the Arc Pro B60 and Pro B50 GPUs, which are based on the Xe2 architecture. These feature Intel's Xe Matrix Extensions (XMX) AI cores and ray tracing units, enabling high-end performance for creators, developers, and engineers. Intel says that both GPUs are designed for today's demanding AI inference workloads and workstation applications. The Arc Pro B60 GPU comes with 24GB of memory, while the Arc Pro B50 GPU offers 16GB of memory. Both also support multi-GPU scalability. Intel also states that these GPUs are optimised for AEC (Architecture, Engineering, and Construction) and inference workloads. The GPUs are compatible with both consumer and professional drivers on Windows. On Linux, they support a containerised software stack to simplify AI deployments.

Intel also announced new deployment options for its Gaudi 3 AI accelerators. The Gaudi 3 PCIe cards now support AI inferencing within existing data centre server environments. According to Intel, customers can run models ranging from Llama 3.1 8B to full-scale models like Llama 4 Scout or Maverick, thanks to the scalability of Intel's hardware. Intel says its Gaudi 3 Rack Scale System reference designs are built for scalability and flexibility, supporting up to 64 accelerators per rack and 8.2TB of high-bandwidth memory. The systems also feature a modular design to prevent vendor lock-in and utilise liquid cooling to maintain performance while keeping total cost of ownership low.

Intel states that the Arc Pro B60 GPU will be sampled by board partners such as ASRock, Sparkle, Gunnir and others starting June 2025. The Arc Pro B50 GPU will be available from Intel-authorised resellers in July 2025, with broader availability expected in the second half of 2025.
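
For readers curious what running one of the smaller models named above (for example Llama 3.1 8B) on Gaudi hardware can look like in practice, here is a minimal sketch using Hugging Face Transformers with the Habana/Intel Gaudi PyTorch bridge. The bridge import, the "hpu" device string and the model identifier reflect publicly documented tooling rather than code from Intel's announcement, and the exact packages and driver setup may differ by software release.

```python
# Minimal sketch of Llama-class inference on an Intel Gaudi (HPU) device, assuming
# the Gaudi software stack and the Habana PyTorch bridge are already installed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Importing the bridge registers the "hpu" device with PyTorch.
import habana_frameworks.torch.core as htcore  # noqa: F401

model_id = "meta-llama/Llama-3.1-8B-Instruct"  # example model mentioned in the article
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)
model = model.to("hpu").eval()

prompt = "Summarise Intel's Computex 2025 GPU announcements in two sentences."
inputs = tokenizer(prompt, return_tensors="pt").to("hpu")
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```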

Nvidia grasping to hold onto China's AI chip market

Asia Times

19-04-2025

  • Business
  • Asia Times

Nvidia grasping to hold onto China's AI chip market

Two days after the Trump administration restricted his company's sales to China, Nvidia CEO Jensen Huang was in Beijing on April 17 to salvage what he referred to as a very important market. Huang, whose visit merited a reception by Chinese Vice Premier He Lifeng at the Great Hall of the People, also met with Ren Hongbin, chairman of the China Council for the Promotion of International Trade (CCPIT), and Liang Wenfeng, founder and CEO of DeepSeek.

He Lifeng, who is also a member of the Politburo, said, 'We welcome more US firms, including Nvidia, to deepen their presence in the Chinese market and leverage their strengths here to gain an edge in global competition.' In reply, Huang said, 'We look forward to deepening our presence in China and supporting the advancement of the local tech ecosystem.' In a separate meeting arranged by CCPIT, Huang told Ren, 'We hope to continue to cooperate with China.' Chinese media also quoted him as saying that 'Nvidia will continue to make every effort to optimize its product lineup in compliance with regulatory requirements and will steadfastly serve the Chinese market.' In a meeting with Liang, Huang reportedly talked about how Nvidia might provide DeepSeek with AI processors that meet both the company's needs and regulatory requirements.

Nvidia issued a statement saying, 'We regularly meet with government leaders to discuss our company's products and technology,' but these were not ordinary meetings. The company is now at the center of the increasingly acrimonious US-China trade and tech disputes.

It was an eventful week for Nvidia. On the evening of April 15, Nvidia revealed that exports of its H20 AI processors and similar devices to China and other countries of concern now require a license from the US government, an order that 'addresses the risk that the covered products may be used in, or diverted to, a supercomputer in China.' In after-hours trading, Nvidia's share price dropped 6.3% to US$105.10. By Thursday's close (Friday was a holiday), it was down to $101.42, bringing its year-to-date decline to 26.7%. With no license likely to be granted, Nvidia stated that its results for the current fiscal quarter ending on April 27 'are expected to include up to approximately $5.5 billion of charges associated with H20 products for inventory, purchase commitments, and related reserves.'

AMD, whose MI308 AI accelerators are subject to the same new restriction, dropped 7.1% in after-hours trading on Tuesday and finished the week down 27.5% since the beginning of the year. AMD expects to post special charges approaching $800 million. Intel's Gaudi 3 processor is also affected. For this and other reasons, Intel's share price was down 27% in the month to Thursday.

Like Nvidia's H20, AMD's MI308 was designed specifically with reduced performance to meet the requirements of previous US government restrictions on exports to China. So were redesigned versions of Intel's Gaudi 2 and Gaudi 3 AI accelerators. This is the third time since October 2022 that the US Commerce Department's Bureau of Industry and Security (BIS) has put a ceiling on the performance of AI processors that can be exported to China and then, after Nvidia, AMD and Intel designed new, less capable versions of their chips, lowered the ceiling. In this regard, President Trump is following the same policy as President Joe Biden.
For Nvidia, the restriction timeline runs as follows:

  • October 2022: The Biden administration blocked exports of Nvidia's A100 and H100 GPUs, then the company's most advanced AI processors.
  • November 2022: Nvidia launched the dumbed-down A800, which met BIS requirements for sale in China.
  • March 2023: Nvidia launched the H800, a low-performance version of the H100.
  • November 2023: BIS blocked exports of A800 and H800 GPUs.
  • March 2024: Nvidia launched the H20, which met the new, tighter BIS requirements.
  • April 2025: The Trump administration blocked exports of H20 GPUs.

The reasons for this chain of events are, first, that Chinese demand remained strong even as the performance of the available chips declined and, second, that Chinese AI capabilities continued to advance despite the restrictions. In addition to revealing the behind-the-curve incompetence of US government analysis and response, it demonstrates that, in the case of semiconductors, anything the Chinese want to buy, the US will refuse to sell, all the while complaining about its trade deficit with China.

The stunning success of Chinese AI model DeepSeek, which was trained using Nvidia H800 chips, set off another wave of McCarthyite paranoia among US politicians. As The New York Times pointed out in January, DeepSeek 'built a cheaper, competitive chatbot with fewer high-end computer chips than US behemoths like Google and OpenAI, showing the limits of chip export control.'

On April 16, Chairman John Moolenaar (R-MI) and Ranking Member Raja Krishnamoorthi (D-IL) of the House Select Committee on the Chinese Communist Party released a report entitled 'DeepSeek Unmasked: Exposing the CCP's Latest Tool for Spying, Stealing, and Subverting US Export Control Restrictions.' The committee calls DeepSeek 'a serious national security threat to the United States.' Chairman Moolenaar stated that: DeepSeek isn't just another AI app — it's a weapon in the Chinese Communist Party's arsenal, designed to spy on Americans, steal our technology, and subvert US law. We now know this tool exploited US AI models and reportedly used advanced Nvidia chips that should never have ended up in CCP hands. That's why we're sending a letter to Nvidia to demand answers. American innovation should never be the engine of our adversaries' ambitions.

However, only two days earlier, on April 14, Nvidia announced plans to produce up to $500 billion worth of supercomputers and other AI infrastructure in the US in the next four years. To do so, Nvidia is working with Taiwanese contract manufacturers Foxconn (Hon Hai Precision) and Wistron to build factories in the US state of Texas. The Blackwell AI processors at the heart of the AI supercomputers made there will be supplied by Taiwanese semiconductor foundry TSMC from its factories in Arizona. The chips will be assembled, packaged and tested by Amkor and SPIL (Siliconware Precision Industries Co., Ltd.), also in Arizona. Amkor, as the name suggests, is Korean-American; SPIL is Taiwanese.

CEO Huang said, 'The engines of the world's AI infrastructure are being built in the United States for the first time. Adding American manufacturing helps us better meet the incredible and growing demand for AI chips and supercomputers, strengthens our supply chain and boosts our resiliency.' That was exactly what Trump wanted to hear. 'It's the Trump Effect in action,' read a press release from the White House.
Building AI supercomputers in the US won't be cheap and might take longer than Nvidia hopes, but with so many top-notch companies on board, it should eventually get done.

At the end of March, Chinese server maker H3C reported that it was running short of Nvidia H20 processors, which appear to be nearly sold out in China. Alibaba, Tencent, ByteDance and other Nvidia customers will also be affected, indicating that the new BIS restrictions are likely to disrupt China's AI computing industry as intended. The restrictions could also cost Nvidia up to $15 billion in annual sales in addition to the $5.5 billion in anticipated special charges, with that loss translating into gains for Chinese AI chip designers Huawei and Cambricon, and for H3C, which is developing its own solution.

DeepSeek is already using Huawei's new Ascend 910C AI processor, apparently the most advanced Chinese alternative to Nvidia. Cambricon, which was founded in 2016, is much smaller than Huawei but has become a darling of the Chinese stock market, rising by about five times over the past year. Huawei is not publicly traded. DeepSeek has already been rolled out across China to consumers, corporations, finance and other businesses, city governments, healthcare and other social services, and support services for the People's Liberation Army. Rather than having been designed 'to spy on Americans', as Congressman Moolenaar claims, it aims to provide AI solutions to practical problems across Chinese society.

Meanwhile, Dylan Patel and his colleagues at the highly respected technology newsletter SemiAnalysis write that Huawei's new CloudMatrix 384 AI system, which is based on the Ascend 910C, 'competes directly' with Nvidia's top-end GB200 Grace Blackwell Superchip and 'in some metrics is more advanced than Nvidia's rack scale solution.' 'Rack scale solution' refers to the complete data center system, including GPUs, servers, networking, storage, power management and cooling. In the view of SemiAnalysis, 'The engineering advantage is at the system level not just at the chip level, with innovation at the accelerator, networking, optics, and software layers… Huawei is a generation behind in chips, but its scale-up solution is arguably a generation ahead of Nvidia and AMD's current products on the market.' Huawei's solution uses more electricity, but SemiAnalysis concludes that 'The deficiencies in power are relevant but not a limiting factor in China.'

Once again, US sanctions appear to be too little, too late, and more likely to promote rather than prevent the advance of Chinese technology. Unless Trump changes tack in a deal with Beijing, Nvidia and its smaller American rivals are likely to find themselves increasingly marginalized in the booming Chinese market for AI processors.

Follow this writer on X: @ScottFo83517667

Inside Intel's Make-Or-Break Pivot

Yahoo

11-04-2025

  • Business
  • Yahoo

Inside Intel's Make-Or-Break Pivot

Intel Corporation (NASDAQ:INTC) finds itself in the midst of one of the most complex transformations in the company's history. Once the cornerstone of personal computing, Intel now faces the twin challenge of overcoming years of execution missteps while reimagining itself for the age of AI. New leader Lip-Bu Tan, who brings years of semiconductor experience to the helm, has Intel doubling down on domestic production, AI-focused computing, and a full-stack foundry strategy intended to reduce reliance on competitors like TSMC. Yet with profitability under pressure and competition heating up, investors are right to ask: is Intel's transformation rooted in substance or wishful thinking?

While the stock has stabilized in recent months, the fundamentals remain polarized. Revenue has begun to recover, product segments are picking up steam, and Intel is reaping strategic victories from the CHIPS Act. Yet earnings remain weak, cash flow remains spotty, and the core data center segment continues to slip. Here's a dive into Intel's business segments, strategic shifts, fundamentals, and valuation landscape to determine whether the turnaround thesis has legs, or whether investors are simply buying the excitement.

Intel's Client Computing Group (CCG) reported Q4 2024 revenue of $8.0 billion and operating income of $3.1 billion on a margin of 38.1%. That was the segment's highest profitability in a year and came as the company gains traction in AI PCs. Intel's on-device AI effort is resonating: more than 200 independent software vendors now enable more than 400 AI features. Intel expects to have shipped more than 100 million AI PCs by the end of 2025, led by the Panther Lake architecture that launches in the second half of the year.

Data Center and AI (DCAI) remains the more complicated segment. Revenue came in flat at $3.4 billion, yet operating margin fell to 6.9% as AMD's EPYC product line and Nvidia's leadership in the data center take their toll. Intel's countermeasures include the Gaudi 3 AI accelerators and a competing road map that leverages MRDIMM memory and enhanced Xeon platforms. Preliminary feedback from the newly formed x86 Ecosystem Advisory Group is positive, yet Intel still has work to do if it is going to shore up server-side market share.

Network and Edge (NEX) was the standout, earning $1.6 billion in revenue on 20.9% operating margins, its highest ever. The segment benefited from demand for industrial automation and edge AI, supported by the launch of Intel's Core Ultra processors at CES. Though smaller in absolute terms, the consistent margin growth in NEX offers a template for what scalable AI-driven growth looks like across Intel's portfolio.

Source: Intel's Earnings Release

Intel's turnaround does not occur in isolation. AMD's expansion, led by its Ryzen 9000 line, has propelled its client CPU market share well above 40%, up roughly 20 percentage points year-over-year. That is the direct result of Intel's blunders around its 13th and 14th Gen CPUs and the lukewarm reception for Arrow Lake-S. Intel still dominates volume thanks to deep OEM relationships, but the performance gap in the client and workstation markets has narrowed.

In the data center segment, the pace-setters are still AMD and Nvidia. AMD's server penetration rose above 24% towards the end of 2024, and Nvidia's AI accelerators dominate training workloads at hyperscale. Intel continues to build Gaudi platform momentum and depends on a broad-based x86 strategy to win inference and custom silicon. Its potential reclamation of technical leadership, based on the upcoming release of Panther Lake on Intel's internal 18A process, will hinge on meeting yield and performance targets.

Domestic production is where Intel shines. Its expanding foundry presence in the US and Europe, partially funded by the $7.86 billion CHIPS Act subsidy, makes the company a trusted partner amid geopolitical tension. Execution there, however, is key. Foundry clients won't hesitate to switch if Intel gets quality wrong, misses deadlines, or prices inaccurately. Its target of breaking even in foundry operations by 2027 speaks to just how long the journey might take.

Intel's Q4 results appear strong on the page. Revenue was $14.3 billion, up by $500 million versus guidance. Non-GAAP EPS was $0.13 and gross margin grew sequentially to 42.1%. Free cash flow for the year was positive at $8.3 billion, erratic versus history. GAAP net losses remained a whopping $18.8 billion in 2024, driven by write-offs, restructuring charges, and worse-than-expected results in core operations.

Profitability figures are grim. Operating margin is -8.9% and net margin -35.3%. Return on equity (ROE) stands at -17.8% and return on invested capital (ROIC) remains negative. These figures reflect not only near-term pressures but also the longer-term challenge of scaling monetization. DCAI continues to dilute overall profitability, and foundry operations, strategically essential as they are, remain unprofitable today.

Cost discipline remains the focus. Intel has begun simplifying operations, and OpEx declined 9% year-over-year in Q4. Capex remains elevated due to fab builds and EUV tooling, yet "Smart Capital" partnerships and government support offset the cost. Intel will begin deleveraging in 2025, which would improve investor confidence if accompanied by consistent margin growth.

Source: Intel's Q4 Earnings Presentation

Intel's valuation suggests the market is in wait-and-see mode. The stock trades on a forward P/E of 49.1x, high by historical norms but artificially so because earnings are depressed. More insightful are Intel's price-to-sales (1.92x) and price-to-book (1.06x) ratios, which are at or very near all-time lows. Its EV-to-EBITDA stands at 114x, reflecting both the low earnings base and the patience of investors. GuruFocus assigns Intel a GF Value of $26.54 and rates it "fairly valued" with a GF Value Rank of 9/10. The valuation line implies modest upside, possibly 10-15%, if Intel executes on the road map. That said, the price-to-cash-flow (12.6x) and price-to-tangible-book (1.48x) ratios indicate the market's skepticism about Intel's ability to convert revenue into sustainable cash flow.

Intel's valuation continues to depend on execution and a margin turnaround, given negative ROIC and little operating leverage. DCF models that assume modest 8-10% revenue CAGR, gross margins returning to 50%, and positive free cash flow by 2026 suggest intrinsic value in the $30-35 per-share range. Any slip in 18A, foundry customer wins, or AI PC adoption would quickly narrow that range.

Intel finds itself in the middle of a multi-year journey that balances cost discipline, geopolitical opportunity, and innovation.
Profitability today is unimpressive and the competition fierce, but investments in next-gen silicon, domestic foundry capacity, and AI hold out long-term potential for patient investors. While the stock is fairly priced today, potential upside hinges on delivering, not merely promising, a return to operational greatness. This article first appeared on GuruFocus.
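
To make the valuation arithmetic cited above more concrete, here is a rough, illustrative discounted-cash-flow sketch. Every input (starting free cash flow, growth rate, discount rate, terminal growth, share count) is an assumption chosen for illustration rather than a figure from GuruFocus or Intel; the point is only to show how assumptions of that general kind map to a per-share value near the $30-35 range the article mentions.

```python
# Rough DCF illustration of how assumptions like those cited above (8-10% growth,
# recovering margins, positive free cash flow) translate into a per-share value.
# All inputs below are assumptions for illustration, not published figures.

def dcf_per_share(fcf_start, growth, terminal_growth, discount, years, shares):
    """Discount a growing free-cash-flow stream plus a terminal value, per share."""
    value = 0.0
    fcf = fcf_start
    for year in range(1, years + 1):
        fcf *= 1 + growth                      # grow free cash flow each year
        value += fcf / (1 + discount) ** year  # discount it back to today
    # Gordon-growth terminal value on the final year's cash flow
    terminal = fcf * (1 + terminal_growth) / (discount - terminal_growth)
    value += terminal / (1 + discount) ** years
    return value / shares

# Assumed inputs: $8B starting annual FCF, 9% FCF growth for five years,
# 2.5% terminal growth, 10% discount rate, ~4.3 billion shares outstanding.
print(round(dcf_per_share(8e9, 0.09, 0.025, 0.10, 5, 4.3e9), 2))  # roughly $33 per share
```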
