Latest news with #MLPerf


Channel Post MEA
3 days ago
- Business
- Channel Post MEA
Western Digital Validates Real-World AI Storage Performance with MLPerf Storage V2 Results
Western Digital has announced its MLPerf Storage V2 submission results, validating the real-world capabilities of its OpenFlex Data24 4000 Series NVMe-oF Storage Platform. The results affirm the ability of the OpenFlex Data24 EBOF (Ethernet bunch of flash) to meet the rigorous demands of modern AI workloads, delivering high performance, efficiency and scalability in a cost-effective solution for modern AI infrastructure.

Real-World Testing for AI at Scale
Western Digital's OpenFlex Data24 NVMe-oF Storage Platform extends the high performance of NVMe flash over Ethernet fabric to enable low-latency shared storage for scalable, disaggregated AI infrastructure. Designed to simplify deployment, reduce cost and grow with GPU demand, the OpenFlex Data24 allows storage and compute to scale independently for greater flexibility. To reflect realistic and demanding deployment scenarios where storage systems must keep pace with accelerated GPU infrastructure, Western Digital collaborated with PEAK:AIO, a high-performance software-defined storage (SDS) provider able to ingest, stage and serve large volumes of data at high speed. The validation submission used KIOXIA CM7-V Series NVMe SSDs, selected for their strong performance characteristics in demanding AI workloads. Deployed in the OpenFlex Data24 enclosure, they enable sustained, high-performance disaggregated data delivery to many GPU client nodes.

MLPerf Storage V2 Benchmark Results
MLPerf is widely regarded as the industry's gold standard for AI benchmarking. Western Digital's MLPerf Storage V2 results showcase how this architecture delivers performance at scale with a focus on efficiency and practical deployment economics, both with and without an SDS layer. MLPerf Storage uses GPU client nodes – systems that simulate the behavior of an AI server accessing storage during training or inferencing, generating the I/O load patterns typical of real-world GPU workloads – to evaluate how well a storage platform supports distributed AI environments across multiple concurrent GPU clients. The AI training tests used from the MLPerf Storage suite measure how effectively the system serves AI workloads that stress different aspects of storage I/O, including throughput and concurrency, across various deep learning models. Two key workload benchmarks were used:

3D U-Net Workload
3D U-Net is a deep learning model used in medical imaging and volumetric segmentation. It places a much heavier load on storage systems due to its large, 3D input datasets and intensive data-streaming read patterns. As such, it is a more stringent benchmark for demonstrating sustained high-bandwidth, low-latency performance across multi-node AI workflows. In this model:
- Western Digital's OpenFlex Data24 achieved sustained read throughput of 106.5 GB/s (99.2 GiB/s), saturating 36 simulated H100 GPUs across three physical client nodes and demonstrating the EBOF's ability to handle bandwidth-intensive, high-parallelism training tasks with ease.
- With the PEAK:AIO AI Data Server, the OpenFlex Data24 delivered 64.9 GB/s (59.6 GiB/s), saturating 22 simulated H100 GPUs from a single head server and single client node.
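The reported figures can be sanity-checked with a little unit arithmetic: GB/s and GiB/s differ only in base (10^9 versus 2^30 bytes), and dividing aggregate throughput by the number of simulated accelerators gives the per-GPU bandwidth the enclosure sustained. A quick sketch in Python – the per-GPU figure is our own derivation for illustration, not a number from the submission:

```python
GIB = 2 ** 30  # bytes per gibibyte

def gb_to_gib(gb_per_s: float) -> float:
    """Convert decimal GB/s (10**9 bytes) to binary GiB/s (2**30 bytes)."""
    return gb_per_s * 1e9 / GIB

throughput_gb_s = 106.5   # headline 3D U-Net result
simulated_gpus = 36       # simulated H100 clients across three nodes

print(f"{gb_to_gib(throughput_gb_s):.1f} GiB/s")               # -> 99.2 GiB/s, as reported
print(f"{throughput_gb_s / simulated_gpus:.2f} GB/s per GPU")  # -> 2.96 GB/s per simulated H100
```

Roughly 3 GB/s of sustained reads per simulated H100 is consistent with the article's framing of 3D U-Net as a bandwidth-intensive, high-parallelism workload.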
ResNet50 Workload
ResNet-50 is a widely used convolutional neural network designed for image classification. It serves as a benchmark for training throughput, representing a balanced mix of compute and data movement. With both random and sequential I/O patterns and medium-sized image reads, it is useful for evaluating how well a system handles high-frequency access to smaller files and rapid iteration cycles. In this model:
- Western Digital's OpenFlex Data24 delivered optimal performance across 186 simulated H100 GPUs and three client nodes, with an outstanding GPU-to-drive ratio that reflects the platform's efficient use of physical media.
- With the PEAK:AIO AI Data Server, the OpenFlex Data24 saturated 52 simulated H100 GPUs from a single head server and single client node.

'These results validate Western Digital's disaggregated architecture as a powerful enabler and cornerstone of next-generation AI infrastructure, maximizing GPU utilization while minimizing footprint, complexity and overall total cost of ownership,' said Kurt Chan, vice president and general manager, Western Digital Platforms Business. 'The OpenFlex Data24 4000 Series NVMe-oF Storage Platform delivers near-saturation performance across demanding AI benchmarks, both standalone and with a single PEAK:AIO AI Data Server appliance, translating to faster time-to-results and reduced infrastructure sprawl.'

'These MLPerf results spotlight the breakthrough efficiency achieved by combining PEAK:AIO's software-defined AI Data Server with the scalability of Western Digital's OpenFlex Data24 and the performance density of KIOXIA's CM7-V Series SSDs,' said Roger Cummings, President and CEO at PEAK:AIO. 'Together, we're delivering high-performance AI infrastructure that's faster to deploy, more efficient to operate, and easier to scale. It's a compelling proof point that high performance no longer requires high complexity.'

Whether organizations are just beginning their AI journey or scaling to hundreds of GPUs, Western Digital's OpenFlex Data24, with industry-leading connectivity via Western Digital RapidFlex network adapters, enables up to 12 hosts to be attached without a switch. The data storage platform offers simplified, predictable, high-performance AI-infrastructure growth without the upfront costs or power demands of some other solutions, making it ideal for organizations to scale AI workloads with confidence.
Yahoo
5 days ago
- Yahoo
UPDATE – New MLPerf Storage v2.0 Benchmark Results Demonstrate the Critical Role of Storage Performance in AI Training Systems
New checkpoint benchmarks provide 'must-have' information for optimizing AI training

SAN FRANCISCO, Aug. 04, 2025 (GLOBE NEWSWIRE) -- Today, MLCommons® announced results for its industry-standard MLPerf® Storage v2.0 benchmark suite, which is designed to measure the performance of storage systems for machine learning (ML) workloads in an architecture-neutral, representative, and reproducible manner. This round of the benchmark saw dramatically increased participation, more geographic representation from submitting organizations, and greater diversity of the systems submitted for testing. The benchmark results show that storage system performance continues to improve rapidly, with tested systems serving roughly twice the number of accelerators as in the v1.0 benchmark round. Additionally, the v2.0 benchmark adds new tests that replicate real-world checkpointing for AI training systems. The benchmark results provide essential information for stakeholders who need to configure the frequency of checkpoints to optimize for high performance – particularly at scale.

Version 2.0 adds checkpointing tasks, delivers essential insights
As AI training systems have continued to scale up to billions and even trillions of parameters, and the largest clusters of processors have reached one hundred thousand accelerators or more, system failures have become a prominent technical challenge. Because data centers tend to run accelerators at near-maximum utilization for their entire lifecycle, both the accelerators themselves and the supporting hardware (power supplies, memory, cooling systems, etc.) are heavily burdened, shortening their expected lifetime. This is a chronic issue, especially in large clusters: if the mean time to failure for an accelerator is 50,000 hours, then a 100,000-accelerator cluster running for extended periods at full utilization will likely experience a failure every half-hour. A cluster with one million accelerators would expect to see a failure every three minutes. Worse, because AI training usually involves massively parallel computation in which all the accelerators move in lockstep through the same iteration of training, the failure of one processor can grind an entire cluster to a halt.

It is now broadly accepted that saving checkpoints of intermediate training results at regular intervals is essential to keep AI training systems running at high performance. The AI training community has developed mathematical models that optimize cluster performance and utilization by trading off the overhead of regular checkpoints against the expected frequency and cost of failure recovery (rolling back the computation, restoring the most recent checkpoint, restarting the training from that point, and redoing the lost work). Those models, however, require accurate data on the scale and performance of the storage systems used to implement the checkpointing system. The MLPerf Storage v2.0 checkpoint benchmark tests provide precisely that data, and the results from this round suggest that stakeholders procuring AI training systems need to carefully consider the performance of the storage systems they buy, to ensure that they can store and retrieve a cluster's checkpoints without slowing the system down to an unacceptable level. For a deeper understanding of the issues around storage systems and checkpointing, as well as the design of the checkpointing benchmarks, we encourage you to read this post from Wes Vaske, a member of the MLPerf Storage working group.
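The failure arithmetic above, and the checkpoint-interval trade-off these models capture, is easy to reproduce. The sketch below uses the article's 50,000-hour MTTF and then applies the Young/Daly approximation – a standard result from the HPC checkpointing literature, not something this release specifies – with a hypothetical 5-minute checkpoint write cost:

```python
import math

MTTF_HOURS = 50_000  # per-accelerator mean time to failure (from the article)

def cluster_failure_interval_min(n_accelerators: int) -> float:
    """Expected time between failures somewhere in the cluster, assuming
    independent failures: cluster MTBF = device MTTF / device count."""
    return MTTF_HOURS / n_accelerators * 60

print(cluster_failure_interval_min(100_000))    # 30.0 -> a failure every half-hour
print(cluster_failure_interval_min(1_000_000))  # 3.0  -> a failure every three minutes

def young_daly_interval_min(checkpoint_cost_min: float, n_accelerators: int) -> float:
    """Young/Daly approximation for the optimal checkpoint interval:
    T_opt = sqrt(2 * C * MTBF), where C is the time to write one checkpoint."""
    return math.sqrt(2 * checkpoint_cost_min * cluster_failure_interval_min(n_accelerators))

# Hypothetical 5-minute checkpoint write on a 100,000-accelerator cluster:
print(round(young_daly_interval_min(5.0, 100_000), 1))  # ~17.3 minutes between checkpoints
```

The takeaway matches the article's argument: the longer the storage system takes to absorb a checkpoint (larger C), the longer the optimal interval and the more work is lost per failure, so checkpoint write performance directly shapes cluster utilization.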
'At the scale of computation being implemented for training large AI models, regular component failures are simply a fact of life,' said Curtis Anderson, MLPerf Storage working group co-chair. 'Checkpointing is now a standard practice in these systems to mitigate failures, and we are proud to be providing critical benchmark data on storage systems to allow stakeholders to optimize their training performance. This initial round of checkpoint benchmark results shows us that current storage systems offer a wide range of performance specifications, and not all systems are well-matched to every checkpointing scenario. It also highlights the critical role of software frameworks such as PyTorch and TensorFlow in coordinating training, checkpointing, and failure recovery, as well as some opportunities for enhancing those frameworks to further improve overall system performance.'

Workload benchmarks show rapid innovation in support of larger-scale training systems
Continuing from the v1.0 benchmark suite, the v2.0 suite measures storage performance in a diverse set of ML training scenarios. It emulates the storage demands of several scenarios and system configurations covering a range of accelerators, models, and workloads. By simulating the accelerators' 'think time', the benchmark can generate accurate storage patterns without the need to run the actual training, making it more accessible to all. The benchmark focuses the test on a given storage system's ability to keep pace, as it requires the simulated accelerators to maintain a required level of utilization. The v2.0 results show that submitted storage systems have substantially increased the number of accelerators they can simultaneously support – roughly twice as many as the systems in the v1.0 round.

'Everything is scaling up: models, parameters, training datasets, clusters, and accelerators. It's no surprise to see that storage system providers are innovating to support ever larger scale systems,' said Oana Balmau, MLPerf Storage working group co-chair.

The v2.0 submissions also included a much more diverse set of technical approaches to delivering high-performance storage for AI training, including:
- 6 local storage solutions
- 2 solutions using in-storage accelerators
- 13 software-defined solutions
- 12 block systems
- 16 on-prem shared storage solutions
- 2 object stores

'Necessity continues to be the mother of invention: faced with the need to deliver storage solutions that are both high-performance and at unprecedented scale, the technical community has stepped up once again and is innovating at a furious pace,' said Balmau.

MLPerf Storage v2.0: skyrocketing participation and diversity of submitters
The MLPerf Storage benchmark was created through a collaborative engineering process by 35 leading storage solution providers and academic research groups across three years. The open-source and peer-reviewed benchmark suite offers a level playing field for competition that drives innovation, performance, and energy efficiency for the entire industry. It also provides critical technical information for customers who are procuring and tuning AI training systems. The v2.0 benchmark results, from a broad set of technology providers, reflect the industry's recognition of the importance of high-performance storage solutions.
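To make the 'think time' mechanism concrete, here is a minimal, hypothetical sketch of the emulation loop: each simulated accelerator reads a batch from the storage under test, then sleeps for the time a real accelerator would spend computing on that batch. The actual suite (built on the DLIO benchmark tool) is more sophisticated – it overlaps I/O with compute and enforces a utilization floor – so treat this as the core idea only; all names below are ours:

```python
import time
from pathlib import Path

def read_sample(path: Path) -> bytes:
    # Placeholder: in the real benchmark this read targets the storage under test.
    return path.read_bytes()

def emulate_training_client(sample_paths: list[Path], batch_size: int,
                            think_time_s: float) -> float:
    """Replay one simulated accelerator's I/O pattern: read a batch from
    storage, then sleep for the 'think time' a real accelerator would spend
    computing on it. Returns utilization = compute / (compute + I/O stall)."""
    io_stall, steps = 0.0, 0
    for i in range(0, len(sample_paths), batch_size):
        t0 = time.perf_counter()
        for p in sample_paths[i:i + batch_size]:
            read_sample(p)
        io_stall += time.perf_counter() - t0
        time.sleep(think_time_s)  # stand-in for the accelerator's compute phase
        steps += 1
    compute = steps * think_time_s
    return compute / (compute + io_stall)
```

A submission passes only if the simulated accelerators sustain the required utilization level, i.e., the storage system keeps the emulated GPUs fed.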
MLPerf Storage v2.0 includes >200 performance results from 26 submitting organizations: Alluxio, Argonne National Lab, DDN, ExponTech, FarmGPU, H3C, Hammerspace, HPE, JNIST/Huawei, Juicedata, Kingston, KIOXIA, Lightbits Labs, MangoBoost, Micron, Nutanix, Oracle, Quanta Computer, Samsung, Sandisk, Simplyblock, TTA, UBIX, IBM, WDC, and YanRong. The submitters represent seven different countries, demonstrating the value of the MLPerf Storage benchmark to the global community of stakeholders.

'The MLPerf Storage benchmark has set new records for an MLPerf benchmark, both for the number of organizations participating and the total number of submissions,' said David Kanter, Head of MLPerf at MLCommons. 'The AI community clearly sees the importance of our work in publishing accurate, reliable, unbiased performance data on storage systems, and it has stepped up globally to be a part of it. I would especially like to welcome first-time submitters Alluxio, ExponTech, FarmGPU, H3C, Kingston, KIOXIA, Oracle, Quanta Cloud Technology, Samsung, Sandisk, TTA, UBIX, IBM, and WDC.'

'This level of participation is a game-changer for benchmarking: it enables us to openly publish more accurate and more representative data on real-world systems,' Kanter continued. 'That, in turn, gives the stakeholders on the front lines the information and tools they need to succeed at their jobs. The checkpoint benchmark results are an excellent case in point: now that we can measure checkpoint performance, we can think about optimizing it.'

We invite stakeholders to join the MLPerf Storage working group and help us continue to evolve the benchmark suite.

View the Results
To view the results for MLPerf Storage v2.0, please visit the Storage benchmark results.

About MLCommons
MLCommons is the world's leader in AI benchmarking. An open engineering consortium supported by over 125 members and affiliates, MLCommons has a proven record of bringing together academia, industry, and civil society to measure and improve AI. The foundation for MLCommons began with the MLPerf benchmarks in 2018, which rapidly scaled as a set of industry metrics to measure machine learning performance and promote transparency of machine learning techniques. Since then, MLCommons has continued using collective engineering to build the benchmarks and metrics required for better AI – ultimately helping to evaluate and improve AI technologies' accuracy, safety, speed, and efficiency. For additional information on MLCommons and details on becoming a member, please visit or email participation@
Yahoo
21-07-2025
- Business
- Yahoo
Intel Gears Up to Report Q2 Earnings: Should You Buy the Stock?
Intel Corporation INTC is scheduled to report its second-quarter 2025 earnings on July 24, after the closing bell. The Zacks Consensus Estimate for sales and earnings is pegged at $11.87 billion and a penny per share, respectively. Over the past 60 days, earnings estimates for INTC have declined 3.45% to 28 cents for 2025 and 4% to 72 cents for 2026.

Earnings Surprise History
The leading semiconductor manufacturer delivered a four-quarter earnings surprise of negative 76.25%, on average, beating estimates only once. In the last reported quarter, the company's earnings surprise was 1,200.00%.

Earnings Whispers
Our proven model does not predict a likely earnings beat for Intel for the second quarter. The combination of a positive Earnings ESP and a Zacks Rank #1 (Strong Buy), 2 (Buy) or 3 (Hold) increases the chances of an earnings beat, but that is not the case here. You can uncover the best stocks to buy or sell before they're reported with our Earnings ESP Filter. INTC currently has an Earnings ESP of -350.00% with a Zacks Rank #2. You can see the complete list of today's Zacks #1 Rank stocks here.

Factors Shaping the Upcoming Results
During the quarter, Intel achieved industry-first full NPU (Neural Processing Unit) compliance in the MLPerf Client v0.6 benchmark. MLPerf is an industry-standard benchmarking suite created to assess AI system performance. The newly released MLPerf Client v0.6 benchmark is a subset of MLPerf designed to evaluate client devices such as laptops and PCs, with a strong emphasis on large language model acceleration and NPU performance. In the process, Intel Core Ultra Series 2 processors showcased the fastest NPU response time, with impressive token latency and the highest NPU throughput. This is likely to boost prospects in the emerging market of AI PCs.

In the quarter under review, Intel expanded its collaboration with original equipment manufacturers like HP and Lenovo to develop next-generation AI PCs. HP's recent lineup of cutting-edge AI PCs, including EliteBook X, EliteBook Ultra and EliteBook 8, is powered by Intel Core Ultra series processors. Lenovo also deployed the Intel Core Ultra processor to power its leading-edge AI PC, the ThinkBook Plus Gen 6 Rollable. The company is also witnessing healthy demand for its Xeon 6 processor in high-performance computing ('HPC') and AI-driven workloads. The processor offers significantly faster memory performance in high-capacity configurations compared with its competitor, Advanced Micro Devices' AMD EPYC processor. Intel also expanded its Arc GPU lineup to deliver an impressive AI experience for PC, workstation and edge use cases. These factors are expected to have a favorable impact on second-quarter earnings. However, Intel's growing ambition in the GPU space is threatened by NVIDIA Corporation NVDA's strong presence in the GPU domain across AI, cloud and data center applications.

In the to-be-reported quarter, Intel announced that it has inked a definitive agreement with Silver Lake to sell 51% of the Altera business. The deal will significantly bolster Intel's liquidity position and drive investment in growth initiatives.

INTC's Price Performance
Over the past year, Intel has lost 30.8% against the industry's growth of 33.6%. The company has also underperformed peers NVIDIA and AMD: NVIDIA has surged 39.6%, while AMD has increased 0.7% during this period.
Key Valuation Metric for Intel
From a valuation standpoint, Intel appears relatively cheaper than the industry and below its mean. Going by the price/sales ratio, the company's shares currently trade at 1.94 times forward sales, well below the industry's 15.78.

Investment Consideration for INTC
Intel has been undertaking several strategic decisions to gain a firmer footing in the expansive AI sector, spanning cloud and enterprise servers to networks, volume clients and ubiquitous edge environments, in tune with evolving market dynamics. AI has moved from a niche capability to a critical must-have component for businesses. Organizations across sectors are rapidly moving to integrate AI to boost productivity and streamline workflows across operations. Intel is well-positioned to gain from this trend. With AI at its core, Intel is swiftly expanding its portfolio offerings to support a wide range of use cases such as IT applications, gaming, content development and more. Intel's collaboration with major manufacturers such as HP, Dell, Lenovo and Microsoft augurs well for long-term growth.

The company has been undertaking several strategic restructuring initiatives to trim operating costs and boost liquidity. It is winding down the automotive business to focus more on its core operations. This is expected to free up significant resources for R&D funding in the core PC and data center segments, which bodes well for long-term growth.

End Note
Growing prowess in the AI PC market and strategic partnerships with major manufacturers are expected to be major growth drivers in the upcoming quarters. The company has made significant strides in its cost-cutting plan to rebuild a sustainable growth engine. Strategic divestiture to focus on primary growth engines and improve liquidity is a positive, and a strong focus on innovation and portfolio strength is a tailwind. With a Zacks Rank #1, Intel appears primed for further stock price appreciation, and its cheaper valuation could offer a favorable entry point for investors ahead of the second-quarter earnings.

Want the latest recommendations from Zacks Investment Research? Today, you can download 7 Best Stocks for the Next 30 Days. Click to get this free report Intel Corporation (INTC) : Free Stock Analysis Report Advanced Micro Devices, Inc. (AMD) : Free Stock Analysis Report NVIDIA Corporation (NVDA) : Free Stock Analysis Report This article originally published on Zacks Investment Research
Yahoo
25-06-2025
- Business
- Yahoo
CoreWeave vs. Nvidia: Which AI Stock Is the Better Investment?
Backed by Nvidia NVDA, CoreWeave CRWV stock has skyrocketed more than +300% since its IPO in late March, as investor confidence in the AI cloud infrastructure company has soared. To that point, CoreWeave stock is trading above $170 a share, an asking price that tops Nvidia shares at around $146. This certainly raises the question of whether the hype for CoreWeave stock is overdone, or whether the company is potentially a better AI investment than chip giant Nvidia.

Reshaping the AI infrastructure landscape, CoreWeave has become Nvidia's top GPU cloud partner, ahead of traditional hyperscalers like Amazon AMZN, Microsoft MSFT, and Alphabet GOOGL. With expertise in cutting-edge cloud services optimized for AI workloads, CoreWeave gained early access to Nvidia's high-performance GPUs, including the much-coveted Blackwell chips. Furthermore, CoreWeave has used Nvidia's much sought-after AI chips to build massive AI clusters that broke MLPerf Training records; MLPerf is a widely recognized benchmarking suite designed to measure the performance of machine learning hardware, software, and services. It's noteworthy that MLPerf Inference evaluates how quickly and efficiently systems can make predictions using trained models in real-world scenarios like object detection, medical imaging, and generative AI usage.

Thanks to its successful partnership with Nvidia, CoreWeave has attracted major clients including OpenAI, Meta Platforms META, and Microsoft. Notably, Microsoft accounted for 62% of CoreWeave's revenue in 2024. As CoreWeave's major GPU supplier and an early investor, Nvidia has earned significant revenue from the partnership, along with the appreciation of its equity stake of over 24 million CRWV shares.

Pinpointing the market's high sentiment for CoreWeave, and alluding to lucrative earnings potential, is the company's rapid top-line expansion. CoreWeave's total sales are expected to skyrocket 164% this year to $5.02 billion from $1.9 billion in 2024. Zacks' projections call for CoreWeave's sales to soar another 127% next year to $11.41 billion. Attributed to the AI boom, this type of growth has prompted investors to pull CoreWeave into Nvidia's stratosphere: Nvidia's top line has expanded over 680% in the last five years, from sales of $16.67 billion in its fiscal 2021 to $130.5 billion last year. Nvidia's sales are currently projected to increase 51% in its current fiscal year 2026 and to leap another 25% in FY27 to $247.24 billion.

Although CoreWeave, founded in 2017, is not yet expected to be profitable, it's still imperative to pay attention to the trend of earnings per share (EPS) estimate revisions. Unfortunately, EPS revisions for fiscal 2025 are noticeably down over the last 60 days, from estimates that called for an adjusted loss of -$0.37 a share to -$1.30. More concerning, CoreWeave's FY26 EPS estimates have dipped to -$0.17 a share from projections that called for the company to break even two months ago.

As for Nvidia, EPS estimates for its FY26 and FY27 are nicely up over the last 30 days, rising 1% and 3%, respectively. Known for efficient operational performance, Nvidia's annual earnings are now slated to spike 42% in its FY26 and are projected to climb another 32% in FY27 to $5.60 per share.
CoreWeave and Nvidia have built a powerhouse AI partnership that should benefit and complement each other for the foreseeable future. That said, the rally in CoreWeave stock does look overdone considering the decline in EPS revisions, making it an ideal time to take profits in CRWV. For now, Nvidia appears to be the better AI investment, the obvious reasons being its track record of efficiency and productivity. Plus, the trend of rising EPS estimates for NVDA, while not overwhelming, does suggest that investors could still be rewarded for holding the chip giant's stock, although there may be better buying opportunities ahead.

Want the latest recommendations from Zacks Investment Research? Today, you can download 7 Best Stocks for the Next 30 Days. Click to get this free report NVIDIA Corporation (NVDA) : Free Stock Analysis Report CoreWeave Inc. (CRWV) : Free Stock Analysis Report Amazon.com, Inc. (AMZN) : Free Stock Analysis Report Microsoft Corporation (MSFT) : Free Stock Analysis Report Alphabet Inc. (GOOGL) : Free Stock Analysis Report Meta Platforms, Inc. (META) : Free Stock Analysis Report This article originally published on Zacks Investment Research
Yahoo
21-06-2025
- Business
- Yahoo
CoreWeave Just Revealed the Largest-Ever Nvidia Blackwell GPU Cluster. Should You Buy CRWV Stock?
CoreWeave (CRWV) has steadily built a name for delivering a purpose-built cloud platform tailored to handle the heavy lifting of large-scale AI workloads with unwavering performance and reliability. As evidence of this, in early June CoreWeave announced record-shattering MLPerf Training v5.0 results using Nvidia's (NVDA) powerful GB200 Grace Blackwell chips.

A total of 2,496 Blackwell GPUs ran on CoreWeave's AI-optimized cloud, forming the largest-ever GB200 NVL72 cluster benchmarked under MLPerf. That figure stood 34 times larger than the only other cloud provider's submission, sending a strong message to the market about CoreWeave's scalability and dominance. In a space where performance speaks, these MLPerf results reinforce CoreWeave's standing as a serious force behind the infrastructure powering today's most demanding AI breakthroughs.

Nestled in Livingston, New Jersey, CoreWeave is transforming the cloud computing world, with a market cap now standing at $81.6 billion. From GPU and CPU compute to robust storage, high-speed networking, managed services, and servers, the company covers the entire spectrum of modern cloud needs. After making its public debut in March, CoreWeave has turned heads on Wall Street: its shares have skyrocketed nearly 112% in just one month, with a 13.6% leap in the past five days alone. This kind of performance signals conviction, both from the company and the market. At present, CRWV trades at 29.2 times sales, a figure that sits well above the broader industry average. While that premium might raise eyebrows, it also speaks volumes about investor belief in the firm's role as a frontrunner in AI infrastructure.

On May 14, CoreWeave reported its Q1 2025 earnings. Revenue surged 420.3% year over year to $981.6 million, outpacing Wall Street's estimate of $852.3 million. Adjusted operating income climbed 549.6% to $162.6 million, while adjusted EBITDA jumped 479.8% from the prior-year quarter to $606.1 million. But the path to scale came with setbacks: adjusted net loss rose 534.8% to $149.6 million, and net loss per share widened 140.3% to $1.49, far above the $0.16 forecast by analysts. Still, the balance sheet showed strength, with total current assets increasing to $3.1 billion by quarter-end, up from $1.9 billion on Dec. 31, 2024.

Also, in a strong show of support, Nvidia has raised its post-IPO stake in CoreWeave to 7%. This vote of confidence from the chip giant has given investors reason to stay bullish, signaling belief in CoreWeave's role in the rapidly expanding AI infrastructure landscape. Management has also painted a bold picture for the road ahead, guiding Q2 revenue between $1.06 billion and $1.1 billion, with adjusted operating income expected to range from $140 million to $170 million. CapEx for Q2 is projected between $3 billion and $3.5 billion, reflecting a strategy to accelerate platform investments and meet growing demand.
Furthermore, CoreWeave expects full-year 2025 revenue between $4.9 billion and $5.1 billion, adjusted operating income of $800 million to $830 million, and CapEx ranging from $20 billion to $23 billion. The outlook accounts for the March contract with OpenAI, the $4 billion expansion with a large AI enterprise, and the added impact of Weights & Biases. Analysts expect the Q2 2025 loss per share to widen 100.1% year over year to $0.49. For the full fiscal year, the loss per share is forecast to widen again by 100.2% to $2.14. However, 2026 could mark a turning point, with the loss per share expected to narrow 73.8% to $0.56. While the company remains in the red for now, its aggressive positioning and deep entrenchment in the AI ecosystem suggest that CoreWeave could emerge as one of the biggest beneficiaries of the next wave of technological transformation.

CRWV continues to hold its ground in the market with notable conviction, earning a 'Moderate Buy' consensus. Among the 19 analysts tracking the stock, five are all in with a 'Strong Buy,' one sides with a 'Moderate Buy,' while 12 are treading carefully with a 'Hold.' Only one voices a 'Strong Sell,' underscoring a cautious yet constructive stance. What is particularly striking is CRWV's current trading price, which sits just 9% below its Street-high target of $185.

On the date of publication, Aanchal Sugandh did not have (either directly or indirectly) positions in any of the securities mentioned in this article. All information and data in this article is solely for informational purposes. This article was originally published on