Baidu Q1 Earnings Preview: What to Expect From Upcoming Report
May 20 - The Chinese tech giant Baidu (NASDAQ:BIDU) is set to unveil its first-quarter 2025 results on Wednesday before the markets open.
Analysts expect earnings of $1.99 per share, down about 31% year-on-year, and revenue to slip roughly 2% to $4.29 billion. Year-to-date, BIDU stock has climbed about 8%.
Macquarie's Ellie Jiang trimmed her price target to $83 from $85 and kept a Hold rating, saying AI monetization may take time amid stiff competition and a soft macro backdrop. She forecasts AI Cloud revenue will grow around 25% to RMB 5.9 billion, partially offsetting ad-business weakness.
Citi's Alicia Yap reiterated a Buy rating with a $139 target, highlighting Baidu's recent AI product launches, Ernie 4.5 Turbo and Ernie X1 Turbo, and their cost-efficiency gains.
Options traders are bracing for a roughly 6% share move in either direction following the report, based on at-the-money straddle prices.
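For readers unfamiliar with how that figure is derived, the implied move is simply the price of an at-the-money straddle (call plus put) divided by the share price. A minimal sketch, where the option premiums are hypothetical illustrations and only the ~$89.25 share price comes from the article:

```python
def implied_move(call_premium: float, put_premium: float, spot: float) -> float:
    """Expected +/- share move implied by an at-the-money straddle."""
    return (call_premium + put_premium) / spot

spot = 89.25            # BIDU share price cited below
call, put = 2.70, 2.65  # hypothetical ATM option premiums, not quoted prices
print(f"Implied move: +/-{implied_move(call, put, spot):.1%}")  # about 6%
```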
Based on the one-year price targets offered by 31 analysts, the average target price for Baidu Inc is $113.43, with a high estimate of $207.96 and a low estimate of $75.98. The average target implies an upside of +27.10% from the current price of $89.25.
Based on GuruFocus estimates, the estimated GF Value for Baidu Inc in one year is $119.92, suggesting an upside of +34.36% from the current price of $89.25. For deeper insights, visit the Baidu Forecast page.
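The upside percentages follow directly from the quoted targets and the current price; a quick check (small differences versus the article's figures are rounding in the published targets):

```python
def upside(target: float, current: float) -> float:
    """Fractional upside implied by a price target versus the current price."""
    return target / current - 1.0

current = 89.25
print(f"Average analyst target: {upside(113.43, current):+.2%}")  # close to +27.10%
print(f"One-year GF Value:      {upside(119.92, current):+.2%}")  # close to +34.36%
```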
This article first appeared on GuruFocus.

Related Articles


Business Upturn
17 minutes ago
Backblaze Investor News: If You Have Suffered Losses in Backblaze, Inc. (NASDAQ: BLZE), You Are Encouraged to Contact The Rosen Law Firm About Your Rights
NEW YORK, June 15, 2025 (GLOBE NEWSWIRE) — WHY: Rosen Law Firm, a global investor rights law firm, announces an investigation of potential securities claims on behalf of shareholders of Backblaze, Inc. (NASDAQ: BLZE) resulting from allegations that Backblaze may have issued materially misleading business information to the investing public.

SO WHAT: If you purchased Backblaze securities, you may be entitled to compensation without payment of any out-of-pocket fees or costs through a contingency fee arrangement. The Rosen Law Firm is preparing a class action seeking recovery of investor losses.

WHAT TO DO NEXT: To join the prospective class action, go to or call Phillip Kim, Esq. toll-free at 866-767-3653 or email [email protected] for information on the class action.

WHAT IS THIS ABOUT: On April 24, 2025, during market hours, an article was published entitled 'Backblaze stock plunges amid Morpheus Research report.' The article stated that Backblaze 'saw its shares plummet' as a result of a 'scathing short report from Morpheus Research. The report detailed a series of alleged financial missteps and questionable practices since the company's initial public offering (IPO) in November 2021.' The article further noted that Morpheus's report 'highlights questionable accounting practices, including financial manipulations and inflated forecasts to pass audit thresholds.' On this news, Backblaze stock fell 2.1% on April 24, 2025.

WHY ROSEN LAW: We encourage investors to select qualified counsel with a track record of success in leadership roles. Often, firms issuing notices do not have comparable experience, resources, or any meaningful peer recognition. Many of these firms do not actually litigate securities class actions. Be wise in selecting counsel. The Rosen Law Firm represents investors throughout the globe, concentrating its practice in securities class actions and shareholder derivative litigation.
Rosen Law Firm achieved the largest-ever securities class action settlement against a Chinese company at the time. Rosen Law Firm was ranked No. 1 by ISS Securities Class Action Services for number of securities class action settlements in 2017. The firm has been ranked in the top 4 each year since 2013 and has recovered hundreds of millions of dollars for investors. In 2019 alone, the firm secured over $438 million for investors. In 2020, founding partner Laurence Rosen was named by Law360 as a Titan of Plaintiffs' Bar. Many of the firm's attorneys have been recognized by Lawdragon and Super Lawyers. Follow us for updates on LinkedIn, on Twitter, or on Facebook.

Attorney Advertising. Prior results do not guarantee a similar outcome.

Contact Information: Laurence Rosen, Esq. Phillip Kim, Esq. The Rosen Law Firm, P.A. 275 Madison Avenue, 40th Floor New York, NY 10016 Tel: (212) 686-1060 Toll Free: (866) 767-3653 Fax: (212) 202-3827 [email protected]
Yahoo
29 minutes ago
Kioxia preps XL-Flash SSD that's 3x faster than any SSD available — 10 million IOPS drive has peer-to-peer GPU connectivity for AI servers
Kioxia aims to change the storage paradigm with a proposed SSD designed to surpass 10 million input/output operations per second (IOPS) in small-block workloads, the company revealed at its Corporate Strategy Meeting earlier this week. That's three times faster than the peak speeds of many modern SSDs.

One of the performance bottlenecks of modern AI servers is the data transfer between storage and GPUs: data is currently shuttled through the CPU, which significantly increases latencies and extends access times. To reach the performance target, Kioxia is designing a new controller specifically tuned to maximize IOPS (beyond 10 million 512B IOPS) so that GPUs can access data fast enough to keep their cores fully utilized at all times.

The proposed Kioxia 'AI SSD' is set to use the company's single-level cell (SLC) XL-Flash memory, which boasts read latencies in the range of 3 to 5 microseconds, significantly lower than the 40 to 100 microseconds offered by SSDs based on conventional 3D NAND. Additionally, by storing one bit per cell, SLC offers faster access times and greater endurance, attributes that are crucial for demanding AI workloads.

Current high-end datacenter SSDs typically achieve 2 to 3 million IOPS for both 4K and 512-byte random read operations. From a pure bandwidth perspective, 4K blocks make far more sense than 512B blocks. However, large language models (LLMs) and retrieval-augmented generation (RAG) systems typically perform small, random accesses to fetch embeddings, parameters, or knowledge-base entries. In these scenarios, small block sizes such as 512B are more representative of actual application behavior than 4K or larger blocks. It therefore makes more sense to use 512B blocks to meet the latency needs of LLMs and RAG systems, and to rely on multiple drives for bandwidth.
Using smaller blocks could also enable more efficient use of memory semantics for access. Notably, Kioxia has not disclosed which host interface its 'AI SSD' will use, although from a bandwidth perspective it does not appear to require PCIe 6.0.

The 'AI SSD' will also be optimized for peer-to-peer communication between the GPU and SSD, bypassing the CPU for extra performance and lower latency. There is another reason why Kioxia (and Nvidia) plan to use 512B blocks: GPUs typically operate on cache lines of 32, 64, or 128 bytes internally, and their memory subsystems are optimized for burst access to many small, independent memory locations to keep all the stream processors busy. 512-byte reads therefore align better with GPU designs.

Kioxia's 'AI SSD' is designed to support AI training setups where large language models require fast, repeated access to massive datasets. Kioxia also envisions it being deployed in AI inference, particularly in systems that employ retrieval-augmented generation to enhance generative AI outputs with real-time data (i.e., for reasoning). Low-latency, high-bandwidth storage access is crucial in such machines to ensure both low response times and efficient GPU utilization. The Kioxia 'AI SSD' is scheduled for release in the second half of 2026.
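The interface question above can be sanity-checked with simple arithmetic: random-read throughput is just IOPS times block size. Using only figures quoted in the article, a 10M-IOPS drive at 512B blocks needs only a few GB/s of link bandwidth, well within PCIe 5.0 x4; the same IOPS at 4K blocks would not be.

```python
def throughput_gbs(iops: float, block_bytes: int) -> float:
    """Link bandwidth (GB/s) implied by a random-read IOPS figure at a block size."""
    return iops * block_bytes / 1e9

# Kioxia's target: 10 million IOPS at 512-byte blocks
print(throughput_gbs(10e6, 512))   # 5.12 GB/s, comfortably within PCIe 5.0 x4
# The same IOPS at 4K blocks would demand far more bus bandwidth
print(throughput_gbs(10e6, 4096))  # 40.96 GB/s, beyond a ~14.5 GB/s PCIe 5.0 x4 link
```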
Yahoo
29 minutes ago
SMI CEO claims Nvidia wants SSDs with 100 million IOPS — up to 33X performance uplift could eliminate AI GPU bottlenecks
Now that the AI industry has exceptionally high-performance GPUs with high-bandwidth memory (HBM), one of the bottlenecks that AI training and inference systems face is storage performance. To that end, Nvidia is working with partners to build SSDs that can hit random read performance of 100 million input/output operations per second (IOPS) in small-block workloads, according to Silicon Motion CEO Wallace C. Kuo, who spoke with Tom's Hardware in an exclusive interview. "Right now, they are aiming for 100 million IOPS — which is huge," Kuo told Tom's Hardware.

Modern AI accelerators, such as Nvidia's B200, feature HBM3E memory bandwidth of around 8 TB/s, which significantly exceeds the capabilities of modern storage subsystems in both overall throughput and latency. Modern PCIe 5.0 x4 SSDs top out at around 14.5 GB/s and deliver 2 to 3 million IOPS for both 4K and 512B random reads. Although 4K blocks are better suited for bandwidth, AI models typically perform small, random fetches, which makes 512B blocks a better fit for their latency-sensitive patterns.

However, increasing the number of I/O operations per second by 33 times is hard, given the limitations of both SSD controllers and NAND memory. Kioxia is already working on an 'AI SSD' based on its XL-Flash memory designed to surpass 10 million 512B IOPS. The company currently plans to release this drive during the second half of next year, possibly to coincide with the rollout of Nvidia's Vera Rubin platform. To get to 100 million IOPS, one might aggregate multiple such 'AI SSDs.' However, the head of SMI believes that achieving 100 million IOPS on a single drive featuring conventional NAND with decent cost and power consumption will be extremely hard, so a new type of memory may be needed.

"I believe they are looking for a media change," said Kuo. "Optane was supposed to be the ideal solution, but it is gone now. 
Kioxia is trying to bring XL-NAND and improve its performance. SanDisk is trying to introduce High Bandwidth Flash (HBF), but honestly, I don't really believe in it. Right now, everyone is promoting their own technology, but the industry really needs something fundamentally new. Otherwise, it will be very hard to achieve 100 million IOPS and still be cost-effective."

Currently, many companies, including Micron and SanDisk, are developing new types of non-volatile memory. However, even the head of Silicon Motion is unsure when these new memory types will be commercially viable.
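The headline's "up to 33X" figure and the scale of the aggregation workaround both fall out of the numbers quoted above (100 million IOPS target versus the 2 to 3 million IOPS of today's high-end drives):

```python
import math

target_iops = 100e6   # Nvidia's reported goal
current_iops = 3e6    # upper end of today's high-end datacenter SSDs

uplift = target_iops / current_iops
drives_needed = math.ceil(uplift)  # drives to aggregate at current per-drive IOPS

print(f"Required uplift: ~{uplift:.0f}x")        # ~33x, matching the headline
print(f"Conventional drives to aggregate: {drives_needed}")  # 34
```

This is why Kuo argues a media change, rather than aggregation alone, may be required: racking 30-plus drives per accelerator for IOPS alone is unattractive on cost and power.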