Dell raises annual profit forecast on strong AI server demand

The Sun · 3 days ago

Dell raised its annual profit forecast on Thursday, signaling growing demand for its AI servers equipped with Nvidia's powerful chips.
Shares of the company, whose servers are used by customers such as Elon Musk's AI startup xAI and CoreWeave, rose 2% in extended trading.
Dell and Super Micro Computer have benefited from growing demand for such servers, but the high cost of producing them and tough competition have pressured margins.
'We generated $12.1 billion in AI orders this quarter alone, surpassing the entirety of shipments in all of fiscal 2025 and leaving us with $14.4 billion in backlog,' Dell's Chief Operating Officer Jeff Clarke said.
The results follow the U.S. Department of Energy's announcement on Thursday that it would launch a new supercomputer, named Doudna, which will use Dell and Nvidia's advanced technology to perform complex computing tasks.
Dell now expects annual adjusted profit to be $9.40 per share, compared with its prior forecast of $9.30 per share. The company also reiterated its annual revenue outlook.
It forecast second-quarter revenue to be between $28.5 billion and $29.5 billion, above analysts' average estimate of $25.05 billion, according to data compiled by LSEG.
Dell's adjusted profit forecast for the second quarter of $2.25 per share was also above estimates of $2.09.
The company's first-quarter revenue of $23.38 billion beat expectations of $23.14 billion, while its adjusted profit of $1.55 per share missed estimates of $1.69.
'We note potential near-term margin pressure from competitive pricing, tariffs, and geographic mix shifts,' Shreya Gheewala, equity analyst at CFRA Research, said.
Revenue from Dell's infrastructure solutions group, which includes storage, software and server offerings, rose 12%, while revenue from its client solutions group, which houses its PC business, rose 5%.
The PC refresh cycle is slower than in past cycles, but there are signs users are moving to Windows 11 PCs, including AI PCs, Clarke added.



Related Articles

Asian stocks rally as Trump's ‘Liberation Day' tariffs hit court hurdle

Free Malaysia Today · 2 hours ago

Investors reacted to the US court blocking the tariffs by embracing risk assets, lifting the dollar, S&P 500 futures and Nasdaq futures. (AP pic)

SYDNEY: Asian shares and Wall Street futures climbed on Thursday after a US federal court blocked President Donald Trump's 'Liberation Day' tariffs from going into effect, sending the dollar up against safe-haven currencies.

The Manhattan-based Court of International Trade ruled that Trump overstepped his authority by imposing across-the-board duties on imports from nations that sell more to the US than they buy. The White House quickly appealed the decision, and could take it all the way to the Supreme Court if needed, but in the meantime the ruling offered some hope that Trump might back away from the highest tariff levels he had threatened.

'We're just trying to work out what it might mean basically, but obviously the market is doing a knee-jerk reaction,' said Ray Attrill, head of FX strategy at NAB. 'So this may be an absolute storm in a teacup or potentially something more significant.'

Investors embraced risk assets: Japan's Nikkei quickly rose 1%, while South Korean shares added 0.8%. MSCI's broadest index of Asia-Pacific shares outside Japan edged up, awaiting the opening of Chinese markets. S&P 500 futures climbed 1.5%, while Nasdaq futures rose 1.8%.

Nasdaq futures had already been lifted by relief over earnings from Nvidia, which beat sales estimates. The chipmaker and AI darling also projected strong revenue for the current quarter, sending its shares up 4.4% after hours.

That news helped offset a Financial Times report that the White House had ordered US firms that offer software used to design semiconductors to stop selling their services to Chinese groups. The New York Times separately reported that the US had suspended some sales to China of critical US technologies, including those related to jet engines, semiconductors and certain chemicals.
Pushing out rate cuts

The news of the court decision hit traditional safe havens, lifting the dollar 0.8% on the Swiss franc to 0.336. It gained 0.6% on the Japanese yen to 145.76 yen, while the euro dipped 0.5% to US$1.1230.

Yields on 10-year Treasuries rose 3 basis points to 4.51%, and markets further trimmed the chance of a Federal Reserve rate cut anytime soon. Minutes of the last Fed meeting showed 'almost all participants commented on the risk that inflation could prove to be more persistent than expected' due to Trump's tariffs. A rate cut in July is now seen as just a 22% chance, while the chance of a September cut has come down to around 60%, having been more than fully priced a month ago.

In commodity markets, gold slipped 0.9% to US$3,259 an ounce. Oil prices extended a rally begun on supply concerns, as Opec+ agreed to leave its output policy unchanged and the US barred Chevron from exporting Venezuelan crude.

Hey chatbot, is this true? AI's answer: not really, say fact-checkers

Malay Mail · 3 hours ago

WASHINGTON, June 2: As misinformation exploded during India's four-day conflict with Pakistan, social media users turned to an AI chatbot for verification, only to encounter more falsehoods, underscoring its unreliability as a fact-checking tool.

With tech platforms reducing human fact-checkers, users are increasingly relying on AI-powered chatbots, including xAI's Grok, OpenAI's ChatGPT, and Google's Gemini, in search of reliable information. 'Hey @Grok, is this true?' has become a common query on Elon Musk's platform X, where the AI assistant is built in, reflecting the growing trend of seeking instant debunks on social media.

But the responses are often themselves riddled with misinformation. Grok, now under renewed scrutiny for inserting 'white genocide,' a far-right conspiracy theory, into unrelated queries, wrongly identified old video footage from Sudan's Khartoum airport as a missile strike on Pakistan's Nur Khan airbase during the country's recent conflict with India. Unrelated footage of a building on fire in Nepal was misidentified as 'likely' showing Pakistan's military response to Indian strikes.

'The growing reliance on Grok as a fact-checker comes as X and other major tech companies have scaled back investments in human fact-checkers,' McKenzie Sadeghi, a researcher with the disinformation watchdog NewsGuard, told AFP. 'Our research has repeatedly found that AI chatbots are not reliable sources for news and information, particularly when it comes to breaking news,' she warned.

'Fabricated'

NewsGuard's research found that 10 leading chatbots were prone to repeating falsehoods, including Russian disinformation narratives and false or misleading claims related to the recent Australian election.
In a recent study of eight AI search tools, the Tow Center for Digital Journalism at Columbia University found that chatbots were 'generally bad at declining to answer questions they couldn't answer accurately, offering incorrect or speculative answers instead.'

When AFP fact-checkers in Uruguay asked Gemini about an AI-generated image of a woman, it not only confirmed its authenticity but fabricated details about her identity and where the image was likely taken. Grok recently labelled a purported video of a giant anaconda swimming in the Amazon River as 'genuine,' even citing credible-sounding scientific expeditions to support its false claim. In reality, the video was AI-generated, AFP fact-checkers in Latin America reported, noting that many users cited Grok's assessment as evidence the clip was real.

Such findings have raised concerns as surveys show that online users are increasingly shifting from traditional search engines to AI chatbots for information gathering and verification. The shift also comes as Meta announced earlier this year it was ending its third-party fact-checking program in the United States, turning over the task of debunking falsehoods to ordinary users under a model known as 'Community Notes,' popularized by X. Researchers have repeatedly questioned the effectiveness of 'Community Notes' in combating falsehoods.

'Biased answers'

Human fact-checking has long been a flashpoint in a hyperpolarized political climate, particularly in the United States, where conservative advocates maintain it suppresses free speech and censors right-wing content, something professional fact-checkers vehemently reject. AFP currently works in 26 languages with Facebook's fact-checking program, including in Asia, Latin America, and the European Union. The quality and accuracy of AI chatbots can vary, depending on how they are trained and programmed, prompting concerns that their output may be subject to political influence or control.
Musk's xAI recently blamed an 'unauthorized modification' for causing Grok to generate unsolicited posts referencing 'white genocide' in South Africa. When AI expert David Caswell asked Grok who might have modified its system prompt, the chatbot named Musk as the 'most likely' culprit. Musk, the South African-born billionaire backer of President Donald Trump, has previously peddled the unfounded claim that South Africa's leaders were 'openly pushing for genocide' of white people.

'We have seen the way AI assistants can either fabricate results or give biased answers after human coders specifically change their instructions,' Angie Holan, director of the International Fact-Checking Network, told AFP. 'I am especially concerned about the way Grok has mishandled requests concerning very sensitive matters after receiving instructions to provide pre-authorized answers.' — AFP

