Chipmaker Micron's shares slump as tepid margin forecast eclipses AI prospects

Reuters | 21-03-2025

March 21 (Reuters) - Micron's (MU.O) shares fell 8% on Friday, as its dour margin forecast took the shine off a robust quarterly revenue outlook driven by demand for its semiconductors used in artificial intelligence tasks.
Micron, one of only three providers of high-bandwidth memory (HBM) chips for data-intensive generative AI applications, forecast adjusted gross margin below estimates on Thursday, as lower pricing for consumer memory chips hits profitability.
After a 1.4% drop in 2024, Micron's shares have gained more than 13% this year as investors bet on improving consumer memory chip pricing and expect the company to benefit from its essential position in AI supply chains.
"NAND Flash oversupply remains a drag on margins," Rosenblatt analysts said in a note on Friday, referring to a type of memory chip used in consumer electronics like smartphones and personal computers.
Soft end-market demand and aggressive buying by electronics suppliers during the pandemic led to oversupply of the consumer memory chips, resulting in weaker pricing.
Micron forecast third-quarter adjusted gross margin of about 36.5%, marginally below analysts' average estimate of 36.9%, according to data compiled by LSEG. The forecast represents a sequential drop of 3 percentage points.
"There has been a challenging industry environment in NAND," Micron's chief business officer Sumit Sadana said on Thursday during a post-earnings call.
Micron has been cutting back on NAND production; the resulting underutilization spreads fixed costs over a smaller output, hurting margins.
AI DEMAND PERSISTS
The ramp-up of HBM production to meet strong demand for its AI memory chips from GPU market leaders like Nvidia (NVDA.O) has also pressured margins.
Micron forecast third-quarter revenue above estimates, owing to AI-linked strength.
"We see high-bandwidth memory as a key growth driver," Morningstar analysts said, adding that they expect "continued AI and data center demand."

Related Articles

Campaigners urge UK watchdog to limit use of AI after report of Meta's plan to automate checks

The Guardian | 4 hours ago

Internet safety campaigners have urged the UK's communications watchdog to limit the use of artificial intelligence in crucial risk assessments, following a report that Mark Zuckerberg's Meta was planning to automate checks.
Ofcom said it was 'considering the concerns' raised by the letter, which followed a report last month that up to 90% of all risk assessments at the owner of Facebook, Instagram and WhatsApp would soon be carried out by AI.
Social media platforms are required under the UK's Online Safety Act to gauge how harm could take place on their services and how they plan to mitigate those potential harms, with a particular focus on protecting child users and preventing illegal content from appearing. The risk assessment process is viewed as a key aspect of the act.
In a letter to Ofcom's chief executive, Dame Melanie Dawes, organisations including the Molly Rose Foundation, the NSPCC and the Internet Watch Foundation described the prospect of AI-driven risk assessments as a 'retrograde and highly alarming step'.
'We urge you to publicly assert that risk assessments will not normally be considered as 'suitable and sufficient', the standard required by … the Act, where these have been wholly or predominantly produced through automation.'
The letter also urged the watchdog to 'challenge any assumption that platforms can choose to water down their risk assessment processes'.
A spokesperson for Ofcom said: 'We've been clear that services should tell us who completed, reviewed and approved their risk assessment. We are considering the concerns raised in this letter and will respond in due course.'
Meta said the letter deliberately misstated the company's approach to safety and that it was committed to high standards and to complying with regulations.
'We are not using AI to make decisions about risk,' said a Meta spokesperson. 'Rather, our experts built a tool that helps teams identify when legal and policy requirements apply to specific products. We use technology, overseen by humans, to improve our ability to manage harmful content, and our technological advancements have significantly improved safety outcomes.'
The Molly Rose Foundation organised the letter after NPR, a US broadcaster, reported last month that updates to Meta's algorithms and new safety features would mostly be approved by an AI system and no longer scrutinised by staffers.
According to one former Meta executive, who spoke to NPR anonymously, the change would allow the company to launch app updates and features on Facebook, Instagram and WhatsApp more quickly but would create 'higher risks' for users, because potential problems are less likely to be prevented before a new product is released to the public. NPR also reported that Meta was considering automating reviews for sensitive areas including youth risk and monitoring the spread of falsehoods.

Minister says AI 'does lie' but defends Government amid copyright row

South Wales Guardian | 4 hours ago

Peter Kyle acknowledged the technology was 'not flawless' as he insisted the Government would 'never sell downstream' the rights of artists in the UK.
He also said he had 'mistakenly' said his preferred option on AI and copyright was requiring rights-holders to 'opt out' of their material being used by tech companies, and had since 'gone back to the drawing board'.
Ministers have faced a backlash from major figures in the creative industries over their approach to copyright, with Sir Elton John this week describing the situation as an 'existential issue'.
The Government is locked in a standoff with the House of Lords, which has demanded that artists be offered immediate copyright protection as an amendment to the Data (Use and Access) Bill. Peers have attempted to change the legislation by adding a commitment to introduce transparency requirements aimed at ensuring rights-holders are able to see when their work has been used and by whom.
Asked about the risk of AI producing unreliable information, Mr Kyle said 'people need to understand that AI is not flawless, and that AI does lie because it's based on human characteristics'.
'Now it is getting more precise as we move forward. It's getting more powerful as we move forward,' he told Sky News's Sunday Morning With Trevor Phillips. 'But as with every single technology that comes into society, you can only safely use it and wisely use it by understanding how it works.'
He added: 'We are going to legislate for AI going forward and we're going to balance it with the same legislation that we'll bring in to modernise the copyright legislation as well.'
The Government has said it will address copyright issues as a whole after the more than 11,500 responses to its consultation on the impact of AI have been reviewed, rather than in what it has branded 'piecemeal' legislation.
Among the proposals had been a suggestion that tech companies could be given free access to British music, films and books in order to train AI models without permission or payment, with artists required to 'opt out' if they do not want their work to be used.
Asked about the prospect of an opt-out clause, Mr Kyle told the BBC's Sunday With Laura Kuenssberg programme: 'I always had on the table from the outset an opt-out clause.
'But I mistakenly said this was my preferred option that had more prominence than perhaps some of the creatives wanted it to have, and I've now sort of gone back to the drawing board on that, because I am listening to what people want.'
Last month hundreds of stars including Sir Elton, Sir Paul McCartney and Kate Bush signed a joint letter to Sir Keir Starmer urging the Prime Minister to introduce safeguards against work being plundered for free.

US attacks on science and research a 'great gift' to China on artificial intelligence, former OpenAI board member says

The Guardian | 5 hours ago

The US administration's targeting of academic research and international students is a 'great gift' to China in the race to compete on artificial intelligence, former OpenAI board member Helen Toner has said.
The director of strategy at Georgetown's Center for Security and Emerging Technology (CSET) joined the board of OpenAI in 2021 after a career studying AI and the relationship between the United States and China.
Toner, a 33-year-old University of Melbourne graduate, was on the board for two years until a falling out with founder Sam Altman in 2023. Altman was fired by the board over claims that he was not 'consistently candid' in his communications and that the board did not have confidence in his ability to lead. In the chaotic months that followed, Altman was re-hired and three members of the board, including Toner, were ousted instead. Those events will soon be the subject of a planned film, with the director of Challengers and Call Me By Your Name, Luca Guadagnino, reportedly in talks to direct.
The saga, according to Time magazine – which named her one of the Top 100 most influential people on AI in 2024 – resulted in the Australian having 'the ear of policymakers around the world trying to regulate AI'.
At CSET, Toner has a team of 60 people working on AI research, producing white papers and briefing policymakers on the use of AI in the military, workforce, biosecurity and cybersecurity sectors. 'A lot of my work focuses on some combination of AI, safety and security issues, the Chinese AI ecosystem and also what gets called frontier AI,' Toner said.
Toner said the United States is concerned about losing the AI race to China and that, while US chip export controls make it harder for China to get the compute power to compete, the country is still making a 'serious push' on AI, as highlighted by the surprise success of Chinese generative AI model DeepSeek earlier this year.
The Trump administration's attacks on research and bans on international students are a 'gift' to China in the AI race, Toner said. 'Certainly it's a great gift to [China] the way that the US is currently attacking scientific research, and foreign talent – which is a huge proportion of the USA workforce – is immigrants, many of them coming from China,' she said. 'That is a big … boon to China in terms of competing with the US.'
The AI boom has led to claims and concerns about a job wipeout caused by companies using AI to replace work that had otherwise been done by humans. Dario Amodei, the CEO of Anthropic, the company behind the generative AI model Claude, told Axios last week that AI could reduce entry-level white-collar jobs by 50% and result in 20% unemployment in the next five years.
Toner said Amodei 'often says things that seem directionally right to me, but in terms of … timeline and numbers often seem quite aggressive', but added that disruption in the jobs market had already started to show. 'The kind of things that [language model-based AI] can do best at the moment … if you can give them a bite-size task – not a really long term project, but something that you might not need ages and ages to do and something where you still need human review,' she said. 'That's a lot of the sort of work that you give to interns or new grads in white-collar industries.'
Experts have suggested that companies which invested heavily in AI are now being pressed to show the results of that investment. Toner said that while the real-world use of AI can generate a lot of value, it is less clear which business models and which players will benefit from that value. Dominant uses might be a mix of different AI services plugged into existing applications – like phone keyboards that can now transcribe voices – as well as stand-alone chatbots, but it is 'up in the air' which type of AI would actually dominate, she said.
Toner said the push for profitability was less risky than the overall race to be first in AI advancements. 'It means that these companies are all making it up as they go along and figuring out as they go how to make trade-offs between getting products out the door, doing extra testing, putting in extra guardrails, putting in measures that are supposed to make the model more safe but also make it more annoying to use,' she said. 'They're figuring that all out on the fly, and … they're making those decisions while under pressure to go as fast as they can.'
Toner said she was worried about the idea of 'gradual disempowerment to AI' – 'meaning a world where we just gradually hand over more control over different parts of society and the economy and government to AI systems, and then realise a bit too late that it's not going the way that we wanted, but we can't really turn back'.
She is most optimistic about AI's use in improving science and drug discovery, and about self-driving services like Waymo reducing fatalities on the roads. 'With AI, you never want to be looking for making the AI perfect, you want it to be better than the alternative. And when it comes to cars, the alternative is thousands of people dying per year.
'If you can improve on that, that's amazing. You're saving many, many people.'
Toner joked that her friends had been sending her options on who might play her in the film. 'Any of the names that friends of mine have thrown my way are all these incredibly beautiful actresses,' she said. 'So I'll take any of those, whoever they choose.'
