
AMD reveals next-generation AI chips with OpenAI CEO Sam Altman
AMD said the MI400 chips will be assembled into a full server rack called Helios, which ties thousands of the chips together so they can be used as one "rack-scale" system.
"For the first time, we architected every part of the rack as a unified system," AMD CEO Lisa Su said at a launch event in San Jose, California, on Thursday.
OpenAI CEO Sam Altman appeared on stage with Su and said his company would use the AMD chips.
"When you first started telling me about the specs, I was like, there's no way, that just sounds totally crazy," Altman said. "It's gonna be an amazing thing."
AMD's rack-scale setup will make the chips look to a user like one system, which is important for most artificial intelligence customers like cloud providers and companies that develop large language models. Those customers want "hyperscale" clusters of AI computers that can span entire data centers and use massive amounts of power.
"Think of Helios as really a rack that functions like a single, massive compute engine," said Su, comparing it against Nvidia's Vera Rubin racks, which are expected to be released next year.
AMD's rack-scale technology also enables its latest chips to compete with Nvidia's Blackwell chips, which already come in configurations with 72 graphics-processing units stitched together. Nvidia is AMD's only significant rival in big data center GPUs for developing and deploying AI applications.
OpenAI — a notable Nvidia customer — has been giving AMD feedback on its MI400 roadmap, the chip company said. With the MI400 chips and this year's MI355X chips, AMD plans to compete with Nvidia on price. A company executive told reporters on Wednesday that the chips will cost less to operate thanks to lower power consumption and that AMD is undercutting Nvidia with "aggressive" prices.
So far, Nvidia has dominated the market for data center GPUs, partially because it was the first company to develop the kind of software needed for AI developers to take advantage of chips originally designed to display graphics for 3D games. Over the past decade, before the AI boom, AMD focused on competing against Intel in server CPUs.
Su said that AMD's MI355X can outperform Nvidia's Blackwell chips, despite Nvidia using its "proprietary" CUDA software.
"It says that we have really strong hardware, which we always knew, but it also shows that the open software frameworks have made tremendous progress," Su said.
AMD shares are flat so far in 2025, signaling that Wall Street doesn't yet see it as a major threat to Nvidia's dominance.
Andrew Dieckmann, AMD's general manager for data center GPUs, said Wednesday that AMD's AI chips would cost less to operate and less to acquire.
"Across the board, there is a meaningful cost of acquisition delta that we then layer on our performance competitive advantage on top of, so significant double-digit percentage savings," Dieckmann said.
Over the next few years, big cloud companies and countries alike are poised to spend hundreds of billions of dollars to build new data center clusters around GPUs in order to accelerate the development of cutting-edge AI models. That includes $300 billion this year alone in planned capital expenditures from megacap technology companies.
AMD is expecting the total market for AI chips to exceed $500 billion by 2028, although it hasn't said how much of that market it can claim — Nvidia has over 90% of the market currently, according to analyst estimates.
Both companies have committed to releasing new AI chips on an annual basis, rather than every two years, underscoring how fierce competition has become and how important bleeding-edge AI chip technology is for companies like Microsoft, Oracle and Amazon.
AMD has bought or invested in 25 AI companies in the past year, Su said, including ZT Systems, a server maker it acquired earlier this year that developed the technology AMD needed to build its rack-sized systems.
"These AI systems are getting super complicated, and full-stack solutions are really critical," Su said.
Currently, the most advanced AMD AI chip being installed by cloud providers is its Instinct MI355X, which the company said started shipping in production last month. AMD said it would be available for rent from cloud providers starting in the third quarter.
Companies building large data center clusters for AI want alternatives to Nvidia, not only to keep costs down and provide flexibility, but also to fill a growing need for "inference," or the computing power needed for actually deploying a chatbot or generative AI application, which can use much more processing power than traditional server applications.
"What has really changed is the demand for inference has grown significantly," Su said.
AMD officials said Thursday that they believe their new chips are better for inference than Nvidia's. That's because AMD's chips are equipped with more high-speed memory, which allows bigger AI models to run on a single GPU.
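As a rough sketch of the arithmetic behind that claim — using hypothetical memory capacities and model sizes rather than AMD's or Nvidia's published specifications — the minimum number of GPUs needed just to hold a model's weights shrinks as on-package memory grows:

```python
import math

BYTES_PER_PARAM_FP16 = 2  # 16-bit weights take 2 bytes per parameter

def min_gpus_for_weights(params_billion: float, hbm_gb: float) -> int:
    """Minimum GPUs needed just to hold a model's weights in memory.

    Ignores activations, KV cache and framework overhead, which add more.
    """
    weights_gb = params_billion * BYTES_PER_PARAM_FP16  # e.g. 70B params -> ~140 GB
    return math.ceil(weights_gb / hbm_gb)

# Hypothetical 70-billion-parameter model at FP16 (~140 GB of weights):
print(min_gpus_for_weights(70, hbm_gb=96))   # -> 2 GPUs on a 96 GB card
print(min_gpus_for_weights(70, hbm_gb=192))  # -> 1 GPU on a 192 GB card
```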
The MI355X has seven times the computing power of its predecessor, AMD said. Those chips will be able to compete with Nvidia's B100 and B200 chips, which have been shipping since late last year.
AMD said that its Instinct chips have been adopted by seven of the 10 largest AI customers, including OpenAI, Tesla, xAI, and Cohere.
Oracle plans to offer clusters with over 131,000 MI355X chips to its customers, AMD said.
Meta officials said Thursday that the company uses clusters of AMD's CPUs and GPUs to run inference for its Llama model, and that it plans to buy AMD's next-generation servers.
A Microsoft representative said that it uses AMD chips to serve its Copilot AI features.
AMD declined to say how much its chips cost — it doesn't sell chips by themselves, and end-users usually buy them through a hardware company like Dell or Super Micro Computer — but the company is planning for the MI400 chips to compete on price.
The Santa Clara company is pairing its GPUs with its CPUs and with networking chips from its 2022 acquisition of Pensando to build its Helios racks. That means greater adoption of its AI chips should also benefit the rest of AMD's business. It is also using an open networking technology called UALink to closely integrate its rack systems, in contrast to Nvidia's proprietary NVLink.
AMD claims its MI355X can deliver 40% more tokens — a measure of AI output — per dollar than Nvidia's chips because its chips use less power than its rival's.
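To show how such a tokens-per-dollar comparison can be framed — with every figure below a made-up placeholder, not an AMD or Nvidia number — here is a minimal sketch:

```python
def tokens_per_dollar(tokens_per_second: float,
                      power_kw: float,
                      electricity_per_kwh: float,
                      hourly_hw_cost: float) -> float:
    """Tokens generated per dollar of combined electricity and amortized hardware cost."""
    tokens_per_hour = tokens_per_second * 3600
    cost_per_hour = power_kw * electricity_per_kwh + hourly_hw_cost
    return tokens_per_hour / cost_per_hour

# Hypothetical chip A: lower power draw and lower amortized hardware price.
chip_a = tokens_per_dollar(10_000, power_kw=1.0, electricity_per_kwh=0.10, hourly_hw_cost=2.00)
# Hypothetical chip B: same throughput, higher power draw and higher amortized price.
chip_b = tokens_per_dollar(10_000, power_kw=1.4, electricity_per_kwh=0.10, hourly_hw_cost=2.80)

print(f"chip A advantage: {chip_a / chip_b - 1:.0%}")  # +40% tokens per dollar in this made-up case
```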
Data center GPUs can cost tens of thousands of dollars per chip, and cloud companies usually buy them in large quantities.
AMD's AI chip business is still much smaller than Nvidia's. It said it had $5 billion in AI sales in its fiscal 2024, but JP Morgan analysts are expecting 60% growth in the category this year.

Both the S&P 500 ETF (SPY) and the Nasdaq 100 ETF (QQQ) finished Tuesday's trading session in negative territory following a report from MIT that cast doubt on the sustainability of AI. Elevate Your Investing Strategy: Take advantage of TipRanks Premium at 50% off! Unlock powerful investing tools, advanced data, and expert analyst insights to help you invest with confidence. The report estimates that U.S. companies have invested between $35 and $40 billion in AI, though the returns have been underwhelming. 'Just 5% of integrated AI pilots are extracting millions in value, while the vast majority remain stuck with no measurable [profit and loss] impact,' said MIT. The report surveyed hundreds of leaders and employees and collected data from 300 public AI announcements. Over the weekend, OpenAI CEO Sam Altman said that he believes the AI industry is experiencing a bubble, reported The Verge. 'I do think some investors are likely to lose a lot of money, and I don't want to minimize that, that sucks,' Altman said. 'There will be periods of irrational exuberance.' Meanwhile, the White House announced that President Trump is working to set up a bilateral meeting between Ukrainian President Volodymyr Zelenskyy and Russian President Vladimir Putin in an attempt to secure a ceasefire or truce. Trump added that he was open to attending the meeting. Furthermore, Trump has pledged air support for Ukraine as part of a security guarantee package while insisting that U.S. troops would not set foot on Ukrainian territory. Discussions surrounding these guarantees between the U.S., Ukraine, and several other European nations are set to begin in the coming days. Trump's efforts to broker peace between several nations haven't exactly improved his ratings. According to a Reuters/Ipsos poll ended August 18, Trump's approval rating is still at a term-low of 40%, remaining unchanged from late July. 54% of the respondents worried that Trump was too closely aligned with Russia. Trump met with Putin last week in Anchorage, Alaska to try and resolve the Russia-Ukraine war. To end on a positive note, S&P Global affirmed the U.S. long-term credit rating of AA+, citing elevated tariff revenue that is expected to offset the tax breaks and spending measures from The One Big Beautiful Bill. 'Amid the rise in effective tariff rates, we expect meaningful tariff revenue to generally offset weaker fiscal outcomes that might otherwise be associated with the recent fiscal legislation, which contains both cuts and increases in tax and spending,' said S&P.