Latest news with #AdvancingAI


Business Insider
13 hours ago
AMD Stock Slips 2% Despite Analyst Updates
Advanced Micro Devices (AMD) stock was down on Friday after analysts updated their coverage of the semiconductor company's shares. The updates followed its Advancing AI event a day earlier, which included new chip announcements. The event spurred several top analysts to update their coverage of AMD stock:

- Oppenheimer analyst Rick Schafer reiterated a Hold rating.
- Wells Fargo analyst Aaron Rakers maintained a Buy rating and $120 price target.
- Evercore ISI analyst Mark Lipacis kept a Buy rating and raised his price target to $144 from $120.
- Bank of America Securities analyst Vivek Arya reiterated a Buy rating and $130 price target.
- Robert W. Baird analyst Tristan Gerra maintained a Buy rating and $140 price target.
- J.P. Morgan analyst Harlan Sur kept a Hold rating and $120 price target.
- Stifel Nicolaus analyst Ruben Roy reiterated a Buy rating and $132 price target.
- Roth MKM analyst Sujeeva De Silva maintained a Buy rating and raised his price target to $150 from $125.
- Benchmark Co. analyst Cody Acree kept a Buy rating and $170 price target.
- Morgan Stanley analyst Joseph Moore reiterated a Hold rating and $121 price target.
- Barclays analyst Thomas O'Malley maintained a Buy rating and $130 price target.
- Citi analyst Christopher Danely kept a Hold rating and raised his price target to $120 from $100.

AMD Stock Movement Today

AMD stock was down 1.95% on Friday morning, extending a 3.81% year-to-date decline. The shares have also fallen 25.77% over the past 12 months.

Is AMD Stock a Buy, Sell, or Hold?

Turning to Wall Street, the analysts' consensus rating for AMD is Moderate Buy, based on 22 Buy and 11 Hold ratings issued over the past three months. With that comes an average AMD stock price target of $129.41, representing a potential 11.52% upside for the shares.
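As a quick check on the consensus math, here is a minimal sketch using only the figures quoted above; the implied current share price is back-calculated for illustration, not a quoted price.

```python
# Sketch of the price-target "upside" arithmetic from the article.
avg_target = 129.41   # average analyst price target (USD), per the article
upside = 0.1152       # quoted potential upside (11.52%)

# Back out the share price implied by those two figures (illustrative only).
implied_price = avg_target / (1 + upside)
print(f"Implied share price: ${implied_price:.2f}")   # ~$116.04

# Conversely, upside from a current price and a target:
def upside_pct(price: float, target: float) -> float:
    return (target / price - 1) * 100

print(f"Upside: {upside_pct(implied_price, avg_target):.2f}%")  # ~11.52%
```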

The Hindu
a day ago
Google DeepMind builds AI model to predict cyclones; Foxconn sends 97% of India iPhone exports to U.S.; AMD unveils AI server
Google DeepMind builds AI model to predict cyclones

Google DeepMind and Google Research have launched Weather Lab, a new website that shares their AI weather models. The model can predict a cyclone's formation, track, intensity, size and shape up to 15 days ahead, Google said in a blog post, and the experimental system can also generate 50 different scenarios for a storm. The research teams have released a new paper detailing the model, along with an archive on Weather Lab of historical cyclone track data, evaluations and backtesting. Google added that in internal testing it found the model's predictions were as accurate as, and often more accurate than, current physics-based methods, and that it has partnered with the U.S. National Hurricane Center (NHC) to further evaluate the model's effectiveness. The Weather Lab website also shows how the AI models perform compared with traditional models.

Foxconn sends 97% of India iPhone exports to U.S.

Nearly all the iPhones exported by Foxconn from India went to the United States between March and May, customs data showed, far above the 2024 average of 50% and a clear sign of Apple's efforts to bypass high U.S. tariffs imposed on China. The numbers, reported by Reuters for the first time, show Apple has realigned its India exports to almost exclusively serve the U.S. market; previously the devices were more widely distributed to countries including the Netherlands, the Czech Republic and Britain. During March-May, Foxconn exported iPhones worth $3.2 billion from India, with an average of 97% shipped to the United States, compared with a 2024 average of 50.3%, according to commercially available customs data seen by Reuters. Foxconn's India iPhone shipments to the United States in May 2025 were worth nearly $1 billion, the second-highest monthly total ever after the record $1.3 billion worth of devices shipped in March.

AMD unveils AI server

Advanced Micro Devices CEO Lisa Su on Thursday unveiled a new AI server for 2026 that aims to challenge Nvidia's flagship offerings, as OpenAI's CEO said the ChatGPT creator would adopt AMD's latest chips. Su took the stage at a developer conference in California, called 'Advancing AI,' to discuss the MI350 series and MI400 series AI chips that she said would compete with Nvidia's Blackwell line of processors. The MI400 series will be the basis of a new server called 'Helios' that AMD plans to release next year. The move comes as competition between Nvidia and other AI chip firms has shifted away from selling individual chips toward selling servers packed with scores or even hundreds of processors, woven together with networking chips from the same company. The AMD Helios servers will house 72 of AMD's MI400 series chips, making them comparable to Nvidia's current NVL72 servers.
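The 50-scenario forecast is an ensemble method: many plausible futures are sampled rather than one deterministic track. As a rough illustration of the idea only, here is a toy random-walk sketch; the drift and noise parameters are invented, and this is not DeepMind's method or data.

```python
import random

# Toy ensemble track forecast: sample many plausible storm tracks from
# a stochastic model. Conceptual sketch only, not DeepMind's approach.
def sample_track(start_lat, start_lon, hours=360, step=6):
    """One scenario: a random-walk path sampled at 6-hour intervals."""
    lat, lon = start_lat, start_lon
    track = [(lat, lon)]
    for _ in range(hours // step):
        lat += random.gauss(0.15, 0.10)   # slow poleward drift plus noise
        lon += random.gauss(-0.30, 0.15)  # westward drift plus noise
        track.append((lat, lon))
    return track

# 50 scenarios out to 15 days (360 hours), as the article describes
ensemble = [sample_track(12.0, -45.0) for _ in range(50)]
print(f"{len(ensemble)} scenarios x {len(ensemble[0])} positions each")
```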


Business Insider
2 days ago
AMD Launches New Line of MI350 AI Chips
AMD (AMD) launched its new MI350 line of AI chips earlier today and shared details about its next-generation MI400 GPUs at its Advancing AI event in San Jose. The MI350X and MI355X are built to compete with Nvidia's (NVDA) Blackwell chips, offering four times more AI compute power and 35 times better inferencing than AMD's previous generation. Each MI350 chip includes 288GB of HBM3E memory, more than Nvidia's 192GB per chip, although Nvidia pairs two chips for a total of 384GB. AMD will also offer MI350 platforms that combine up to 8 GPUs (2.3TB of memory), with air cooling for setups of up to 64 GPUs and liquid cooling for larger setups of up to 128 GPUs.

Furthermore, AMD previewed its MI400 chips, which will launch in 2026. These will offer up to 432GB of faster HBM4 memory and bandwidth of up to 19.6TB per second to compete with Nvidia's upcoming GB300 Blackwell Ultra and Rubin AI chips. To help developers access its GPUs, AMD also launched the AMD Developer Cloud, which lets users access MI300 and MI350 GPUs online without needing to buy them, similar to Nvidia's DGX Cloud Lepton service launched last month.

However, AMD's stock performance has lagged Nvidia's. AMD is down about 24% over the past year and 0.2% year-to-date, while Nvidia has gained 19% over the past year and 7% year-to-date. Both companies were also impacted by the U.S. export ban on AI chips to China: AMD expects an $800 million hit, while Nvidia has written down $4.5 billion and anticipates missing out on $8 billion in sales this quarter.

Is AMD a Buy, Sell, or Hold?

Turning to Wall Street, analysts have a Moderate Buy consensus rating on AMD stock based on 22 Buys, 10 Holds, and zero Sells assigned in the past three months. Furthermore, the average AMD price target of $127.93 per share implies 7.4% upside potential.
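The platform memory figure is simple arithmetic on the per-chip capacity; a minimal sketch using only the numbers quoted above:

```python
# Per-chip HBM3E capacity times GPU count gives the platform total.
hbm3e_per_gpu_gb = 288        # MI350-series memory per chip, per the article
gpus_per_platform = 8         # largest single-platform configuration

total_gb = hbm3e_per_gpu_gb * gpus_per_platform
print(f"{total_gb} GB = {total_gb / 1000:.1f} TB")   # 2304 GB = 2.3 TB
```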

Yahoo
2 days ago
AMD gains on Nvidia? Lisa Su reveals new chips in heated AI inference race
Advanced Micro Devices Inc (NASDAQ:AMD) made an aggressive bid for dominance in AI inference at its Advancing AI event Thursday, unveiling new chips that directly challenge NVIDIA Corporation's (NASDAQ:NVDA) supremacy in the data center GPU market. AMD claims its latest Instinct MI355X accelerators surpass Nvidia's most advanced Blackwell GPUs in inference performance while offering a significant cost advantage, a critical selling point as hyperscalers look to scale generative AI services affordably.

The MI355X, which has just begun volume shipments, delivers a 35-fold generational leap in inference performance and, according to AMD, up to 40% more tokens per dollar than Nvidia's flagship chips. That performance boost, coupled with lower power consumption, is designed to help AMD undercut Nvidia's offerings on total cost of ownership at a time when major AI customers are re-evaluating procurement strategies.

'What has really changed is the demand for inference has grown significantly,' AMD CEO Lisa Su said at the event in San Jose. 'It says that we have really strong hardware, which we always knew, but it also shows that the open software frameworks have made tremendous progress.'

AMD's argument hinges not just on silicon performance, but on architecture and economics. By pairing its GPUs with its own CPUs and networking chips inside open 'rack-scale' systems branded Helios, AMD is building full-stack solutions to rival Nvidia's proprietary end-to-end ecosystem. These systems, launching next year with the MI400 series, are designed to enable hyperscale inference clusters while reducing energy and infrastructure costs.

Su highlighted how companies like OpenAI, Meta Platforms Inc (NASDAQ:META), and Microsoft Corporation (NASDAQ:MSFT) are now running inference workloads on AMD chips, with OpenAI CEO Sam Altman confirming a close partnership on infrastructure innovation. 'It's gonna be an amazing thing,' Altman said during the event. 'When you first started telling me about the specs, I was like, there's no way, that just sounds totally crazy.'

Oracle Corporation (NYSE:ORCL) Cloud Infrastructure intends to offer massive clusters of AMD chips, with plans to deploy up to 131,072 MI355X GPUs, positioning AMD as a scalable alternative to Nvidia's tightly integrated, and often more expensive, solutions. AMD officials emphasized the cost benefits, asserting that customers could achieve double-digit-percent savings on power and capital expenditures compared with Nvidia's GPUs.

Despite the positive news, AMD shares were down roughly 2% ahead of the market close. Wall Street remains cautious, but AMD's moves suggest it is committed to challenging Nvidia's leadership not only with performance parity, but also with a differentiated value and systems strategy. While Nvidia still commands more than 90% of the data center AI chip market, AMD's targeted push into inference, where workloads demand high efficiency and lower costs, marks a strategic front in the battle for AI dominance. With generative AI models driving a surge in inference demand across enterprises, AMD is betting that performance per dollar will matter more than ever.
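To make the tokens-per-dollar metric concrete, here is a minimal sketch. The throughput and hourly-cost inputs below are hypothetical placeholders; only the roughly 40% relative advantage is the figure AMD cites.

```python
# Sketch of a tokens-per-dollar comparison (hypothetical inputs).
def tokens_per_dollar(tokens_per_sec: float, dollars_per_hour: float) -> float:
    """Tokens generated per dollar of instance time."""
    return tokens_per_sec * 3600 / dollars_per_hour

amd_chip = tokens_per_dollar(tokens_per_sec=14_000, dollars_per_hour=10.0)   # hypothetical
nvda_chip = tokens_per_dollar(tokens_per_sec=12_000, dollars_per_hour=12.0)  # hypothetical

print(f"Relative advantage: {amd_chip / nvda_chip - 1:.0%}")  # 40% with these inputs
```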


Associated Press
2 days ago
AMD Surpasses 30x25 Goal, Sets Ambitious New 20x Efficiency Target
At a Glance: At AMD, energy efficiency has long been a core design principle aligned to our roadmap and product strategy. For more than a decade, we've set public, time-bound goals to dramatically increase the energy efficiency of our products and have consistently met and exceeded those targets. Today, I'm proud to share that we've done it again, and we're setting the next five-year vision for energy-efficient design.

Today at Advancing AI, we announced that AMD has surpassed our 30x25 goal, which we set in 2021 to improve the energy efficiency of AI-training and high-performance computing (HPC) nodes by 30x from 2020 to 2025.1 This was an ambitious goal, and we're proud to have exceeded it, but we're not stopping here. As AI continues to scale, and as we move toward true end-to-end design of full AI systems, it's more important than ever for us to continue our leadership in energy-efficient design. That's why today we're also setting our sights on a bold new target: a 20x improvement in rack-scale energy efficiency for AI training and inference by 2030, from a 2024 base year.2

Building on a Decade of Leadership

This marks the third major milestone in a multi-decade effort to advance efficiency across our computing platforms. In 2020, we exceeded our 25x20 goal by improving the energy efficiency of AMD mobile processors 25-fold in just six years.3 The 30x25 goal built on that momentum, targeting AI and HPC workloads on accelerated nodes. And now, the 20x-by-2030 rack-scale goal reflects the next frontier: not just more efficient chips, but smarter and more efficient systems, from silicon to full rack integration, to address data-center-level power requirements.

Surpassing 30x25

Our 30x25 goal was rooted in a clear benchmark: to improve the energy efficiency of our accelerated compute nodes by 30x compared to a 2020 base year. This goal represented more than a 2.5x acceleration over industry trends from the previous five years (2015-2020). As of mid-2025, we've gone beyond that, achieving a 38x gain over the base system using a current configuration of four AMD Instinct™ MI355X GPUs and one 5th Gen AMD EPYC™ CPU.4 That equates to a 97% reduction in energy for the same performance compared to systems from just five years ago. We achieved this through deep architectural innovation, aggressive optimization of performance-per-watt, and relentless engineering across our CPU and GPU product lines.

A New Goal for the AI Era

As workloads scale and demand continues to rise, node-level efficiency gains won't keep pace. The most significant efficiency impact can be realized at the system level, which is where our 2030 goal is focused. We believe we can achieve a 20x increase in rack-scale energy efficiency for AI training and inference from 2024 to 2030, which AMD estimates exceeds the industry improvement trend from 2018 to 2025 by almost 3x. This reflects performance-per-watt improvements across the entire rack, including CPUs, GPUs, memory, networking, storage and hardware-software co-design, based on our latest designs and roadmap projections. The shift from node to rack is made possible by our rapidly evolving end-to-end AI strategy and is key to scaling data center AI in a more sustainable way.
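To make the arithmetic behind these figures concrete, here is a minimal sketch using only the numbers quoted above; the annualized rate is a derivation from the stated goal, not an AMD-published number.

```python
# Arithmetic behind the efficiency figures quoted above.
node_gain = 38                          # achieved 30x25 result vs. 2020 base
energy_reduction = 1 - 1 / node_gain
print(f"Energy for the same work: {energy_reduction:.1%} lower")   # ~97.4%, the "97%" cited

rack_goal = 20                          # 2030 rack-scale goal vs. 2024 base
annual = rack_goal ** (1 / (2030 - 2024)) - 1
print(f"Implied annual improvement: {annual:.0%} per year")        # ~65% (derived)
```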
What This Means in Practice

A 20x rack-scale efficiency improvement at nearly 3x the prior industry rate has major implications. Using the training of a typical AI model in 2025 as a benchmark, the gains could enable dramatic reductions in the racks and electricity required.5 These projections are based on AMD's silicon and system design roadmap and a measurement methodology validated by energy-efficiency expert Dr. Jonathan Koomey. 'By grounding the 2030 target in system-level metrics and transparent methodology, AMD is raising the bar for the industry,' Dr. Koomey said. 'The target gains in rack-scale efficiency will enable others across the ecosystem, from model developers to cloud providers, to scale AI compute more sustainably and cost-effectively.'

Looking Beyond Hardware

Our 20x goal reflects what we control directly: hardware and system-level design. But we know that even greater delivered AI model efficiency gains, of up to 5x over the goal period, will be possible as software developers discover smarter algorithms and continue innovating with lower-precision approaches at current rates. When those factors are included, overall energy efficiency for training a typical AI model could improve by as much as 100x by 2030 (see the sketch at the end of this piece).6 While AMD is not claiming that full multiplier in our own goal, we're proud to provide the hardware foundation that enables it, and to support the open ecosystem and developer community working to unlock those gains. Whether through open standards, our open software approach with AMD ROCm™, or our close collaboration with partners, AMD remains committed to helping innovators everywhere scale AI more efficiently.

What Comes Next

As we close one chapter with 30x25 and open the next with this new rack-scale goal, we remain committed to transparency, accountability, and measurable progress. This approach sets AMD apart, and it is necessary as we advance how the industry approaches efficiency while demand for and deployment of AI continue to expand. We're excited to keep pushing the limits, not just of performance, but of what's possible when efficiency leads the way. As work toward the goal progresses, we will continue to share updates and the effects these gains are enabling across the ecosystem.
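As the sketch referenced above, the 'up to 100x' ceiling is the simple product of the hardware goal and the projected software gains quoted in this piece:

```python
# Composing the quoted goals: rack-scale hardware gain times projected
# software/algorithmic gains gives the "up to 100x" ceiling.
hardware_gain = 20    # AMD's 2030 rack-scale efficiency goal
software_gain = 5     # "up to 5x" from smarter algorithms and lower precision
print(f"Combined: up to {hardware_gain * software_gain}x")   # 100x
```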