Latest news with #MI355X


Time of India
18 hours ago
AMD unveils next-gen AI chips as it takes on Nvidia: 'For the first time, we...' CEO Lisa Su says
AMD has unveiled details about its Instinct MI400 series, the company's next generation of AI chips set to ship next year. CEO Lisa Su presented the chips at a launch event in San Jose, California, emphasising their design for "rack-scale" systems crucial for powering the massive AI computations of today and tomorrow. Su also claimed that AMD's MI355X can outperform Nvidia's Blackwell chips. The MI400 series chips are designed to be assembled into a full server rack, dubbed Helios, which AMD described as a unified system capable of tying thousands of chips together, as per a report by CNBC. "For the first time, we architected every part of the rack as a unified system," Su explained, highlighting Helios as "really a rack that functions like a single, massive compute engine."

OpenAI CEO Sam Altman endorses AMD chips

A significant endorsement came from OpenAI CEO Sam Altman, who appeared on stage with Su. Altman expressed confidence in the new chips: "When you first started telling me about the specs, I was like, there's no way, that just sounds totally crazy. It's gonna be an amazing thing," said Altman, whose company is a customer of Nvidia chips. OpenAI also confirmed it will be integrating the AMD chips.

What is different in AMD's newest AI chips

This "rack-scale" approach is vital for hyperscale AI clusters that span entire data centers, catering to the enormous power demands of cloud providers and developers of large language models, the report said. Su directly compared Helios to Nvidia's upcoming Vera Rubin racks, signaling AMD's intent to challenge its main rival head-on. AMD's rack-scale technology aims to put its latest chips squarely in competition with Nvidia's Blackwell chips, which already offer configurations integrating 72 graphics-processing units. Nvidia currently holds a near-monopoly in the market for big data center GPUs, partly due to its early lead in developing essential AI software like CUDA.
OpenAI, notably a significant Nvidia customer, has been providing feedback to AMD on its MI400 roadmap. AMD is positioning the MI400 chips, along with this year's MI355X chips, as a more cost-effective alternative to Nvidia's offerings. Su reiterated that AMD's MI355X can outperform Nvidia's Blackwell chips despite Nvidia's proprietary CUDA software advantage, citing AMD's "really strong hardware" and the "tremendous progress" made by open software frameworks.
Yahoo
a day ago
AMD (AMD) Unveils 1,400W MI355X AI GPU to Challenge Nvidia's Blackwell
AMD (AMD, Financials) officially launched its Instinct MI355X GPU accelerator Wednesday, showcasing a massive leap in compute power and energy demands as it competes with Nvidia's Blackwell Ultra B300.

The MI355X is part of AMD's new CDNA 4 architecture and is optimized for AI inference. With support for FP4, FP6, FP8, and FP16 precision, the MI355X delivers up to 20.1 PFLOPS in FP4/FP6 workloads and 10.1 PFLOPS in FP8, ahead of Nvidia's B300 at 15 FP4 PFLOPS. To support this performance, the MI355X consumes 1,400W peak, nearly doubling the 750W required by its predecessor, the MI300X. AMD expects some users may still air-cool the chip, but liquid cooling is the standard. The GPU includes 288 GB of HBM3E memory with bandwidth reaching 8 TB/s. A scaled 8-way configuration brings system-level performance to 161 PFLOPS (FP4) and 80.5 PFLOPS (FP8).

While raw compute marks a win on paper, AMD still trails Nvidia in deployment scale and software ecosystem. Pegatron is reportedly preparing a 128-way MI350X system, but Nvidia remains dominant in large-scale AI training clusters. AMD's Chief Technology Officer Mark Papermaster said zettascale supercomputing by 2035 will require processors consuming up to 2,000W each. He projected that future AI systems may need nuclear-scale power, up to 500 MW per machine. This article first appeared on GuruFocus.
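As a quick sanity check on the quoted figures, the 8-way system numbers should be roughly eight times the per-GPU numbers; the small gap on FP8 (80.5 quoted vs. a naive 80.8) presumably reflects rounding or a slightly lower sustained per-GPU rate. A minimal sketch:

```python
# Back-of-the-envelope check of the 8-way MI355X scaling figures quoted above.
# Per-GPU numbers are taken from the article; system totals should be ~8x.
PFLOPS_FP4_PER_GPU = 20.1
PFLOPS_FP8_PER_GPU = 10.1
GPUS = 8

fp4_system = PFLOPS_FP4_PER_GPU * GPUS  # ~160.8, which the article rounds to 161
fp8_system = PFLOPS_FP8_PER_GPU * GPUS  # ~80.8, vs. the quoted 80.5
print(f"8-way FP4: {fp4_system:.1f} PFLOPS, FP8: {fp8_system:.1f} PFLOPS")
```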
Yahoo
a day ago
AMD introduces new AI Infrastructure and accelerators
AMD has unveiled its vision for an open AI ecosystem at the Advancing AI 2025 event, highlighting a comprehensive integrated AI platform. The event also featured contributions from industry majors such as Meta, OpenAI, and Microsoft, who discussed their collaborations with AMD to advance AI solutions.

Key announcements included the launch of the AMD Instinct MI350 Series accelerators, featuring the MI350X and MI355X models, which promise a fourfold increase in AI compute performance and a 35-fold improvement in inferencing capabilities compared to previous generations. The MI355X also offers significant price-performance advantages, generating up to 40% more tokens per dollar than competing products.

The company showcased its open-standards rack-scale AI infrastructure, already deployed with the MI350 Series accelerators and 5th Gen AMD EPYC processors in hyperscaler environments such as Oracle Cloud Infrastructure, with broader availability expected in the second half of 2025. Additionally, the company previewed its next-generation AI rack, "Helios," which will utilise the upcoming MI400 Series GPUs, anticipated to deliver up to ten times more performance for inference tasks.

The latest version of AMD's open-source AI software stack, ROCm 7, was introduced to support the growing demands of generative AI and high-performance computing. ROCm 7 features enhanced compatibility with industry-standard frameworks and new development tools to facilitate AI development.

AMD's MI350 Series has achieved a 38-fold improvement in energy efficiency, surpassing its five-year target, and the company has set a new goal for 2030 to achieve a 20-fold increase in rack-scale energy efficiency, according to the company. Additionally, the company has introduced the AMD Developer Cloud to support developers in AI projects.
AMD chair and CEO Dr Lisa Su said: 'AMD is driving AI innovation at an unprecedented pace, highlighted by the launch of our AMD Instinct MI350 series accelerators, advances in our next generation AMD 'Helios' rack-scale solutions, and growing momentum for our ROCm open software stack. We are entering the next phase of AI, driven by open standards, shared innovation and AMD's expanding leadership across a broad ecosystem of hardware and software partners who are collaborating to define the future of AI.'

Meta highlighted its use of MI300X for Llama 3 and Llama 4 inference, while OpenAI's CEO Sam Altman emphasised the importance of optimised hardware and software in AI infrastructure. Oracle Cloud Infrastructure announced its adoption of AMD's open rack-scale AI infrastructure, and other partners, including HUMAIN, Microsoft, Cohere, Red Hat, Astera Labs, and Marvell, shared their initiatives to enhance AI capabilities in collaboration with AMD. Recently, AMD acquired the Untether AI team, known for developing energy-efficient and fast AI inference chips for edge environments and enterprise data centres, according to CRN.

"AMD introduces new AI Infrastructure and accelerators" was originally created and published by Verdict, a GlobalData owned brand.


Channel Post MEA
a day ago
AMD Unveils Vision For Open AI Ecosystem
AMD delivered its comprehensive, end-to-end integrated AI platform vision and introduced its open, scalable rack-scale AI infrastructure built on industry standards at its 2025 Advancing AI event. Dr. Lisa Su, chairman and CEO of AMD, emphasized the company's role in accelerating AI innovation. 'We are entering the next phase of AI, driven by open standards, shared innovation and AMD's expanding leadership across a broad ecosystem of hardware and software partners who are collaborating to define the future of AI,' Su said.

AMD announced a broad portfolio of hardware, software and solutions to power the full spectrum of AI:

- AMD unveiled the Instinct MI350 Series GPUs, setting a new benchmark for performance, efficiency and scalability in generative AI and high-performance computing. The MI350 Series, consisting of both Instinct MI350X and MI355X GPUs and platforms, delivers a 4x generation-on-generation AI compute increase and a 35x generational leap in inferencing, paving the way for transformative AI solutions across industries. MI355X also delivers significant price-performance gains, generating up to 40% more tokens-per-dollar compared to competing solutions.
- AMD demonstrated end-to-end, open-standards rack-scale AI infrastructure, already rolling out with AMD Instinct MI350 Series accelerators, 5th Gen AMD EPYC processors and AMD Pensando Pollara NICs in hyperscaler deployments such as Oracle Cloud Infrastructure (OCI), and set for broad availability in 2H 2025.
- AMD also previewed its next-generation AI rack, called 'Helios.' It will be built on the next-generation AMD Instinct MI400 Series GPUs, which compared to the previous generation are expected to deliver up to 10x more performance running inference on Mixture of Experts models, along with the 'Zen 6'-based AMD EPYC 'Venice' CPUs and AMD Pensando 'Vulcano' NICs.
- The latest version of the AMD open-source AI software stack, ROCm 7, is engineered to meet the growing demands of generative AI and high-performance computing workloads while dramatically improving developer experience across the board. ROCm 7 features improved support for industry-standard frameworks, expanded hardware compatibility and new development tools, drivers, APIs and libraries to accelerate AI development and deployment.
- The Instinct MI350 Series exceeded AMD's five-year goal to improve the energy efficiency of AI training and high-performance computing nodes by 30x, ultimately delivering a 38x improvement. AMD also unveiled a new 2030 goal to deliver a 20x increase in rack-scale energy efficiency from a 2024 base year, enabling a typical AI model that today requires more than 275 racks to be trained in fewer than one fully utilized rack by 2030, using 95% less electricity.
- AMD also announced the broad availability of the AMD Developer Cloud for the global developer and open-source communities. Purpose-built for rapid, high-performance AI development, it gives users access to a fully managed cloud environment with the tools and flexibility to get started with AI projects and grow without limits. With ROCm 7 and the AMD Developer Cloud, AMD is lowering barriers and expanding access to next-gen compute. Strategic collaborations with leaders like Hugging Face, OpenAI and Grok are proving the power of co-developed, open solutions.

Broad Partner Ecosystem Showcases AI Progress Powered by AMD

Today, seven of the 10 largest model builders and AI companies are running production workloads on Instinct accelerators. Among those companies are Meta, OpenAI, Microsoft and xAI, who joined AMD and other partners at Advancing AI to discuss how they are working with AMD on AI solutions to train today's leading AI models, power inference at scale and accelerate AI exploration and development:

- Meta detailed how Instinct MI300X is broadly deployed for Llama 3 and Llama 4 inference. Meta shared excitement for MI350 and its compute power, performance-per-TCO and next-generation memory. Meta continues to collaborate closely with AMD on AI roadmaps, including plans for the Instinct MI400 Series platform.
- OpenAI CEO Sam Altman discussed the importance of holistically optimized hardware, software and algorithms, and OpenAI's close partnership with AMD on AI infrastructure, with research and GPT models on Azure in production on MI300X, as well as deep design engagements on MI400 Series platforms.
- Oracle Cloud Infrastructure (OCI) is among the first industry leaders to adopt the AMD open rack-scale AI infrastructure with AMD Instinct MI355X GPUs. OCI leverages AMD CPUs and GPUs to deliver balanced, scalable performance for AI clusters, and announced it will offer zettascale AI clusters accelerated by the latest AMD Instinct processors with up to 131,072 MI355X GPUs to enable customers to build, train and run inference on AI at scale.
- HUMAIN discussed its landmark agreement with AMD to build open, scalable, resilient and cost-efficient AI infrastructure leveraging the full spectrum of computing platforms only AMD can provide.
- Microsoft announced Instinct MI300X is now powering both proprietary and open-source models in production on Azure.
- Cohere shared that its high-performance, scalable Command models are deployed on Instinct MI300X, powering enterprise-grade LLM inference with high throughput, efficiency and data privacy.
- Red Hat described how its expanded collaboration with AMD enables production-ready AI environments, with AMD Instinct GPUs on Red Hat OpenShift AI delivering powerful, efficient AI processing across hybrid cloud environments.
- Astera Labs highlighted how the open UALink ecosystem accelerates innovation and delivers greater value to customers, and shared plans to offer a comprehensive portfolio of UALink products to support next-generation AI infrastructure.
- Marvell joined AMD to highlight its collaboration as part of the UALink Consortium developing an open interconnect, bringing the ultimate flexibility for AI infrastructure.
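The 20x rack-scale efficiency goal and the "95% less electricity" figure are the same claim stated two ways, since a 20x efficiency gain implies using 1/20 of the energy for the same training run. A trivial check:

```python
# AMD's 2030 goal pairs a 20x rack-scale energy-efficiency gain with
# "95% less electricity" for the same work; these are consistent:
# energy used scales as 1 / efficiency_gain.
efficiency_gain = 20
electricity_saved = 1 - 1 / efficiency_gain
print(f"{electricity_saved:.0%} less electricity")  # prints "95% less electricity"
```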


Business Insider
a day ago
AMD Launches New Line of MI350 AI chips
Earlier today, AMD (AMD) launched its new MI350 line of AI chips and shared details about its next-generation MI400 GPUs at its Advancing AI event in San Jose. The MI350X and MI355X are built to compete with Nvidia's (NVDA) Blackwell chips by offering four times more AI compute power and 35 times better inferencing than AMD's previous generation. Each MI350 chip includes 288GB of HBM3E memory, more than Nvidia's 192GB per chip, although Nvidia pairs two chips for a total of 384GB. AMD will also offer MI350 platforms that combine up to 8 GPUs (2.3TB of memory), with air cooling for setups of up to 64 GPUs and liquid cooling for larger setups of up to 128 GPUs.

Furthermore, AMD previewed its MI400 chips, which will launch in 2026. These will offer up to 432GB of faster HBM4 memory and speeds of up to 19.6TB per second to compete with Nvidia's upcoming GB300 Blackwell Ultra and Rubin AI chips. To help developers access its GPUs, AMD launched the AMD Developer Cloud, which allows users to access MI300 and MI350 GPUs online without needing to buy them. This is similar to Nvidia's DGX Cloud Lepton service, launched last month.

However, AMD's stock performance has lagged behind Nvidia's. AMD is down about 24% over the past year and 0.2% year-to-date, while Nvidia has gained 19% over the past year and 7% year-to-date. Moreover, both companies were impacted by the U.S. export ban on AI chips to China: AMD expects an $800 million hit, while Nvidia has written down $4.5 billion and anticipates missing out on $8 billion in sales this quarter.

Is AMD a Buy, Sell, or Hold?

Turning to Wall Street, analysts have a Moderate Buy consensus rating on AMD stock based on 22 Buys, 10 Holds, and zero Sells assigned in the past three months. Furthermore, the average AMD price target of $127.93 per share implies 7.4% upside potential.
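The 8-GPU platform memory figure follows directly from the per-chip HBM3E capacity; a quick check of the arithmetic:

```python
# The 8-GPU MI350 platform memory quoted above follows from per-chip capacity:
# 8 x 288 GB = 2,304 GB, which the article rounds to 2.3 TB.
per_gpu_gb = 288
platform_gb = 8 * per_gpu_gb
print(platform_gb, "GB, i.e. about", platform_gb / 1000, "TB")
```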