
AMD Unveils Vision For Open AI Ecosystem
At its 2025 Advancing AI event, AMD presented its comprehensive, end-to-end integrated AI platform vision and introduced open, scalable rack-scale AI infrastructure built on industry standards.
Dr. Lisa Su, chairman and CEO of AMD, emphasized the company's role in accelerating AI innovation. 'We are entering the next phase of AI, driven by open standards, shared innovation and AMD's expanding leadership across a broad ecosystem of hardware and software partners who are collaborating to define the future of AI,' Su said.
AMD announced a broad portfolio of hardware, software and solutions to power the full spectrum of AI:
- AMD unveiled the Instinct MI350 Series GPUs, setting a new benchmark for performance, efficiency and scalability in generative AI and high-performance computing. The MI350 Series, consisting of both Instinct MI350X and MI355X GPUs and platforms, delivers a 4x generation-on-generation AI compute increase and a 35x generational leap in inferencing, paving the way for transformative AI solutions across industries. The MI355X also delivers significant price-performance gains, generating up to 40% more tokens per dollar compared with competing solutions.
- AMD demonstrated end-to-end, open-standards rack-scale AI infrastructure, already rolling out with AMD Instinct MI350 Series accelerators, 5th Gen AMD EPYC processors and AMD Pensando Pollara NICs in hyperscaler deployments such as Oracle Cloud Infrastructure (OCI), and set for broad availability in 2H 2025.
- AMD also previewed its next-generation AI rack, called 'Helios.' It will be built on the next-generation AMD Instinct MI400 Series GPUs, which are expected to deliver up to 10x more performance than the previous generation when running inference on Mixture of Experts models, alongside 'Zen 6'-based AMD EPYC 'Venice' CPUs and AMD Pensando 'Vulcano' NICs.
- The latest version of the AMD open-source AI software stack, ROCm 7, is engineered to meet the growing demands of generative AI and high-performance computing workloads while dramatically improving the developer experience. ROCm 7 features improved support for industry-standard frameworks, expanded hardware compatibility and new development tools, drivers, APIs and libraries to accelerate AI development and deployment (see the sketch after this list).
- The Instinct MI350 Series exceeded AMD's five-year goal to improve the energy efficiency of AI training and high-performance computing nodes by 30x, ultimately delivering a 38x improvement. AMD also unveiled a new 2030 goal to deliver a 20x increase in rack-scale energy efficiency from a 2024 base year, enabling a typical AI model that today requires more than 275 racks to be trained in fewer than one fully utilized rack by 2030, using 95% less electricity (see the worked arithmetic after this list).
- AMD also announced the broad availability of the AMD Developer Cloud for the global developer and open-source communities. Purpose-built for rapid, high-performance AI development, it gives users access to a fully managed cloud environment with the tools and flexibility to get started with AI projects and grow without limits. With ROCm 7 and the AMD Developer Cloud, AMD is lowering barriers and expanding access to next-gen compute. Strategic collaborations with leaders like Hugging Face, OpenAI and Grok are proving the power of co-developed, open solutions.
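To make the ROCm framework-support point concrete, the minimal sketch below shows how a ROCm build of PyTorch, one of the industry-standard frameworks referenced above, exposes AMD Instinct GPUs through the familiar torch.cuda API. This is an illustrative example rather than AMD sample code; it assumes a ROCm-enabled PyTorch installation and at least one visible AMD GPU.

```python
# Illustrative sketch only: assumes a ROCm build of PyTorch and a visible AMD GPU.
import torch

if torch.cuda.is_available():                      # True on ROCm builds when an AMD GPU is detected
    print("GPU:", torch.cuda.get_device_name(0))
    print("HIP runtime:", torch.version.hip)        # populated only on ROCm builds of PyTorch
    x = torch.randn(4096, 4096, device="cuda")      # "cuda" maps to the AMD GPU under ROCm
    y = x @ x                                       # matmul dispatched to the Instinct accelerator
    print("Result computed on:", y.device)
else:
    print("No ROCm-visible GPU found; falling back to CPU.")
```

Because ROCm builds of PyTorch reuse the CUDA-style device namespace, existing training and inference scripts typically need little or no modification to run on Instinct hardware.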
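The electricity figure in the 2030 goal follows directly from the stated 20x rack-scale energy-efficiency target. The short worked example below uses only the numbers quoted above and is illustrative arithmetic, not a projection model.

```python
# Worked arithmetic using only the figures quoted above (illustrative only).
baseline_racks = 275        # racks needed today to train the reference model (AMD figure)
efficiency_gain = 20        # targeted rack-scale energy-efficiency improvement, 2024 -> 2030

energy_fraction = 1 / efficiency_gain        # the same training run would use 1/20 of the energy
electricity_savings = 1 - energy_fraction    # 0.95 -> the "95% less electricity" claim
print(f"Energy required in 2030: {energy_fraction:.0%} of today's")
print(f"Electricity savings: {electricity_savings:.0%}")

# The rack-count reduction (275+ racks down to fewer than one fully utilized rack)
# additionally reflects higher per-rack performance and density, not energy efficiency alone.
```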
Broad Partner Ecosystem Showcases AI Progress Powered by AMD
Today, seven of the 10 largest model builders and AI companies are running production workloads on Instinct accelerators. Among those companies are Meta, OpenAI, Microsoft and xAI, who joined AMD and other partners at Advancing AI to discuss how they are working with AMD on AI solutions to train today's leading AI models, power inference at scale and accelerate AI exploration and development:
- Meta detailed how Instinct MI300X is broadly deployed for Llama 3 and Llama 4 inference. Meta shared excitement for MI350 and its compute power, performance-per-TCO and next-generation memory, and continues to collaborate closely with AMD on AI roadmaps, including plans for the Instinct MI400 Series platform.
- OpenAI CEO Sam Altman discussed the importance of holistically optimized hardware, software and algorithms, and OpenAI's close partnership with AMD on AI infrastructure, with research and GPT models on Azure in production on MI300X, as well as deep design engagements on MI400 Series platforms.
- Oracle Cloud Infrastructure (OCI) is among the first industry leaders to adopt the AMD open rack-scale AI infrastructure with AMD Instinct MI355X GPUs. OCI leverages AMD CPUs and GPUs to deliver balanced, scalable performance for AI clusters, and announced it will offer zettascale AI clusters accelerated by the latest AMD Instinct processors, with up to 131,072 MI355X GPUs, enabling customers to build, train and run inference on AI models at scale.
- HUMAIN discussed its landmark agreement with AMD to build open, scalable, resilient and cost-efficient AI infrastructure leveraging the full spectrum of computing platforms only AMD can provide.
- Microsoft announced that Instinct MI300X is now powering both proprietary and open-source models in production on Azure.
- Cohere shared that its high-performance, scalable Command models are deployed on Instinct MI300X, powering enterprise-grade LLM inference with high throughput, efficiency and data privacy.
- Red Hat described how its expanded collaboration with AMD enables production-ready AI environments, with AMD Instinct GPUs on Red Hat OpenShift AI delivering powerful, efficient AI processing across hybrid cloud environments.
- Astera Labs highlighted how the open UALink ecosystem accelerates innovation and delivers greater value to customers, and shared plans to offer a comprehensive portfolio of UALink products to support next-generation AI infrastructure.
- Marvell joined AMD to highlight its collaboration as part of the UALink Consortium, developing an open interconnect that brings the ultimate flexibility to AI infrastructure.