AMD unveils new Radeon, Ryzen & AI PC innovations at Computex

Techday NZ | 21-05-2025

AMD has announced a series of updates to its product portfolio, introducing new entries to the Ryzen and Radeon lines as well as developments centred on AI-powered PCs.
At Computex 2025 in Taipei, the company presented the Radeon RX 9060 XT graphics cards, the Radeon AI PRO R9700 workstation graphics cards, and the Ryzen Threadripper 9000 Series and 9000 WX-Series processors. AMD executives outlined how these releases are positioned for gaming, professional workstations, and AI development.
The Radeon RX 9060 XT graphics cards, based on the AMD RDNA 4 architecture, will be available with either 8GB or 16GB of GDDR6 memory. According to AMD, these cards deliver double the ray tracing throughput of the previous generation and are aimed at smooth 1440p gaming. The 8GB model will start at USD $299 and the 16GB version at USD $349, with availability from board partners expected later in the year.
Jack Huynh, Senior Vice President and General Manager, Computing and Graphics Group at AMD, commented on the scale of the product introductions, stating, "These announcements underscore our commitment to continue delivering industry-leading innovation across our product portfolio. The Radeon RX 9060 XT and Radeon AI PRO R9700 bring the performance and AI capabilities of RDNA 4 to workstations and gamers all around the world, while our new Ryzen Threadripper 9000 Series sets the new standard for high-end desktops and professional workstations. Together, these solutions represent our vision for empowering creators, gamers, and professionals with the performance and efficiency to push boundaries and drive creativity."
The Radeon RX 9060 XT, designed for demanding gaming environments, features 32 RDNA 4 compute units. AMD reports that the model supports accelerated ray tracing, aided by its increased throughput and by FidelityFX Super Resolution 4 (FSR 4), the company's machine learning upscaling technology. FSR 4 is designed to raise both frame rates and visual fidelity across a wide range of rendering conditions.
AMD's newly launched Radeon AI PRO R9700 GPU is designed for professional AI development and workstation tasks. With 32GB of memory, 64 compute units, and PCIe Gen 5 support, the graphics card is aimed at data-heavy workflows such as local AI inference, model fine-tuning, and scalable compute in multi-GPU configurations. The company claims the second-generation AI accelerators in this card offer up to twice the throughput of the previous generation.
Availability for the Radeon AI PRO R9700 is set for July 2025, with AMD indicating ongoing efforts to expand high-performance GPU acceleration to more AI and compute workloads through expanded AMD ROCm on Radeon support.
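AMD does not spell out in this announcement what "ROCm on Radeon" support looks like in code, but as a rough illustration: frameworks such as PyTorch ship ROCm builds that reuse the familiar CUDA-style device API, so a local inference workflow typically begins with a device check like the hedged sketch below. The package choice and driver setup here are assumptions for illustration, not details from AMD's announcement.

```python
# Minimal sketch: verifying that a ROCm-enabled PyTorch build can see a
# Radeon GPU before attempting local inference. Assumes a PyTorch wheel
# built for ROCm is installed; ROCm builds reuse the torch.cuda namespace.
import torch

if torch.version.hip is not None and torch.cuda.is_available():
    for i in range(torch.cuda.device_count()):
        print(f"GPU {i}: {torch.cuda.get_device_name(i)}")
else:
    print("No ROCm-visible GPU found; check the driver and ROCm installation.")
```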
The Ryzen Threadripper 9000 Series and 9000 WX-Series processors form the latest chapter in AMD's workstation strategy. Built on the Zen 5 architecture, the chips reach record core counts, topping out with the Ryzen Threadripper PRO 9995WX and its 96 cores and 192 threads. The processors offer up to 384MB of L3 cache and 128 PCIe 5.0 lanes, features oriented towards resource-intensive scenarios such as VFX rendering, physics simulation, and large-scale AI model development. Enterprise-grade AMD PRO Technologies are integrated to enhance security, manageability, and platform stability.
System integrators and major manufacturers including Dell, HP, Lenovo and Supermicro are expected to offer products equipped with the new Ryzen Threadripper PRO 9000 WX-Series processors later this year. DIY and retail platforms for the 9000 Series are scheduled to follow in July 2025.
AMD is also continuing its partnership approach in the AI PC segment. One element of this is the new ASUS Expert P Series Copilot+ PCs, which can be configured with AMD Ryzen AI PRO 300 Series processors offering over 50 TOPS of NPU performance. These units are aimed at fast AI-enhanced productivity, enterprise security, and manageability for corporate environments.
S.Y. Hsu, Co-CEO of ASUS, stated, "We're proud to deepen our collaboration with AMD as we usher in a new era of AI-powered computing. With the addition of the new Expert series — built from the ground up to revolutionise performance and efficiency for the modern workplace — to our broad AI PC portfolio, and commitment to innovation, we aim to deliver next-gen AI experiences that empower users everywhere."
Luca Rossi, President, Intelligent Devices Group, Lenovo, added, "At Lenovo, we're committed to delivering AI PCs that are not only powerful, but truly personal and productive. Our long-standing collaboration with AMD continues to drive this vision forward — from high-performance laptops to innovative workstations. Together, we're enabling faster, smarter computing experiences for every kind of user. We're especially excited about what's coming next in our ThinkStation P8 workstation, where AMD's latest high-performance Ryzen Threadripper PRO processors will unlock new possibilities for creators and professionals alike."
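The article does not describe the software path to the NPU performance mentioned above. As a hedged illustration only: AMD's Ryzen AI stack is commonly driven through ONNX Runtime, where the NPU surfaces as an execution provider. The provider name "VitisAIExecutionProvider" and the model file below are assumptions made for this sketch, not details from the announcement.

```python
# Hedged sketch: checking for a Ryzen AI NPU execution provider in ONNX
# Runtime. "VitisAIExecutionProvider" is an assumption based on AMD's
# Ryzen AI software stack; "model.onnx" is a placeholder model file.
import onnxruntime as ort

providers = ort.get_available_providers()
print("Available execution providers:", providers)

if "VitisAIExecutionProvider" in providers:
    # Prefer the NPU, falling back to CPU for unsupported operators.
    session = ort.InferenceSession(
        "model.onnx",
        providers=["VitisAIExecutionProvider", "CPUExecutionProvider"],
    )
else:
    session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
```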


Related Articles

HostPapa launches dedicated server hosting with five tiers

Techday NZ | 4 days ago

HostPapa has expanded its product portfolio to include Dedicated Server Hosting, aimed at businesses with advanced hosting requirements. The new service comes in five performance tiers, each offering a range of hardware configurations intended to support more demanding workloads. Businesses can choose between managed and unmanaged server options, both designed to deliver on performance, security, and reliability.

Jamie Opalchuk, Founder and Chief Executive Officer of HostPapa, stated, "HostPapa is known for our powerful hosting options with top-tier customer support. Our new Dedicated Server Hosting plans offer affordable performance and high customer control that was previously out of reach for many small and medium-sized businesses."

Hardware featured in the dedicated server plans includes high-performance Intel and AMD processors, NVMe storage, broad bandwidth allocations, and memory options up to 256GB. The managed plans include cPanel access, enhanced security functionality, and technical support, while unmanaged options give customers maximum configuration flexibility.

Corey Hammond, Chief Marketing Officer at HostPapa, commented, "Our product team has carefully designed these plans for businesses that have outgrown traditional shared or VPS hosting. Whether it's hosting high-traffic websites, running data-intensive applications, or supporting resource-intensive business solutions, our dedicated servers deliver the performance, security, and reliability businesses need to succeed."

The plans are backed by HostPapa's 30-day money-back guarantee and round-the-clock assistance from the PapaSquad customer support team. Additional benefits for managed server customers include advanced server control panels, security enhancements such as SSL certificates, and several backup storage configurations. The managed tier is intended for businesses seeking to outsource server upkeep, security updates, and performance management.

Marian Onofrei, Vice President of Operations, said, "For managed server customers, we handle the bulk of server maintenance, security updates, and performance optimisation, allowing businesses to focus on their core operations rather than IT management. For unmanaged customers who prefer greater control, our team is still available to provide guidance and support whenever needed."

The new Dedicated Server Hosting plans are available now to businesses running high-traffic or resource-intensive operations. HostPapa, founded in 2006 and based in Burlington, Ontario, Canada, is a web hosting and cloud services provider for small businesses around the globe, offering enterprise-grade solutions traditionally out of reach for smaller businesses in a one-stop shop, backed by around-the-clock multilingual customer support from a team of experts.

Red Hat expands partner support for Enterprise Linux 10 release

Techday NZ | 21-05-2025

Red Hat has announced support from its partner ecosystem for Red Hat Enterprise Linux 10, expanding capabilities for hybrid cloud and artificial intelligence workloads. The latest offering from Red Hat is positioned to enable organisations to manage skills needs, reduce inconsistencies, and improve application development speed.

Red Hat Enterprise Linux 10 is backed by a catalogue of thousands of certified partner solutions and applications, with coverage in areas including artificial intelligence, networking, and security. The operating system now extends across all major public cloud providers, following collaborations with Amazon Web Services, Google Cloud, and Microsoft Azure. This development delivers cloud offerings tailored to each hyperscaler environment.

Red Hat Enterprise Linux 10 introduces a fully-supported image mode, which provides independent software vendors, independent hardware vendors, and original equipment manufacturers with a container-native method for rapidly building, deploying, and managing the operating system. This approach is designed to help partners accelerate their time to market and reduce development costs for solutions across on-premises, hybrid, and edge environments.

To further support partners, Red Hat is offering partner-validated products within the Red Hat Ecosystem Catalog. These products are tested by partners themselves to confirm software and hardware compatibility with Red Hat Enterprise Linux 10. This initiative brings hundreds of certified cloud instances, software, and hardware platforms to the market for use with the operating system.

Stefanie Chiras, Senior Vice President, Partner Ecosystem Success at Red Hat, commented, "Red Hat Enterprise Linux remains the backbone of hybrid cloud innovation, and our growing partner ecosystem continues to be the catalyst for maximizing the power of Linux across on-premises, cloud and edge environments. With the innovations delivered in Red Hat Enterprise Linux 10, our partners bring critical capabilities, optimisations and expertise that allow organisations to meet the dynamic demands of AI, security and intelligent operations in the hybrid cloud."

Several key technology partners shared their perspectives on the release.

Raghu Nambiar, Corporate Vice President, Data Center Ecosystems and Solutions at AMD, stated, "Our collaboration with Red Hat has been pivotal in pushing the boundaries of enterprise computing. AMD EPYC CPUs and AMD Instinct GPUs are engineered to support the advanced capabilities of Red Hat Enterprise Linux 10, enabling exceptional performance and scalability for AI, cloud, and edge workloads. We're excited to work together, creating a seamless and robust solution that empowers organisations to innovate faster and drive their business forward."

Craig Connors, Vice President and Chief Technology Officer, Cisco Security Business Group, highlighted the deployment advantages, saying, "With image mode for Red Hat Enterprise Linux 10 running on Cisco UCS X-Series managed through Cisco Intersight, customers can build a single trusted Red Hat Enterprise Linux image and roll it out securely from core datacenters to thousands of edge locations. Early adopters are cutting OS deployment times while meeting zero-trust mandates through hardware-rooted Secure Boot and SBOM attestation. Cisco and Red Hat together give enterprises a faster, safer runway for AI-driven, container-native workloads wherever they need to run."

Lauren Engebretson, Director, Compute Solutions and Enablement at HPE, underlined the solution's role at the edge, saying, "In today's race to harness AI and scale at the edge, IT organisations face a critical challenge: transforming vast data streams into instant action wherever they reside. HPE servers certified on Red Hat Enterprise Linux 10 deliver the breakthrough solution—providing not just a reliable foundation, but intelligent infrastructure that dramatically accelerates insights and response times. This powerhouse combination unlocks value across hybrid cloud environments, turning even your most remote IT into your most valuable competitive assets. The future belongs to those who can make decisions at the speed of opportunity. HPE and Red Hat help ensure you're first to seize it."

Hillery Hunter, Chief Technology Officer, IBM Infrastructure, said, "In order for enterprises to unlock their unique data with AI, they must deliberately design an AI and data stack built for enterprise scale, trust, and performance from the ground up, and that starts with Linux. IBM Cloud's open platform approach, now with support for Red Hat Enterprise Linux 10, enables clients to build and scale robust capabilities for enterprise transformation, including AI and data applications."

Mark Skarpness, Vice President, System Software Engineering, Intel, said, "Whether it's enabling workloads at the edge or propelling AI use cases, organisations require a consistent and versatile operating system to give them the flexibility and choice needed to be successful. Intel hardware accelerators and CPUs supported on Red Hat Enterprise Linux 10 help organisations fast-track innovation on a more reliable foundation, optimised for performance and security."

Scott Tease, Vice President and General Manager, Infrastructure Solutions Product Group, Lenovo, commented, "As a leading provider of Red Hat Enterprise Linux 10 certified systems, Lenovo is delivering adaptable, open source IT infrastructure that helps customers accelerate digital transformation, reduce spending and mitigate complexity through automation. With Red Hat, Lenovo offers a full, open hybrid cloud portfolio that provides customers with the right solution for today's distributed workloads and tomorrow's evolving requirements, empowering them to change the economics of the data center and grow with confidence."

John Fanelli, Vice President, Enterprise Software, NVIDIA, stated, "NVIDIA and Red Hat share a long history of collaboration to bring the world's most advanced technologies to enterprises through open platforms. Full-stack NVIDIA accelerated computing and software, paired with Red Hat Enterprise Linux 10 and supported in the NVIDIA Enterprise AI reference design, provides enterprises with a powerful foundation for leveraging AI to transform data into insights, and insights into action."

Koji Higashitani, Senior Manager, Mobile Solutions Business Division, Panasonic Connect, said, "For the most demanding remote environments, reliability and security are paramount to safeguard sensitive data and meet the stringent requirements of industries such as the federal and defence sectors. Certifying Panasonic TOUGHBOOK devices on Red Hat Enterprise Linux 10 delivers enhanced flexibility and security with durable devices on a trusted, resilient operating system foundation, meeting the highest standards of security and performance."

John Ronco, Senior Vice President, Product, SiFive, commented, "SiFive is committed to providing organisations with open, flexible and scalable RISC-V solutions and we are collaborating with Red Hat to bring the power of Red Hat Enterprise Linux 10 to the RISC-V community. The Red Hat Enterprise Linux 10 developer preview on the SiFive HiFive Premier™ P550 is designed to streamline and accelerate RISC-V innovation for the next generation of enterprise and AI applications."

Red Hat leads launch of llm-d to scale generative AI in clouds

Techday NZ | 21-05-2025

Red Hat has introduced llm-d, an open source project aimed at enabling large-scale distributed generative AI inference across hybrid cloud environments.

The llm-d initiative is the result of collaboration between Red Hat and a group of founding contributors comprising CoreWeave, Google Cloud, IBM Research and NVIDIA, with additional support from AMD, Cisco, Hugging Face, Intel, Lambda, Mistral AI, and academic partners from the University of California, Berkeley, and the University of Chicago. The new project utilises vLLM-based distributed inference, a native Kubernetes architecture, and AI-aware network routing to facilitate robust and scalable AI inference clouds that can meet demanding production service-level objectives. Red Hat asserts that this will support any AI model, on any hardware accelerator, in any cloud environment.

Brian Stevens, Senior Vice President and AI CTO at Red Hat, stated, "The launch of the llm-d community, backed by a vanguard of AI leaders, marks a pivotal moment in addressing the need for scalable gen AI inference, a crucial obstacle that must be overcome to enable broader enterprise AI adoption. By tapping the innovation of vLLM and the proven capabilities of Kubernetes, llm-d paves the way for distributed, scalable and high-performing AI inference across the expanded hybrid cloud, supporting any model, any accelerator, on any cloud environment and helping realize a vision of limitless AI potential."

Addressing the scaling needs of generative AI, Red Hat points to a Gartner forecast suggesting that by 2028, more than 80% of data centre workload accelerators will be deployed principally for inference rather than model training. This projected shift highlights the necessity for efficient and scalable inference solutions as AI models become larger and more complex.

The llm-d project's architecture is designed to overcome the practical limitations of centralised AI inference, such as prohibitive costs and latency. Its main features include vLLM for rapid model support, Prefill and Decode Disaggregation for distributing computational workloads, KV Cache Offloading based on LMCache to shift memory loads onto standard storage, and AI-Aware Network Routing for optimised request scheduling. The project also supports Google Cloud's Tensor Processing Units and NVIDIA's Inference Xfer Library (NIXL) for high-performance data transfer.

The community formed around llm-d comprises both technology vendors and academic institutions, each seeking to address efficiency, cost, and performance at scale for AI-powered applications. Several of these partners provided statements regarding their involvement and the intended impact of the project.

Ramine Roane, Corporate Vice President, AI Product Management at AMD, said, "AMD is proud to be a founding member of the llm-d community, contributing our expertise in high-performance GPUs to advance AI inference for evolving enterprise AI needs. As organisations navigate the increasing complexity of generative AI to achieve greater scale and efficiency, AMD looks forward to meeting this industry demand through the llm-d project."

Shannon McFarland, Vice President, Cisco Open Source Program Office & Head of Cisco DevNet, remarked, "The llm-d project is an exciting step forward for practical generative AI. llm-d empowers developers to programmatically integrate and scale generative AI inference, unlocking new levels of innovation and efficiency in the modern AI landscape. Cisco is proud to be part of the llm-d community, where we're working together to explore real-world use cases that help organisations apply AI more effectively and efficiently."

Chen Goldberg, Senior Vice President, Engineering, CoreWeave, commented, "CoreWeave is proud to be a founding contributor to the llm-d project and to deepen our long-standing commitment to open source AI. From our early partnership with EleutherAI to our ongoing work advancing inference at scale, we've consistently invested in making powerful AI infrastructure more accessible. We're excited to collaborate with an incredible group of partners and the broader developer community to build a flexible, high-performance inference engine that accelerates innovation and lays the groundwork for open, interoperable AI."

Mark Lohmeyer, Vice President and General Manager, AI & Computing Infrastructure, Google Cloud, stated, "Efficient AI inference is paramount as organisations move to deploying AI at scale and deliver value for their users. As we enter this new age of inference, Google Cloud is proud to build upon our legacy of open source contributions as a founding contributor to the llm-d project. This new community will serve as a critical catalyst for distributed AI inference at scale, helping users realise enhanced workload efficiency with increased optionality for their infrastructure resources."

Jeff Boudier, Head of Product, Hugging Face, said, "We believe every company should be able to build and run their own models. With vLLM leveraging the Hugging Face transformers library as the source of truth for model definitions, a wide diversity of models large and small is available to power text, audio, image and video AI applications. Eight million AI Builders use Hugging Face to collaborate on over two million AI models and datasets openly shared with the global community. We are excited to support the llm-d project to enable developers to take these applications to scale."

Priya Nagpurkar, Vice President, Hybrid Cloud and AI Platform, IBM Research, commented, "At IBM, we believe the next phase of AI is about efficiency and scale. We're focused on unlocking value for enterprises through AI solutions they can deploy effectively. As a founding contributor to llm-d, IBM is proud to be a key part of building a differentiated hardware agnostic distributed AI inference platform. We're looking forward to continued contributions towards the growth and success of this community to transform the future of AI inference."

Bill Pearson, Vice President, Data Center & AI Software Solutions and Ecosystem, Intel, said, "The launch of llm-d will serve as a key inflection point for the industry in driving AI transformation at scale, and Intel is excited to participate as a founding supporter. Intel's involvement with llm-d is the latest milestone in our decades-long collaboration with Red Hat to empower enterprises with open source solutions that they can deploy anywhere, on their platform of choice. We look forward to further extending and building AI innovation through the llm-d community."

Eve Callicoat, Senior Staff Engineer, ML Platform, Lambda, commented, "Inference is where the real-world value of AI is delivered, and llm-d represents a major leap forward. Lambda is proud to support a project that makes state-of-the-art inference accessible, efficient, and open."

Ujval Kapasi, Vice President, Engineering AI Frameworks, NVIDIA, stated, "The llm-d project is an important addition to the open source AI ecosystem and reflects NVIDIA's support for collaboration to drive innovation in generative AI. Scalable, highly performant inference is key to the next wave of generative and agentic AI. We're working with Red Hat and other supporting partners to foster llm-d community engagement and industry adoption, helping accelerate llm-d with innovations from NVIDIA Dynamo such as NIXL."

Ion Stoica, Professor and Director of Sky Computing Lab, University of California, Berkeley, remarked, "We are pleased to see Red Hat build upon the established success of vLLM, which originated in our lab to help address the speed and memory challenges that come with running large AI models. Open source projects like vLLM, and now llm-d anchored in vLLM, are at the frontier of AI innovation tackling the most demanding AI inference requirements and moving the needle for the industry at large."

Junchen Jiang, Professor at the LMCache Lab, University of Chicago, added, "Distributed KV cache optimisations, such as offloading, compression, and blending, have been a key focus of our lab, and we are excited to see llm-d leveraging LMCache as a core component to reduce time to first token as well as improve throughput, particularly in long-context inference."
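The article names vLLM as the inference engine at the heart of llm-d. For orientation only, the sketch below shows plain single-node vLLM generation; llm-d's contributions described above (Kubernetes-native scheduling, prefill/decode disaggregation, LMCache-based KV cache offloading, AI-aware routing) sit in layers above this API and are not shown. The model name is a placeholder assumption.

```python
# Minimal single-node sketch of the vLLM engine that llm-d builds on.
# llm-d adds distributed scheduling and KV-cache management on top of
# this; none of that orchestration appears here.
from vllm import LLM, SamplingParams

llm = LLM(model="facebook/opt-125m")  # placeholder Hugging Face model
params = SamplingParams(temperature=0.8, max_tokens=64)

outputs = llm.generate(["Explain KV-cache offloading in one sentence."], params)
for out in outputs:
    print(out.outputs[0].text)
```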
