Red Hat expands partner support for Enterprise Linux 10 release

Techday NZ · 21-05-2025
Red Hat has announced support from its partner ecosystem for Red Hat Enterprise Linux 10, expanding capabilities for hybrid cloud and artificial intelligence workloads.
The latest release is positioned to help organisations address skills gaps, reduce inconsistencies across environments, and accelerate application development. Red Hat Enterprise Linux 10 is backed by a catalogue of thousands of certified partner solutions and applications, with coverage in areas including artificial intelligence, networking, and security.
The operating system now extends across all major public cloud providers, following collaborations with Amazon Web Services, Google Cloud, and Microsoft Azure. This development delivers cloud offerings tailored to each hyperscaler environment.
Red Hat Enterprise Linux 10 introduces a fully supported image mode, which provides independent software vendors, independent hardware vendors, and original equipment manufacturers with a container-native method for rapidly building, deploying, and managing the operating system. This approach is designed to help partners accelerate their time to market and reduce development costs for solutions across on-premises, hybrid, and edge environments.
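Image mode is built around the open source bootc approach, in which the operating system is assembled and shipped with standard container tooling. As a loose, hedged illustration of that workflow, the sketch below writes a minimal Containerfile and builds it with Podman; the rhel10/rhel-bootc base image name is an assumption extrapolated from the published RHEL 9 naming, and the registry and tags are placeholders.

```python
# Hedged sketch of a container-native OS build in the image mode style.
# Assumptions: podman is installed, you are logged in to the Red Hat
# registry, and the rhel10/rhel-bootc image follows the naming used for
# RHEL 9 (an assumption, not a confirmed image name).
import subprocess
from pathlib import Path

containerfile = """\
FROM registry.redhat.io/rhel10/rhel-bootc:latest
# Layer site-specific content on top of the base OS image.
RUN dnf -y install nginx && dnf clean all
"""

Path("Containerfile").write_text(containerfile)

# Build and push the OS image with standard container tooling.
subprocess.run(
    ["podman", "build", "-t", "quay.io/example/custom-rhel:latest", "."],
    check=True,
)
subprocess.run(
    ["podman", "push", "quay.io/example/custom-rhel:latest"],
    check=True,
)
```

On a host installed from such an image, updates arrive by building and pushing a new image rather than patching machines individually, which is the property the announcement credits with speeding partner delivery and reducing development costs.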
To further support partners, Red Hat is offering partner-validated products within the Red Hat Ecosystem Catalog. These products are tested by partners themselves to confirm software and hardware compatibility with Red Hat Enterprise Linux 10. This initiative brings hundreds of certified cloud instances, software, and hardware platforms to the market for use with the operating system.
Stefanie Chiras, Senior Vice President, Partner Ecosystem Success at Red Hat, commented, "Red Hat Enterprise Linux remains the backbone of hybrid cloud innovation, and our growing partner ecosystem continues to be the catalyst for maximizing the power of Linux across on-premises, cloud and edge environments. With the innovations delivered in Red Hat Enterprise Linux 10, our partners bring critical capabilities, optimisations and expertise that allow organisations to meet the dynamic demands of AI, security and intelligent operations in the hybrid cloud."
Several key technology partners shared their perspectives on the release. Raghu Nambiar, Corporate Vice President, Data Center Ecosystems and Solutions at AMD, stated, "Our collaboration with Red Hat has been pivotal in pushing the boundaries of enterprise computing. AMD EPYC CPUs and AMD Instinct GPUs are engineered to support the advanced capabilities of Red Hat Enterprise Linux 10, enabling exceptional performance and scalability for AI, cloud, and edge workloads. We're excited to work together, creating a seamless and robust solution that empowers organisations to innovate faster and drive their business forward."
Craig Connors, Vice President and Chief Technology Officer, Cisco Security Business Group, highlighted the deployment advantages, saying, "With image mode for Red Hat Enterprise Linux 10 running on Cisco UCS X-Series managed through Cisco Intersight, customers can build a single trusted Red Hat Enterprise Linux image and roll it out securely from core datacenters to thousands of edge locations. Early adopters are cutting OS deployment times while meeting zero-trust mandates through hardware-rooted Secure Boot and SBOM attestation. Cisco and Red Hat together give enterprises a faster, safer runway for AI-driven, container-native workloads wherever they need to run."
Lauren Engebretson, Director, Compute Solutions and Enablement at HPE, underlined the solution's role at the edge, saying, "In today's race to harness AI and scale at the edge, IT organisations face a critical challenge: transforming vast data streams into instant action wherever they reside. HPE servers certified on Red Hat Enterprise Linux 10 deliver the breakthrough solution—providing not just a reliable foundation, but intelligent infrastructure that dramatically accelerates insights and response times. This powerhouse combination unlocks value across hybrid cloud environments, turning even your most remote IT into your most valuable competitive assets. The future belongs to those who can make decisions at the speed of opportunity. HPE and Red Hat help ensure you're first to seize it."
Hillery Hunter, Chief Technology Officer, IBM Infrastructure, said, "In order for enterprises to unlock their unique data with AI, they must deliberately design an AI and data stack built for enterprise scale, trust, and performance from the ground up, and that starts with Linux. IBM Cloud's open platform approach, now with support for Red Hat Enterprise Linux 10, enables clients to build and scale robust capabilities for enterprise transformation, including AI and data applications."
Mark Skarpness, Vice President, System Software Engineering, Intel, said, "Whether it's enabling workloads at the edge or propelling AI use cases, organisations require a consistent and versatile operating system to give them the flexibility and choice needed to be successful. Intel hardware accelerators and CPUs supported on Red Hat Enterprise Linux 10 help organisations fast-track innovation on a more reliable foundation, optimised for performance and security."
Scott Tease, Vice President and General Manager, Infrastructure Solutions Product Group, Lenovo, commented, "As a leading provider of Red Hat Enterprise Linux 10 certified systems, Lenovo is delivering adaptable, open source IT infrastructure that helps customers accelerate digital transformation, reduce spending and mitigate complexity through automation. With Red Hat, Lenovo offers a full, open hybrid cloud portfolio that provides customers with the right solution for today's distributed workloads and tomorrow's evolving requirements, empowering them to change the economics of the data center and grow with confidence."
John Fanelli, Vice President, Enterprise Software, NVIDIA, stated, "NVIDIA and Red Hat share a long history of collaboration to bring the world's most advanced technologies to enterprises through open platforms. Full-stack NVIDIA accelerated computing and software, paired with Red Hat Enterprise Linux 10 and supported in the NVIDIA Enterprise AI reference design, provides enterprises with a powerful foundation for leveraging AI to transform data into insights, and insights into action."
Koji Higashitani, Senior Manager, Mobile Solutions Business Division, Panasonic Connect, said, "For the most demanding remote environments, reliability and security are paramount to safeguard sensitive data and meet the stringent requirements of industries such as the federal and defence sectors. Certifying Panasonic TOUGHBOOK devices on Red Hat Enterprise Linux 10 delivers enhanced flexibility and security with durable devices on a trusted, resilient operating system foundation, meeting the highest standards of security and performance."
John Ronco, Senior Vice President, Product, SiFive, commented, "SiFive is committed to providing organisations with open, flexible and scalable RISC-V solutions and we are collaborating with Red Hat to bring the power of Red Hat Enterprise Linux 10 to the RISC-V community. The Red Hat Enterprise Linux 10 developer preview on the SiFive HiFive Premier™ P550 is designed to streamline and accelerate RISC-V innovation for the next generation of enterprise and AI applications."
Related Articles

Red Hat named leader for multicloud container platforms by Forrester

Techday NZ · 2 days ago

Red Hat has been named a Leader in The Forrester Wave: Multicloud Container Platforms, Q3 2025 report, based on its performance in the multicloud container platform market.

Forrester's assessment

The Forrester Wave report evaluated several vendors in the multicloud container platform market, focusing on both the current offering and company strategy categories. Red Hat was highlighted for scoring the highest among all evaluated vendors in these categories. The report described OpenShift as "a good fit for enterprises that prioritise support, reliability, and advanced engineering, particularly in regulated industries such as financial services." It also observed that "customers consistently praise Red Hat's enterprise-grade offerings and support, especially for managed services." Forrester noted Red Hat's capabilities in Kubernetes, saying, "Red Hat excels in core Kubernetes areas, offering robust operator options, powerful management, GitOps automation, and flexible interfaces via a GUI or command-line interface (CLI). OpenShift's SLAs of 99.95% for public cloud managed-service versions showcase Red Hat's capacity to engineer capabilities beyond those of native public cloud services." The report additionally stated, "Developers will find just about everything they need with Red Hat's above-par scores in developer experience, service and application catalogues, microservices, service mesh, DevOps automation, and integration."

Technical focus and AI integration

Beyond container management, Red Hat is extending its efforts in hybrid cloud solutions. The company is leveraging its stack - including Red Hat Enterprise Linux - to improve support for generative AI development and operations, with an emphasis on model serving and advanced inference.

Customer priorities and market needs

The report noted that OpenShift has demonstrated suitability for organisations operating in highly regulated industries, such as financial services, where support and reliability are considered essential. The platform's managed services, which offer defined service-level agreements, were singled out for positive feedback from customers. The importance of a strong enterprise support model for public cloud deployments was also highlighted in the analysis.

Leadership statement

Mike Barrett, Vice President & General Manager, Hybrid Cloud Platforms, Red Hat, said: "Red Hat continues to provide the leading platform for organisations navigating the complexities of multicloud environments. Being named a Leader in The Forrester Wave™ for Multicloud Container Platforms reinforces our commitment to delivering robust, enterprise-grade solutions that empower our customers to innovate with confidence across their hybrid cloud footprints. Our focus on core Kubernetes capabilities, strong developer experience and strategic AI integrations positions us well for the evolving needs of the market. Sovereign cloud, coupled with the digital independence required to get the most from AI, have made multicloud investments a leading priority for our global customers."

Developer perspective

The Forrester evaluation recognised Red Hat's OpenShift for the breadth of its support for developers, including tooling for DevOps automation, service catalogues, and integration features. The solution was described as delivering above-average scores in developer experience, microservices, and service mesh capabilities.
Market context

As enterprise IT organisations continue to adopt hybrid and multicloud strategies, platforms capable of delivering consistent operations and supporting evolving application needs are increasingly important. The 99.95% public cloud managed-service SLA cited by Forrester underlines the attention to reliability and service continuity expected in this sector. Red Hat continues to broaden the reach of its hybrid cloud portfolio, applying the foundation of Red Hat Enterprise Linux to support both traditional enterprise workloads and emerging technologies such as generative AI.

AMD brings 128B LLMs to Windows PCs with Ryzen AI Max+ 395

Techday NZ · 2 days ago

AMD has announced a free software update enabling 128 billion parameter Large Language Models (LLMs) to be run locally on Windows PCs powered by AMD Ryzen AI Max+ 395 128GB processors, a capability previously only accessible through cloud infrastructure. With this update, AMD is allowing users to access and deploy advanced AI models locally, bypassing the need for third-party infrastructure, which can provide greater control, lower ongoing costs, and improved privacy. The company says this shift addresses growing demand for scalable and private AI processing at the client device level. Previously, models of this scale, approaching the size of GPT-3, were operable only within large-scale data centres. The new functionality comes through an upgrade to AMD Variable Graphics Memory, included with the upcoming Adrenalin Edition 25.8.1 WHQL drivers. This upgrade leverages the 96GB of Variable Graphics Memory available on the Ryzen AI Max+ 395 128GB machine, supporting the execution of memory-intensive LLM workloads directly on Windows PCs.

A broader deployment

This update also marks the AMD Ryzen AI Max+ 395 (128GB) as the first Windows AI PC processor to run Meta's Llama 4 Scout 109B model, with full vision and Model Context Protocol (MCP) support. The processor can hold all 109 billion parameters in memory, although the mixture-of-experts (MoE) architecture means only 17 billion parameters are active at any given time. The company reports output rates of up to 15 tokens per second for this model. According to AMD, the ability to handle such large models locally is important for users who require high-capacity AI assistants on the go. The system also supports flexible quantisation and can run a range of LLMs in the GGUF format, from compact 1B parameter models to Mistral Large. This isn't just about bringing cloud-scale compute to the desktop; it's about expanding the range of options for how AI can be used, built, and deployed locally. The company further states that performance in MoE models like Llama 4 Scout correlates with the number of active parameters, while dense models depend on the total parameter count. The memory capacity of the AMD Ryzen AI Max+ platform also allows users to opt for higher-precision models, up to 16-bit, when trade-offs between quality and performance are warranted.

Context and workflow

AMD also highlights the importance of context size when working with LLMs. The AMD Ryzen AI Max+ 395 (128GB), equipped with the new driver, can run Meta's Llama 4 Scout at a context length of 256,000 tokens (with Flash Attention on and KV Cache Q8), significantly exceeding the 4,096 token default in many applications. Examples provided include demonstrations where an LLM summarises extensive documents, such as an SEC EDGAR filing, requiring over 19,000 tokens to be held in context. Another example cited the summarisation of a research paper from the arXiv database, needing more than 21,000 tokens from query initiation to final output. AMD notes that more complex workflows might require even greater context capacity, particularly for multi-tool and agentic scenarios. AMD states that while occasional users may manage with a context length of 32,000 tokens and a lightweight model, more demanding use cases will benefit from hardware and software that support expansive contexts, as offered by the AMD Ryzen AI Max+ 395 128GB.
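For readers who want to try the kind of local, large-context GGUF inference described above, the sketch below uses the open source llama-cpp-python bindings. The model path, context size, and input document are placeholders, and the snippet is a generic illustration rather than AMD's own tooling or drivers.

```python
# Minimal local GGUF inference sketch using llama-cpp-python.
# Assumes: `pip install llama-cpp-python` built with GPU support, and a
# quantised GGUF model already downloaded to ./models/ (placeholder).
from llama_cpp import Llama

llm = Llama(
    model_path="models/example-model-q4.gguf",  # placeholder path
    n_ctx=32768,      # request a 32K context window; large-memory
                      # systems can go far higher, as AMD describes
    n_gpu_layers=-1,  # offload all layers to the GPU
    flash_attn=True,  # enable flash attention, as in AMD's examples
    verbose=False,
)

# Feed a long document and ask for a summary; the whole document must
# fit inside the context window together with the generated answer.
document = open("filing.txt", encoding="utf-8").read()
result = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You summarise documents concisely."},
        {"role": "user", "content": f"Summarise this filing:\n\n{document}"},
    ],
    max_tokens=512,
)
print(result["choices"][0]["message"]["content"])
```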
Looking ahead, AMD points to an expanding set of agentic workflows as LLMs and AI agents become more widely adopted for local inferencing. Industry trends indicate that model developers, including Meta, Google, and Mistral, are increasingly integrating tool-calling capabilities into their training runs to facilitate local personal assistant use cases. AMD also advises caution when enabling tool access for large language models, noting the potential for unpredictable system behaviour and outcomes. Users are advised to install LLM implementations only from trusted sources. The AMD Ryzen AI Max+ 395 (128GB) is now positioned to support most models available through popular local inference tools, offering flexible deployment and model selection options for users with high-performance local AI requirements.

How private LLMs are delivering real business benefits

Techday NZ · 3 days ago

While many organisations remain focused on experimenting with public AI platforms, a growing number are discovering that the real value of AI doesn't always require starting from scratch. Instead, they're finding success by putting to use capabilities that already exist within widely adopted platforms. From Microsoft 365 to Adobe's creative suite and cloud-based ecosystems like Salesforce, AI features are now embedded across enterprise applications. These out-of-the-box tools can streamline workflows, automate repetitive tasks, and enhance productivity without the need for costly overhauls. However, a true AI-related game changer - particularly for organisations concerned about data sovereignty and privacy - lies in private Large Language Models (LLMs).

The rise of private LLMs

A private LLM is an AI system that operates entirely within the boundaries of an organisation's secure digital environment. Unlike public LLMs, which rely on broad web-based datasets and internet connectivity, private models are trained exclusively on internal data and do not share information externally. These models can be deployed on-premises or via secure cloud platforms such as Microsoft Azure or Amazon Web Services (AWS). The advantage is that they bring the power of generative AI directly to the fingertips of employees, without compromising sensitive information. Consider the example of uploading internal policy documents, technical manuals, or sales resources into a private LLM. Rather than spending hours combing through shared drives or intranet pages, staff can pose a simple natural language question and receive an accurate, context-aware answer in seconds.

Transforming the way knowledge is accessed

This transformation is already taking shape across a range of sectors. In law firms, for example, where navigating vast collections of case law and legal precedents is a daily necessity, private LLMs allow legal professionals to locate relevant rulings or procedural guidance with remarkable speed. By reducing research time, firms can improve both client responsiveness and billable efficiency. Similarly, contact centres are embracing private LLMs to enhance customer service. Agents can submit real-time queries on behalf of clients and receive detailed, relevant answers almost instantly. Some AI systems can even listen in on conversations and proactively surface documents or information that might help resolve a query, eliminating the need for manual lookups altogether.

Fine-tuning for precision and context

While the promise of private LLMs is significant, getting the most out of them may require a degree of preparation, as organisations may need to "tidy up" their data inputs. This might mean updating documents and titles to better reflect the content's purpose and intent. These changes help the LLM to quickly and correctly identify and contextualise materials. Models may also need to be trained on company-specific jargon, abbreviations, or industry terminology to reduce ambiguity and ensure accurate outputs. While not as intensive as training a model from scratch, these adjustments are crucial for maximising performance.

A security-first approach

For many senior executives, particularly in regulated industries, concerns about data security have been a roadblock to broader AI adoption. Public AI tools like ChatGPT raise the risk of confidential information leaking into external systems, either inadvertently or through user error. Private LLMs, by design, mitigate this risk.
Because the model operates within an organisation's controlled infrastructure, data remains protected. Nothing is shared with third parties, and compliance with data governance policies can be maintained. This secure-by-design feature makes private LLMs not just a convenience, but a strategic imperative for companies handling sensitive information, be it legal, financial, or personal.

Education is key to adoption

As with any transformative technology, successful implementation doesn't end with the technical rollout. Employee education plays a critical role in ensuring that AI-enhanced applications are used safely and effectively. Staff need to understand not only how to use these tools but also the boundaries. They need to know what information can be entered, how data is stored, and why private models are different from their public counterparts. Importantly, organisations must emphasise the dangers of uploading proprietary data into public AI systems, which may retain or reuse that information in unintended ways. A single lapse in judgment can have serious consequences. As generative AI continues to mature, organisations face a crucial decision: chase the hype or focus on meaningful, secure, and sustainable value. Private LLMs may lack the flashiness of public AI demos, but they are quietly becoming indispensable tools for knowledge-intensive businesses. By leveraging internal data, respecting privacy boundaries, and empowering staff through intelligent interfaces, companies are turning their own information into a competitive asset.
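To make the document-query workflow described in this piece concrete, the sketch below pairs a simple TF-IDF retriever with a locally hosted GGUF model via the open source llama-cpp-python bindings. The folder of policy documents, the model path, and the sample question are all placeholders, and a production deployment would typically use embedding-based retrieval and access controls on top of this pattern.

```python
# Minimal private document-query sketch: TF-IDF retrieval plus a local
# LLM. All paths, file names, and the model are placeholders.
from pathlib import Path

from llama_cpp import Llama
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Load internal documents from a local folder (placeholder location).
docs = {p.name: p.read_text(encoding="utf-8") for p in Path("policies").glob("*.txt")}
names, texts = list(docs), list(docs.values())

# Rank documents against the question with plain TF-IDF similarity;
# real deployments typically use embedding models instead.
vectorizer = TfidfVectorizer().fit(texts)
doc_vecs = vectorizer.transform(texts)

llm = Llama(model_path="models/private-model.gguf", n_ctx=8192, verbose=False)

def answer(question: str) -> str:
    q_vec = vectorizer.transform([question])
    best = cosine_similarity(q_vec, doc_vecs).argmax()
    # Keep the retrieved context small enough to fit the context window.
    context = texts[best][:4000]
    prompt = (
        f"Answer using only this internal document ({names[best]}):\n"
        f"{context}\n\nQuestion: {question}\nAnswer:"
    )
    out = llm(prompt, max_tokens=256)
    return out["choices"][0]["text"].strip()

print(answer("How many days of annual leave do new employees receive?"))
```

Because both the documents and the model stay on local infrastructure, nothing in this flow leaves the organisation's environment, which is the security property the article highlights.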
