
Latest news with #RedHatAIInferenceServer


Mid East Info

25-05-2025


Red Hat Unlocks Generative AI for Any Model and Any Accelerator Across the Hybrid Cloud with Red Hat AI Inference Server

Red Hat AI Inference Server, powered by vLLM and enhanced with Neural Magic technologies, delivers faster, higher-performing and more cost-efficient AI inference across the hybrid cloud.

BOSTON – RED HAT SUMMIT – MAY 2025 – Red Hat, the world's leading provider of open source solutions, announced Red Hat AI Inference Server, a significant step towards democratizing generative AI (gen AI) across the hybrid cloud. A new offering within Red Hat AI, the enterprise-grade inference server is born from the powerful vLLM community project and enhanced by Red Hat's integration of Neural Magic technologies, offering greater speed, accelerator efficiency and cost-effectiveness to help deliver Red Hat's vision of running any gen AI model on any AI accelerator in any cloud environment. Whether deployed standalone or as an integrated component of Red Hat Enterprise Linux AI (RHEL AI) and Red Hat OpenShift AI, the platform empowers organizations to more confidently deploy and scale gen AI in production.

Inference is the critical execution engine of AI, where pre-trained models translate data into real-world impact. It is the pivotal point of user interaction, demanding swift and accurate responses. As gen AI models grow in complexity and production deployments scale, inference can become a significant bottleneck, consuming hardware resources, degrading responsiveness and inflating operational costs. Robust inference servers are no longer a luxury but a necessity for unlocking the potential of AI at scale. Red Hat addresses these challenges with Red Hat AI Inference Server: an open inference solution engineered for high performance and equipped with leading model compression and optimization tools. It aims to give organizations dramatically more responsive user experiences and greater freedom in their choice of AI accelerators, models and IT environments.

vLLM: Extending inference innovation

Red Hat AI Inference Server builds on the industry-leading vLLM project, started at the University of California, Berkeley in mid-2023. The community project delivers high-throughput gen AI inference, support for large input contexts, multi-GPU model acceleration, continuous batching and more. vLLM's broad support for publicly available models, coupled with its day-zero integration of leading frontier models including DeepSeek, Gemma, Llama, Mistral and Phi, as well as open, enterprise-grade reasoning models like Llama Nemotron, positions it as a de facto standard for future AI inference innovation. Leading frontier model providers are increasingly embracing vLLM, solidifying its critical role in shaping gen AI's future.
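The throughput features described above can be exercised directly through vLLM's Python API. The following is a minimal sketch, assuming vLLM is installed (pip install vllm) and a supported accelerator is available; the model identifier is an illustrative placeholder, not a Red Hat-specific artifact.

```python
from vllm import LLM, SamplingParams

# vLLM schedules these prompts together on the accelerator
# (continuous batching), which is what drives its high throughput.
prompts = [
    "Summarize the benefits of hybrid cloud in one sentence.",
    "Explain AI inference to a new engineer in two sentences.",
]

sampling = SamplingParams(temperature=0.7, max_tokens=128)

# The model name is a placeholder; any Hugging Face causal LM that
# vLLM supports can be substituted. Setting tensor_parallel_size > 1
# would shard the model across multiple GPUs.
llm = LLM(model="mistralai/Mistral-7B-Instruct-v0.3")

for output in llm.generate(prompts, sampling):
    print(output.outputs[0].text)
```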
Introducing Red Hat AI Inference Server

Red Hat AI Inference Server packages the leading innovation of the vLLM project into enterprise-grade capabilities. It is available as a standalone containerized offering or as part of both RHEL AI and Red Hat OpenShift AI. Across any deployment environment, Red Hat AI Inference Server provides users with a hardened, supported distribution of vLLM, along with:

  • Intelligent LLM compression tools for dramatically reducing the size of both foundational and fine-tuned AI models, minimizing compute consumption while preserving and potentially enhancing model accuracy.
  • An optimized model repository, hosted in the Red Hat AI organization on Hugging Face, offering instant access to a validated and optimized collection of leading AI models ready for inference deployment, helping to improve efficiency by 2-4x without compromising model accuracy.
  • Red Hat's enterprise support and decades of expertise in bringing community projects to production environments.
  • Third-party support for even greater deployment flexibility, enabling Red Hat AI Inference Server to be deployed on non-Red Hat Linux and Kubernetes platforms pursuant to Red Hat's third-party support policy.

A running server is consumed like any other inference endpoint, as sketched below.
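Because vLLM-based servers expose an OpenAI-compatible HTTP API, applications can query a deployed instance with standard client libraries. This is a minimal sketch assuming a locally running server; the URL, API key and model name are illustrative placeholders.

```python
from openai import OpenAI

# Point the standard OpenAI client at the local inference server;
# the endpoint and key below are placeholders for a real deployment.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="mistralai/Mistral-7B-Instruct-v0.3",  # placeholder model id
    messages=[{"role": "user", "content": "What is continuous batching?"}],
    max_tokens=128,
)
print(response.choices[0].message.content)
```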
Red Hat's vision: Any model, any accelerator, any cloud

The future of AI must be defined by limitless opportunity, not constrained by infrastructure silos. Red Hat sees a horizon where organizations can deploy any model, on any accelerator, across any cloud, delivering an exceptional, more consistent user experience without exorbitant costs. To unlock the true potential of gen AI investments, enterprises require a universal inference platform: a standard for more seamless, high-performance AI innovation, both today and in the years to come. Just as Red Hat pioneered the open enterprise by transforming Linux into the bedrock of modern IT, the company is now poised to architect the future of AI inference. vLLM has the potential to be a linchpin for standardized gen AI inference, and Red Hat is committed to building a thriving ecosystem around not just the vLLM community but also llm-d for distributed inference at scale. The vision is clear: regardless of the AI model, the underlying accelerator or the deployment environment, Red Hat intends to make vLLM the definitive open standard for inference across the new hybrid cloud.

Red Hat Summit

Join the Red Hat Summit keynotes to hear the latest from Red Hat executives, customers and partners:

  • Modernized infrastructure meets enterprise-ready AI – Tuesday, May 20, 8-10 a.m. EDT (YouTube)
  • Hybrid cloud evolves to deliver enterprise innovation – Wednesday, May 21, 8-9:30 a.m. EDT (YouTube)

Supporting Quotes

Joe Fernandes, vice president and general manager, AI Business Unit, Red Hat
'Inference is where the real promise of gen AI is delivered, where user interactions are met with fast, accurate responses delivered by a given model, but it must be delivered in an effective and cost-efficient way. Red Hat AI Inference Server is intended to meet the demand for high-performing, responsive inference at scale while keeping resource demands low, providing a common inference layer that supports any model, running on any accelerator in any environment.'

Ramine Roane, corporate vice president, AI Product Management, AMD
'In collaboration with Red Hat, AMD delivers out-of-the-box solutions to drive efficient generative AI in the enterprise. Red Hat AI Inference Server enabled on AMD Instinct™ GPUs equips organizations with enterprise-grade, community-driven AI inference capabilities backed by fully validated hardware accelerators.'

Jeremy Foster, senior vice president and general manager, Cisco
'AI workloads need speed, consistency, and flexibility, which is exactly what the Red Hat AI Inference Server is designed to deliver. This innovation offers Cisco and Red Hat opportunities to continue to collaborate on new ways to make AI deployments more accessible, efficient and scalable, helping organizations prepare for what's next.'

Bill Pearson, vice president, Data Center & AI Software Solutions and Ecosystem, Intel
'Intel is excited to collaborate with Red Hat to enable Red Hat AI Inference Server on Intel® Gaudi® accelerators. This integration will provide our customers with an optimized solution to streamline and scale AI inference, delivering advanced performance and efficiency for a wide range of enterprise AI applications.'

John Fanelli, vice president, Enterprise Software, NVIDIA
'High-performance inference enables models and AI agents not just to answer, but to reason and adapt in real time. With open, full-stack NVIDIA accelerated computing and Red Hat AI Inference Server, developers can run efficient reasoning at scale across hybrid clouds, and deploy with confidence using Red Hat Inference Server with the new NVIDIA Enterprise AI validated design.'

About Red Hat

Red Hat is the world's leading provider of enterprise open source software solutions, using a community-powered approach to deliver reliable and high-performing Linux, hybrid cloud, container, and Kubernetes technologies. Red Hat helps customers integrate new and existing IT applications, develop cloud-native applications, standardize on our industry-leading operating system, and automate, secure, and manage complex environments. Award-winning support, training, and consulting services make Red Hat a trusted adviser to the Fortune 500. As a strategic partner to cloud providers, system integrators, application vendors, customers, and open source communities, Red Hat can help organizations prepare for the digital future.

Forward-Looking Statements

Except for the historical information and discussions contained herein, statements contained in this press release may constitute forward-looking statements within the meaning of the Private Securities Litigation Reform Act of 1995. Forward-looking statements are based on the company's current assumptions regarding future business and financial performance. These statements involve a number of risks, uncertainties and other factors that could cause actual results to differ materially. Any forward-looking statement in this press release speaks only as of the date on which it is made. Except as required by law, the company assumes no obligation to update or revise any forward-looking statements.



Mid East Info

22-05-2025


Red Hat Optimizes Red Hat AI to Speed Enterprise AI Deployments Across Models, AI Accelerators and Clouds

Red Hat AI Inference Server, validated models and the integration of Llama Stack and Model Context Protocol help users deliver higher-performing, more consistent AI applications and agents.

Red Hat, the world's leading provider of open source solutions, today continues to deliver customer choice in enterprise AI with the introduction of Red Hat AI Inference Server, Red Hat AI third-party validated models and the integration of Llama Stack and Model Context Protocol (MCP) APIs, along with significant updates across the Red Hat AI portfolio. With these developments, Red Hat intends to further advance the capabilities organizations need to accelerate AI adoption while providing greater customer choice and confidence in generative AI (gen AI) production deployments across the hybrid cloud.

According to Forrester, open source software will be the spark for accelerating enterprise AI efforts.[1] As the AI landscape grows more complex and dynamic, Red Hat AI Inference Server and third-party validated models provide efficient model inference and a tested collection of AI models optimized for performance on the Red Hat AI platform. Coupled with the integration of new APIs for gen AI agent development, including Llama Stack and MCP, Red Hat is working to tackle deployment complexity, empowering IT leaders, data scientists and developers to accelerate AI initiatives with greater control and efficiency.

Efficient inference across the hybrid cloud with Red Hat AI Inference Server

The Red Hat AI portfolio now includes the new Red Hat AI Inference Server, providing faster, more consistent and cost-effective inference at scale across hybrid cloud environments. This key addition is integrated into the latest releases of Red Hat OpenShift AI and Red Hat Enterprise Linux AI, and is also available as a standalone offering, enabling organizations to deploy intelligent applications with greater efficiency, flexibility and performance.

Tested and optimized models with Red Hat AI third-party validated models

Red Hat AI third-party validated models, available on Hugging Face, make it easier for enterprises to find the right models for their specific needs. Red Hat AI offers a collection of validated models, as well as deployment guidance to enhance customer confidence in model performance and outcome reproducibility. Select models are also optimized by Red Hat, leveraging model compression techniques to reduce size and increase inference speed, helping to minimize resource consumption and operating costs. Additionally, the ongoing model validation process helps Red Hat AI customers stay at the forefront of optimized gen AI innovation.

Standardized APIs for AI application and agent development with Llama Stack and MCP

Red Hat AI is integrating Llama Stack, initially developed by Meta, along with Anthropic's MCP, to provide users with standardized APIs for building and deploying AI applications and agents. Currently available in developer preview in Red Hat AI, Llama Stack provides a unified API to access inference with vLLM, retrieval-augmented generation (RAG), model evaluation, guardrails and agents across any gen AI model. MCP enables models to integrate with external tools by providing a standardized interface for connecting APIs, plugins and data sources in agent workflows.
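To make the "unified API" idea concrete, here is a hypothetical sketch using the llama-stack-client Python package against a locally running Llama Stack distribution. The endpoint, model id and exact response shape are assumptions and may differ between Llama Stack releases; consult the project documentation before relying on them.

```python
from llama_stack_client import LlamaStackClient

# Placeholder endpoint for a locally running Llama Stack distribution.
client = LlamaStackClient(base_url="http://localhost:8321")

# One API surface covers inference regardless of which backend
# (for example vLLM) actually serves the model.
response = client.inference.chat_completion(
    model_id="meta-llama/Llama-3.1-8B-Instruct",  # placeholder model id
    messages=[{"role": "user", "content": "List three uses of RAG."}],
)
print(response.completion_message.content)
```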
The latest release of Red Hat OpenShift AI (v2.20) delivers additional enhancements for building, training, deploying and monitoring both gen AI and predictive AI models at scale. These include:

  • Optimized model catalog (technology preview): provides easy access to validated Red Hat and third-party models, enables the deployment of these models on Red Hat OpenShift AI clusters through the web console interface, and manages the lifecycle of those models using Red Hat OpenShift AI's integrated registry.
  • Distributed training through the KubeFlow Training Operator: enables the scheduling and execution of InstructLab model tuning and other PyTorch-based training and tuning workloads, distributed across multiple Red Hat OpenShift nodes and GPUs, and includes remote direct memory access (RDMA) networking acceleration and optimized GPU utilization to reduce costs.
  • Feature store (technology preview): based on the upstream Kubeflow Feast project, provides a centralized repository for managing and serving data for both model training and inference, streamlining data workflows to improve model accuracy and reusability; a minimal retrieval sketch follows this list.
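The feature-store pattern is easiest to see through Feast's own Python API. This is a minimal sketch assuming a Feast feature repository already exists in the current directory (for example, created with feast init and feast apply); the feature and entity names are illustrative placeholders from the Feast quickstart, not Red Hat artifacts.

```python
from feast import FeatureStore

# Load the feature repository definitions from the current directory.
store = FeatureStore(repo_path=".")

# Fetch the latest feature values for one entity from the online
# store, exactly as a model would at inference time.
online_features = store.get_online_features(
    features=["driver_hourly_stats:avg_daily_trips"],  # placeholder feature
    entity_rows=[{"driver_id": 1001}],                 # placeholder entity
).to_dict()
print(online_features)
```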
Red Hat Enterprise Linux AI 1.5 brings new updates to Red Hat's foundation model platform for developing, testing and running large language models (LLMs). Key features in version 1.5 include:

  • Google Cloud Marketplace availability, expanding customer choice for running Red Hat Enterprise Linux AI in public cloud environments, alongside AWS and Azure, to help simplify the deployment and management of AI workloads on Google Cloud.
  • Enhanced multi-language capabilities for Spanish, German, French and Italian via InstructLab, allowing for model customization using native scripts and unlocking new possibilities for multilingual AI applications. Users can also bring their own teacher models for greater control over model customization and testing for specific use cases and languages, with future support planned for Japanese, Hindi and Korean.

The Red Hat AI InstructLab on IBM Cloud service is also now generally available. This new cloud service further streamlines the model customization process, improving scalability and user experience, empowering enterprises to make use of their unique data with greater ease and control.

Red Hat's vision: Any model, any accelerator, any cloud

The future of AI must be defined by limitless opportunity, not constrained by infrastructure silos. Red Hat sees a horizon where organizations can deploy any model, on any accelerator, across any cloud, delivering an exceptional, more consistent user experience without exorbitant costs. To unlock the true potential of gen AI investments, enterprises require a universal inference platform: a standard for more seamless, high-performance AI innovation, both today and in the years to come.

Red Hat Summit

Join the Red Hat Summit keynotes to hear the latest from Red Hat executives, customers and partners:

  • Modernized infrastructure meets enterprise-ready AI – Tuesday, May 20, 8-10 a.m. EDT (YouTube)
  • Hybrid cloud evolves to deliver enterprise innovation – Wednesday, May 21, 8-9:30 a.m. EDT (YouTube)

Supporting Quotes

Joe Fernandes, vice president and general manager, AI Business Unit, Red Hat
'Faster, more efficient inference is emerging as the newest decision point for gen AI innovation. Red Hat AI, with enhanced inference capabilities through Red Hat AI Inference Server and a new collection of validated third-party models, helps equip organizations to deploy intelligent applications where they need to, how they need to and with the components that best meet their unique needs.'

Michele Rosen, research manager, IDC
'Organizations are moving beyond initial AI explorations and are focused on practical deployments. The key to their continued success lies in the ability to be adaptable with their AI strategies to fit various environments and needs. The future of AI not only demands powerful models, but models that can be deployed with agility and cost-effectiveness. Enterprises seeking to scale their AI initiatives and deliver business value will find this flexibility absolutely essential.'

About Red Hat

Red Hat is the open hybrid cloud technology leader, delivering a trusted, consistent and comprehensive foundation for transformative IT innovation and AI applications. Its portfolio of cloud, developer, AI, Linux, automation and application platform technologies enables any application, anywhere, from the datacenter to the edge. As the world's leading provider of enterprise open source software solutions, Red Hat invests in open ecosystems and communities to solve tomorrow's IT challenges. Collaborating with partners and customers, Red Hat helps them build, connect, automate, secure and manage their IT environments, supported by consulting services and award-winning training and certification offerings.

Forward-Looking Statements

Except for the historical information and discussions contained herein, statements contained in this press release may constitute forward-looking statements within the meaning of the Private Securities Litigation Reform Act of 1995. Forward-looking statements are based on the company's current assumptions regarding future business and financial performance. These statements involve a number of risks, uncertainties and other factors that could cause actual results to differ materially. Any forward-looking statement in this press release speaks only as of the date on which it is made. Except as required by law, the company assumes no obligation to update or revise any forward-looking statements.


Techday NZ

21-05-2025


Red Hat launches enterprise AI inference server for hybrid cloud

Red Hat has introduced Red Hat AI Inference Server, an enterprise-grade offering aimed at enabling generative artificial intelligence (AI) inference across hybrid cloud environments.

The Red Hat AI Inference Server leverages the vLLM community project, initially started by the University of California, Berkeley. Through Red Hat's integration of Neural Magic technologies, the solution aims to deliver higher speed, improved efficiency across a range of AI accelerators, and reduced operational costs. The platform is designed to allow organisations to run generative AI models on any AI accelerator within any cloud infrastructure.

The solution can be deployed as a standalone containerised offering or as part of Red Hat Enterprise Linux AI (RHEL AI) and Red Hat OpenShift AI. Red Hat says this approach is intended to empower enterprises to deploy and scale generative AI in production with increased confidence.

Joe Fernandes, Vice President and General Manager for Red Hat's AI Business Unit, commented on the launch: "Inference is where the real promise of gen AI is delivered, where user interactions are met with fast, accurate responses delivered by a given model, but it must be delivered in an effective and cost-efficient way. Red Hat AI Inference Server is intended to meet the demand for high-performing, responsive inference at scale while keeping resource demands low, providing a common inference layer that supports any model, running on any accelerator in any environment."

The inference phase in AI refers to the process in which pre-trained models are used to generate outputs, a stage that can be a significant inhibitor to performance and cost efficiency if not managed appropriately. The increasing complexity and scale of generative AI models have highlighted the need for robust inference solutions capable of handling production deployments across diverse infrastructures.

The Red Hat AI Inference Server builds on the technology foundation established by the vLLM project. vLLM is known for high-throughput AI inference, the ability to handle large input contexts, acceleration across multiple GPUs, and continuous batching to enhance deployment versatility. vLLM also supports a broad range of publicly available models, including DeepSeek, Google's Gemma, Llama, Llama Nemotron, Mistral, and Phi, among others. Its integration with leading models and enterprise-grade reasoning capabilities positions it as a candidate for a standard in AI inference innovation.

The packaged enterprise offering delivers a supported and hardened distribution of vLLM, with several additional tools. These include intelligent large language model (LLM) compression utilities to reduce AI model sizes while preserving or enhancing accuracy, and an optimised model repository hosted under Red Hat AI on Hugging Face. This repository gives instant access to validated and optimised AI models tailored for inference, designed to help improve efficiency by two to four times without compromising the accuracy of results. Red Hat also provides enterprise support, drawing upon expertise in bringing community-developed technologies into production. For expanded deployment options, the Red Hat AI Inference Server can be run on non-Red Hat Linux and Kubernetes platforms in line with the company's third-party support policy.
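Models published on Hugging Face, such as those in the repository described above, can be fetched with the standard huggingface_hub library. This is a minimal sketch; the repository id is an illustrative assumption modelled on naming in the Red Hat AI organisation, so the actual model names should be checked against the org listing.

```python
from huggingface_hub import snapshot_download

# Download all files of a (hypothetical) optimised model repository
# to the local Hugging Face cache and return the local path.
path = snapshot_download(
    repo_id="RedHatAI/Llama-3.1-8B-Instruct-quantized.w8a8"  # placeholder
)
print(f"Optimised weights downloaded to: {path}")
```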
The company's stated vision is to enable a universal inference platform that can accommodate any model, run on any accelerator, and be deployed in any cloud environment. Red Hat sees the success of generative AI as relying on the adoption of such standardised inference solutions to ensure consistent user experiences without increasing costs.

Ramine Roane, Corporate Vice President of AI Product Management at AMD, said: "In collaboration with Red Hat, AMD delivers out-of-the-box solutions to drive efficient generative AI in the enterprise. Red Hat AI Inference Server enabled on AMD Instinct™ GPUs equips organizations with enterprise-grade, community-driven AI inference capabilities backed by fully validated hardware accelerators."

Jeremy Foster, Senior Vice President and General Manager at Cisco, commented on the joint opportunities provided by the offering: "AI workloads need speed, consistency, and flexibility, which is exactly what the Red Hat AI Inference Server is designed to deliver. This innovation offers Cisco and Red Hat opportunities to continue to collaborate on new ways to make AI deployments more accessible, efficient and scalable, helping organizations prepare for what's next."

Intel's Bill Pearson, Vice President of Data Center & AI Software Solutions and Ecosystem, said: "Intel is excited to collaborate with Red Hat to enable Red Hat AI Inference Server on Intel Gaudi accelerators. This integration will provide our customers with an optimized solution to streamline and scale AI inference, delivering advanced performance and efficiency for a wide range of enterprise AI applications."

John Fanelli, Vice President of Enterprise Software at NVIDIA, added: "High-performance inference enables models and AI agents not just to answer, but to reason and adapt in real time. With open, full-stack NVIDIA accelerated computing and Red Hat AI Inference Server, developers can run efficient reasoning at scale across hybrid clouds, and deploy with confidence using Red Hat Inference Server with the new NVIDIA Enterprise AI validated design."

Red Hat has stated its intent to further build upon the vLLM community and to drive development of distributed inference technologies such as llm-d, aiming to establish vLLM as an open standard for inference in hybrid cloud environments.


Techday NZ

21-05-2025


Red Hat unveils enhanced AI tools for hybrid cloud deployments

Red Hat has expanded its AI portfolio, introducing Red Hat AI Inference Server along with validated models and new API integrations, aimed at enabling more efficient enterprise AI deployments across diverse environments.

Red Hat AI Inference Server, now included in the Red Hat AI suite, provides scalable, consistent, and cost-effective inference for hybrid cloud setups. The server is integrated into the newest releases of both Red Hat OpenShift AI and Red Hat Enterprise Linux AI, while also being available as a standalone product. The offering is designed to optimise performance, flexibility, and resource usage for organisations deploying AI-driven applications.

To address the challenge many enterprises face in model selection and deployment, Red Hat has announced the availability of third-party validated AI models, accessible on Hugging Face. These models are tested to ensure optimal performance on the Red Hat AI platform. Red Hat also offers deployment guidance to assist customers, with select models benefiting from model compression techniques that reduce their size and increase inference speed. This approach is intended to minimise computational resources and operating costs, while the validation process helps customers remain current with the latest in generative AI innovation.

The company has begun integrating Llama Stack, developed by Meta, alongside Anthropic's Model Context Protocol (MCP), offering standardised APIs for building and deploying AI applications and agents. Currently available in developer preview in Red Hat AI, Llama Stack delivers a unified API that includes support for inference with vLLM, retrieval-augmented generation, model evaluation, guardrails, and agent functionality. MCP, meanwhile, enables AI models to connect with external tools using a standardised interface, facilitating API and plugin integrations during agent workflows; a minimal tool-server sketch follows below.
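To illustrate the tool-connection side of MCP, here is a minimal sketch of an MCP server built with the FastMCP helper from the official MCP Python SDK (the mcp package). The server name, tool and stub data are illustrative placeholders, and the SDK surface may evolve between releases.

```python
from mcp.server.fastmcp import FastMCP

# An MCP server exposes tools over a standardised interface that
# agent frameworks and models can discover and call.
mcp = FastMCP("inventory")  # placeholder server name


@mcp.tool()
def lookup_stock(sku: str) -> int:
    """Return the on-hand quantity for a SKU (stub data for this sketch)."""
    return {"ABC-123": 42}.get(sku, 0)


if __name__ == "__main__":
    mcp.run()  # serves over stdio by default
```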
The new version of Red Hat OpenShift AI (v2.20) introduces enhancements that support the development, training, deployment, and monitoring of both generative and predictive AI models at scale. A technology-preview model catalogue offers access to validated Red Hat and third-party models, while distributed training capabilities via the KubeFlow Training Operator enable efficient scheduling and execution of AI model tuning across multiple nodes and GPUs. This includes support for remote direct memory access (RDMA) networking and optimised GPU utilisation, reducing operational costs. A feature store based on the Kubeflow Feast project is also available in technology preview, providing a central repository for managing and serving data, intended to improve the accuracy and reusability of models.

Red Hat Enterprise Linux AI 1.5 introduces updates that extend the platform's reach and its multi-language support. The system is now available on Google Cloud Marketplace, expanding customer options for running AI workloads in public cloud platforms alongside AWS and Azure. Enhanced language capabilities for Spanish, German, French, and Italian have been added through InstructLab, enabling model customisation in these languages. Customers are also able to bring their own teacher models for detailed tuning, with support for Japanese, Hindi, and Korean planned for the future. Additionally, the Red Hat AI InstructLab on IBM Cloud service is now generally available, aimed at simplifying model customisation and improving scalability for customers wishing to use unique data sets for AI development.

Red Hat states its long-term aim is to provide a universal inference platform that allows organisations to deploy any AI model on any accelerator and across any cloud provider. The company's approach seeks to help enterprises avoid infrastructure silos and better realise the value of their investments in generative AI.

Joe Fernandes, Vice President and General Manager of the AI Business Unit at Red Hat, said: "Faster, more efficient inference is emerging as the newest decision point for gen AI innovation. Red Hat AI, with enhanced inference capabilities through Red Hat AI Inference Server and a new collection of validated third-party models, helps equip organisations to deploy intelligent applications where they need to, how they need to and with the components that best meet their unique needs."

Michele Rosen, Research Manager at IDC, commented on shifting enterprise AI needs: "Organisations are moving beyond initial AI explorations and are focused on practical deployments. The key to their continued success lies in the ability to be adaptable with their AI strategies to fit various environments and needs. The future of AI not only demands powerful models, but models that can be deployed with agility and cost-effectiveness. Enterprises seeking to scale their AI initiatives and deliver business value will find this flexibility absolutely essential."

Red Hat's recent portfolio enhancements are in line with the views outlined by Forrester, which stated that open source software will be instrumental in accelerating enterprise AI programmes.
