
Intel unveils Xeon 6 CPUs to boost GPU-driven AI systems
The new processors feature Performance-cores (P-cores) and incorporate Intel's Priority Core Turbo (PCT) technology and Intel Speed Select Technology – Turbo Frequency (Intel SST-TF). These features allow for customisable CPU core frequencies, which are expected to support GPU performance across demanding AI workloads.
One of the newly released Xeon 6 processors, the Intel Xeon 6776P, serves as the host CPU in the NVIDIA DGX B300 AI-accelerated system. The Xeon 6776P manages, orchestrates, and supports the overall AI-accelerated system within the DGX B300 architecture. With an extensive memory capacity and robust bandwidth, the processor is engineered to cater to the expanding requirements of AI models and large datasets.
Karin Eibschitz Segal, Corporate Vice President and Interim General Manager of the Data Center Group at Intel, commented, "These new Xeon SKUs demonstrate the unmatched performance of Intel Xeon 6, making it the ideal CPU for next-gen GPU-accelerated AI systems. We're thrilled to deepen our collaboration with NVIDIA to deliver one of the industry's highest-performing AI systems, helping accelerate AI adoption across industries."
According to Intel, the pairing of Priority Core Turbo and Intel SST-TF brings significant advancements to AI system efficiency. PCT allows high-priority cores to run at increased turbo frequencies for time-critical tasks, while lower-priority cores operate at base frequency. This separation ensures an optimal allocation of CPU resources, which is crucial for AI tasks that involve significant sequential or serial processing. The intent is to enable the CPU to feed data to GPUs more rapidly, improving overall system efficiency in demanding scenarios.
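To make the priority-core idea concrete, the sketch below (a minimal, illustrative example assuming a Linux host with the standard cpufreq sysfs interface) reports which cores currently have a turbo ceiling above their base frequency. In practice SST-TF is normally configured with Intel's intel-speed-select utility; this script only reads the resulting per-core limits and does not change them.

```python
# Illustrative only: report per-core base vs. current max frequency on Linux.
# Assumes the standard cpufreq sysfs interface (intel_pstate exposes
# base_frequency); SST-TF itself is configured elsewhere, e.g. with the
# intel-speed-select utility.
from pathlib import Path


def read_khz(path: Path):
    """Return an integer kHz value from a sysfs file, or None if unavailable."""
    try:
        return int(path.read_text().strip())
    except (OSError, ValueError):
        return None


def core_frequency_report():
    for cpu_dir in sorted(Path("/sys/devices/system/cpu").glob("cpu[0-9]*")):
        cpufreq = cpu_dir / "cpufreq"
        base = read_khz(cpufreq / "base_frequency")       # guaranteed base frequency
        ceiling = read_khz(cpufreq / "scaling_max_freq")  # current max (turbo) limit
        if base is None or ceiling is None:
            continue
        tag = "priority (boosted ceiling)" if ceiling > base else "standard (base ceiling)"
        print(f"{cpu_dir.name}: base {base / 1e6:.2f} GHz, max {ceiling / 1e6:.2f} GHz -> {tag}")


if __name__ == "__main__":
    core_frequency_report()
```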
The new Xeon 6 processors with P-cores are built to offer high core counts and improved single-threaded performance. Each processor offers up to 128 P-cores, facilitating a balanced distribution of intensive AI workloads across the available cores.
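As a rough illustration of spreading work across a high core count, the following sketch pins worker processes to individual cores using Linux's scheduler-affinity API. The worker function and data are placeholders for this example, not anything described in Intel's announcement.

```python
# Illustrative only: distribute a CPU-bound preprocessing job across all cores
# the process is allowed to use, pinning each worker to one core (Linux-only).
# The "work" is a placeholder for real data preparation.
import os
from multiprocessing import Pool


def pin_and_process(task):
    core, chunk = task
    os.sched_setaffinity(0, {core})            # pin this worker to a single core
    return sum(x * x for x in chunk)           # stand-in for real preprocessing


if __name__ == "__main__":
    cores = sorted(os.sched_getaffinity(0))    # e.g. up to 128 P-cores per socket
    data = [float(i) for i in range(1_000_000)]
    step = -(-len(data) // len(cores))         # ceiling division: one chunk per core
    chunks = [data[i:i + step] for i in range(0, len(data), step)]
    tasks = [(cores[i % len(cores)], chunk) for i, chunk in enumerate(chunks)]
    with Pool(processes=len(cores)) as pool:
        partials = pool.map(pin_and_process, tasks)
    print(f"{len(chunks)} chunks on {len(cores)} cores, total = {sum(partials):.3e}")
```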
Intel reports that memory speeds with the Xeon 6 processors can be up to 30% faster than competing solutions, particularly in high-capacity DRAM configurations. This is underpinned by support for the latest MRDIMM and Compute Express Link (CXL) standards, both designed to improve memory bandwidth for data-intensive applications.
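The bandwidth claim can be put in context with a simple measurement. The sketch below runs a STREAM-style "add" kernel with NumPy to estimate sustainable memory bandwidth on whatever host it runs on; it is an illustrative approximation, not the methodology behind Intel's quoted figure.

```python
# Illustrative only: estimate sustainable memory bandwidth with a STREAM-style
# "add" kernel (a = b + c). Array sizes are chosen to be far larger than CPU
# caches on typical servers; adjust n if memory is constrained.
import time

import numpy as np


def add_bandwidth_gbs(n=50_000_000, repeats=5):
    a = np.zeros(n, dtype=np.float64)
    b = np.random.rand(n)
    c = np.random.rand(n)
    best = float("inf")
    for _ in range(repeats):
        t0 = time.perf_counter()
        np.add(b, c, out=a)                    # reads b and c, writes a
        best = min(best, time.perf_counter() - t0)
    bytes_moved = 3 * n * 8                    # three float64 arrays touched
    return bytes_moved / best / 1e9


if __name__ == "__main__":
    print(f"approximate 'add' bandwidth: {add_bandwidth_gbs():.1f} GB/s")
```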
The processors also offer greater input/output capabilities, providing up to 20% more PCIe lanes than earlier generations. This feature is intended to enhance data transfer rates for input/output-intensive workloads commonly found in enterprise AI and data centre applications.
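For a sense of scale, the back-of-the-envelope calculation below converts lane counts into peak throughput using PCIe Gen5 signalling (32 GT/s per lane with 128b/130b encoding). The 80- and 96-lane totals are illustrative assumptions chosen to show a 20% increase, not figures quoted in the announcement.

```python
# Illustrative only: convert PCIe lane counts into peak throughput per direction
# using Gen5 signalling (32 GT/s per lane, 128b/130b encoding). Lane totals are
# assumptions picked to show a 20% increase, not quoted specifications.
GEN5_GT_PER_SEC = 32            # giga-transfers per second, per lane
ENCODING = 128 / 130            # 128b/130b line-code efficiency
BITS_PER_TRANSFER = 1


def bandwidth_gbs(lanes):
    bits_per_sec = lanes * GEN5_GT_PER_SEC * 1e9 * BITS_PER_TRANSFER * ENCODING
    return bits_per_sec / 8 / 1e9   # bytes per second -> GB/s


for label, lanes in [("x16 slot (one GPU)", 16),
                     ("previous-gen CPU, assumed 80 lanes", 80),
                     ("new-gen CPU, assumed 96 lanes (+20%)", 96)]:
    print(f"{label}: {bandwidth_gbs(lanes):.1f} GB/s per direction")
```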
Intel emphasises strong reliability and serviceability features in the design of these new CPUs, aiming for extended system uptime and reduced risk of disruptions to enterprise operations. Additional support comes from Intel Advanced Matrix Extensions, which enable FP16 precision arithmetic for efficient data preprocessing and critical CPU-driven tasks within AI environments.
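Whether AMX, including its FP16 tile support, is usable on a given host can be checked from user space. The sketch below, assuming a Linux system, looks for the kernel's AMX feature flags in /proc/cpuinfo; libraries built on oneDNN can use these instructions automatically when the flags are present.

```python
# Illustrative only: check for AMX support (tile, BF16, INT8, FP16) via the CPU
# feature flags the Linux kernel exposes in /proc/cpuinfo. Flag names follow
# the kernel's conventions; absence of a flag means it was not reported.
def cpu_flags():
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("flags"):
                return set(line.split(":", 1)[1].split())
    return set()


flags = cpu_flags()
for feature in ("amx_tile", "amx_bf16", "amx_int8", "amx_fp16"):
    status = "available" if feature in flags else "not reported"
    print(f"{feature:9}: {status}")
```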
These processors are aimed at enterprises looking to modernise infrastructure in anticipation of increasingly complex AI workloads. Their energy efficiency and performance characteristics target a broad array of data centre and network applications.
Related Articles


Techday NZ - 3 days ago
HPE expands ProLiant servers & AI cloud with new NVIDIA GPUs
HPE has announced a series of updates to its NVIDIA AI Computing by HPE portfolio, emphasising expanded server capabilities and deeper integration with NVIDIA AI Enterprise solutions.

New server models
The HPE ProLiant Compute range will soon include servers featuring the NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs, available in a 2U form factor. Two main configurations will be available: the HPE ProLiant DL385 Gen11 server, supporting up to two of the new GPUs in a 2U chassis, and the previously announced HPE ProLiant Compute DL380a Gen12 server, capable of using up to eight GPUs in a 4U form factor. According to HPE, the latter configuration will be shipping in September.
These servers are designed for broad enterprise applications such as generative and agentic AI, robotics and industrial AI, visual computing (including autonomous vehicles and quality control monitoring), simulation, 3D modelling, digital twins, and enterprise applications.
The HPE ProLiant Compute Gen12 servers also include hardware features aimed at enhancing security and operational efficiency, such as the HPE Integrated Lights Out (iLO) 7 Silicon Root of Trust and a secure enclave for tamper-resistant protection and quantum-resistant firmware signing. HPE estimates that its Compute Ops Management, a cloud-native tool for managing server lifecycles, can decrease IT hours for server management by up to 75 percent and reduce downtime by approximately 4.8 hours per server each year.

HPE Private Cloud AI
HPE's Private Cloud AI offering, which has been co-developed with NVIDIA, is being updated to support the latest NVIDIA GPU technologies and AI models. The next version will offer compatibility for NVIDIA RTX PRO 6000 GPUs on Gen12 servers and aims to provide seamless scalability across different GPU generations. Features will include air-gapped management for security and support for enterprise multi-tenancy.
The new release of HPE Private Cloud AI will integrate recent NVIDIA AI models, including the NVIDIA Nemotron models focused on agentic AI, Cosmos Reason vision language model (VLM) for physical AI and robotics, and the NVIDIA Blueprint for Video Search and Summarization (VSS 2.4), intended to build video analytics AI agents that can process large volumes of video data. Customers will have access to these developments through the HPE AI Essentials platform, enabling the quick deployment of NVIDIA NIM microservices and other AI tools.
Through the continued collaboration, HPE Private Cloud AI is designed to deliver an integrated solution that leverages NVIDIA's portfolio in AI-accelerated computing, networking, and software. This enables businesses to address increasing demand for AI inferencing and to accelerate the development and deployment of AI systems, maintaining high security and control over enterprise data.

Collaboration and customer impact
"HPE is committed to empowering enterprises with the tools they need to succeed in the age of AI," said Cheri Williams, senior vice president and general manager for private cloud and flex solutions at HPE. "Our collaboration with NVIDIA continues to push the boundaries of innovation, delivering solutions that unlock the value of generative, agentic and physical AI while addressing the unique demands of enterprise workloads. With the combination of HPE ProLiant servers and expanded capabilities in HPE Private Cloud AI, we're enabling organizations to embrace the future of AI with confidence and agility."
Justin Boitano, vice president of enterprise AI at NVIDIA, commented, "Enterprises need flexible, efficient infrastructure to keep pace with the demands of modern AI. With NVIDIA RTX PRO 6000 Blackwell GPUs in HPE's 2U ProLiant servers, enterprises can accelerate virtually every workload on a single, unified, enterprise-ready platform."

Availability
According to HPE, the HPE ProLiant DL385 Gen11 and HPE ProLiant Compute DL380a Gen12 servers featuring the NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs are now orderable and are set for distribution worldwide beginning 2 September 2025. Support within HPE Private Cloud AI for the latest NVIDIA Nemotron models, Cosmos Reason, and the NVIDIA Blueprint for VSS 2.4 is scheduled for release in the second half of 2025. The next generation of HPE Private Cloud AI with the updated GPU capabilities is also expected to be available during this period.


Techday NZ - 5 days ago
HPE expands AI server range with NVIDIA Blackwell GPU solutions
Hewlett Packard Enterprise has introduced several updates to its NVIDIA AI Computing by HPE portfolio, aimed at supporting enterprise clients seeking to accelerate agentic and physical AI deployment across a variety of use cases.

Server advancements
Among the headline updates, HPE has confirmed it will ship new HPE ProLiant Compute servers equipped with NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs. This includes a new 2U RTX PRO Server form factor in the DL385 Gen11 model, as well as an 8-GPU 4U configuration with the DL380a Gen12 model. According to HPE, the DL385 Gen11 supports up to two of the Blackwell Server Edition GPUs, providing an air-cooled solution suitable for datacentres coping with increasing artificial intelligence workloads. Meanwhile, the DL380a Gen12 can accommodate up to eight GPUs in a larger form factor, with shipments scheduled to begin in September 2025.
HPE highlighted that the ProLiant Compute servers are purpose-built for handling a variety of tasks, including generative and agentic AI, robotics, industrial automation, visual computing, simulation, 3D modelling, digital twins, and autonomous systems. Security features on the Gen12 models include HPE Integrated Lights Out 7 Silicon Root of Trust and a secure enclave for tamper-resistant protection and quantum-resistant firmware signing.
The company states that its server management platform, HPE Compute Ops Management, can reduce IT hours spent on server management by up to 75% and lower downtime by an average of 4.8 hours per server annually. HPE has also indicated that these servers are designed to be flexible and scalable, able to support a growing range of GPU-accelerated workloads across the enterprise.

AI development platform
HPE Private Cloud AI, a collaborative development with NVIDIA, will incorporate support for the latest NVIDIA AI models. This includes the NVIDIA Nemotron agentic AI model, Cosmos Reason vision language model for robotics and physical AI, and the NVIDIA Blueprint for Video Search and Summarization (VSS 2.4). These additions will allow customers to build and deploy video analytics AI agents that can process extensive volumes of video data and extract actionable insights.
The new release promises seamless scalability across GPU generations, air-gapped management, and enterprise multi-tenancy. Continuous integration with NVIDIA technologies will also allow HPE Private Cloud AI to deliver rapid deployment of NVIDIA NIM microservices, with access provided via HPE AI Essentials. The platform is positioned to help enterprises handle increasing AI inferencing workloads while retaining control over their data, supporting high performance and security requirements in demanding sectors.

Regional and industry response
"Asia Pacific is one of the fastest-growing AI markets, and enterprises face the imperative to transform ambition into results, with agility and security at the core," said Joseph Yang, General Manager, HPC, AI & NonStop, at HPE APAC and India. "With NVIDIA Blackwell GPUs in our HPE ProLiant servers and the latest NVIDIA AI models in HPE Private Cloud AI, we're enabling customers across APAC to accelerate agentic and physical AI, powering everything from advanced manufacturing to smart cities, while safeguarding data sovereignty and maximizing operational efficiency."
Data sovereignty and operational efficiency were also cited as important capabilities for regional customers working in sectors such as advanced manufacturing and public infrastructure.
"HPE is committed to empowering enterprises with the tools they need to succeed in the age of AI," said Cheri Williams, Senior Vice President and General Manager for Private Cloud and Flex Solutions at HPE. "Our collaboration with NVIDIA continues to push the boundaries of innovation, delivering solutions that unlock the value of generative, agentic and physical AI while addressing the unique demands of enterprise workloads. With the combination of HPE ProLiant servers and expanded capabilities in HPE Private Cloud AI, we're enabling organizations to embrace the future of AI with confidence and agility."
The collaboration between HPE and NVIDIA is expected to support customers managing large-scale enterprise AI workloads, with the infrastructure designed to be as flexible and scalable as present and emerging tasks require.
"Enterprises need flexible, efficient infrastructure to keep pace with the demands of modern AI," said Justin Boitano, Vice President of Enterprise AI at NVIDIA. "With NVIDIA RTX PRO 6000 Blackwell GPUs in HPE's 2U ProLiant servers, enterprises can accelerate virtually every workload on a single, unified, enterprise-ready platform."

Availability
The HPE ProLiant DL385 Gen11 and DL380a Gen12 servers equipped with NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs are currently open for orders, with first shipments expected from September 2025. HPE intends to roll out support for the newest NVIDIA AI models, the Cosmos Reason VLM, and the VSS 2.4 blueprint in HPE Private Cloud AI during the latter half of 2025. The next generation of HPE Private Cloud AI, with Blackwell GPU support, is also slated for release in the same period.


Techday NZ - 6 days ago
HPE launches ProLiant servers with new NVIDIA GPUs for AI growth
HPE has announced new developments to its NVIDIA AI Computing by HPE portfolio, focusing on improved integration with NVIDIA AI Enterprise and the introduction of updated NVIDIA AI models and blueprints to HPE Private Cloud AI.

Server enhancements
HPE will offer HPE ProLiant Compute servers equipped with NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs, introducing a 2U form factor designed to meet the rising AI demands within enterprise data centres. According to the announcement, two main server configurations will be available. The HPE ProLiant DL385 Gen11 server supports up to two NVIDIA RTX PRO 6000 Blackwell GPUs in the new 2U form factor. The HPE ProLiant Compute DL380a Gen12 server supports up to eight NVIDIA RTX PRO 6000 GPUs in a 4U form factor and is scheduled to ship in September.
These servers are intended to support diverse workloads, from generative and agentic AI to physical AI use cases, which include robotics, industrial automation, visual computing, and simulation. HPE highlights that its Gen12 ProLiant Compute servers employ multi-layered security, using HPE Integrated Lights Out (iLO) 7 Silicon Root of Trust and a secure enclave to deliver tamper-resistant protection and quantum-resistant firmware signing. Lifecycle automation is managed through HPE Compute Ops Management, which, the company states, can reduce IT hours for server management by up to 75% and decrease downtime by an average of 4.8 hours per server annually.
The servers are positioned to address escalating enterprise requirements for GPU-accelerated compute power, offering flexibility to innovate and enhance productivity, security, and operational efficiency across enterprise operations.

Private cloud AI advancements
The company recently shared details about the upcoming generation of HPE Private Cloud AI, expected to be released later in the year. This expansion supports NVIDIA RTX PRO 6000 GPUs, enables scalability across GPU generations, and provides features like air-gapped management and enterprise multi-tenancy.
HPE Private Cloud AI, co-developed with NVIDIA, will add support for the latest NVIDIA Nemotron models for agentic AI, Cosmos Reason vision language model for physical AI and robotics, and NVIDIA's Blueprint for Video Search and Summarisation (VSS 2.4). These features are designed to assist customers in building video analytics AI agents capable of analysing large volumes of video data for valuable insights.
The company emphasises the continuous co-development between HPE and NVIDIA in order to facilitate the fastest deployment of NVIDIA NIM microservices for the latest AI models and blueprints. Customers can access these features via HPE AI Essentials. The overall aim is to provide enterprises with the infrastructure needed to address increasing demand for AI inferencing and accelerate the production of AI solutions, while ensuring data control and high-performance operation.

Industry perspectives
"HPE is committed to empowering enterprises with the tools they need to succeed in the age of AI," said Cheri Williams, Senior Vice President and General Manager for Private Cloud and Flex Solutions at HPE. "Our collaboration with NVIDIA continues to push the boundaries of innovation, delivering solutions that unlock the value of generative, agentic and physical AI while addressing the unique demands of enterprise workloads. With the combination of HPE ProLiant servers and expanded capabilities in HPE Private Cloud AI, we're enabling organisations to embrace the future of AI with confidence and agility."
Justin Boitano, Vice President of Enterprise AI at NVIDIA, commented on the server integration, stating: "Enterprises need flexible, efficient infrastructure to keep pace with the demands of modern AI. With NVIDIA RTX PRO 6000 Blackwell GPUs in HPE's 2U ProLiant servers, enterprises can accelerate virtually every workload on a single, unified, enterprise-ready platform."

Release schedule
The HPE ProLiant DL385 Gen11 and HPE ProLiant Compute DL380a Gen12 servers featuring NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs are available to order now, with shipping scheduled to begin globally in September 2025. Support for NVIDIA Nemotron models, Cosmos Reason, and the NVIDIA VSS 2.4 Blueprint within HPE Private Cloud AI is planned for the second half of 2025. The latest generation of HPE Private Cloud AI with RTX PRO 6000 Blackwell GPUs will also be available in the latter part of the year.