
Latest news with #DellProMaxPlus

Dell unveils AI infrastructure push with NVIDIA, Qualcomm

Yahoo

20-05-2025



Dell Technologies Inc (NYSE:DELL) introduced a sweeping series of upgrades to its AI Factory platform this week, deepening its partnership with NVIDIA Corporation (NASDAQ:NVDA) and expanding its hardware and software portfolio to support enterprise-scale AI. From data centers to the edge, the company is positioning itself as a full-stack provider of infrastructure designed to simplify and accelerate artificial intelligence deployment.

New hardware launches include updated PowerEdge XE9785 and XE9785L servers supporting up to 256 NVIDIA Blackwell Ultra GPUs per rack for training and inferencing at scale. These systems promise "up to four times faster large language model (LLM) training" compared to prior generations, according to Dell.

Dell is also extending its reach to the edge with the new Dell Pro Max Plus laptop, featuring the industry's first enterprise-grade discrete NPU in a mobile form factor. The Qualcomm Incorporated (NASDAQ:QCOM) AI 100 PC Inference Card inside the device offers 32 AI cores and 64 GB of memory, designed to run large AI models locally that are typically reserved for the cloud.

Cooling innovation is a major focus, with Dell unveiling its PowerCool Enclosed Rear Door Heat Exchanger, which can cut data center cooling costs by up to 60%. Supporting up to 80 kW of air-cooled capacity per rack, it enables up to 16% more compute density without increasing power consumption.

On the software side, Dell launched Project Lightning, which it describes as the world's fastest parallel file system, delivering double the throughput of competing solutions. Enhancements to Dell's Data Lakehouse and ObjectScale offerings aim to streamline workflows for high-performance workloads like semantic search and AI-powered recommendation engines.

"AI factories are the infrastructure of modern industry, generating intelligence to power work across healthcare, finance and manufacturing," said Jensen Huang, founder and CEO of NVIDIA. "With Dell Technologies, we're offering the broadest line of Blackwell AI systems to serve AI factories in clouds, enterprises and at the edge."

Dell is broadening its AI Factory ecosystem through collaborations with partners including Google (NASDAQ:GOOGL), Meta (NASDAQ:META), Cohere, Glean and Mistral. These integrations enable customers to build secure, on-premise AI agents and deploy enterprise search and conversational AI using LLMs such as Llama 4 and future-generation models.

"It has been a non-stop year of innovating for enterprises, and we're not slowing down," said Jeff Clarke, chief operating officer of Dell Technologies. "Our latest AI advancements — from groundbreaking AI PCs to cutting-edge data center solutions — are designed to help organizations of every size to seamlessly adopt AI, drive faster insights, improve efficiency and accelerate their results."

Dell launches new AI innovations for enterprise & research

Techday NZ

20-05-2025



Dell has announced a range of advancements in enterprise AI infrastructure, solutions and services to support organisations seeking to adopt and scale artificial intelligence. The company reported that 75% of organisations view AI as central to their strategy, but high costs and data security concerns remain significant obstacles. Dell aims to address these challenges by simplifying deployment, reducing expenses and enabling secure, scalable AI adoption through its expanded Dell AI Factory, enhanced infrastructure and a growing partner ecosystem.

The Dell AI Factory, launched a year ago, has received more than 200 updates and now supports AI workloads at any scale with new infrastructure, software improvements and collaborations with partners such as NVIDIA, Meta and Google. The company states its approach to on-premises AI inference can be up to 62% more cost effective for large language models compared to public cloud options.

Among the notable product introductions is the Dell Pro Max Plus laptop, equipped with the Qualcomm AI 100 PC Inference Card, which the company states is the world's first mobile workstation to include an enterprise-grade discrete NPU. This platform is intended to provide rapid, secure on-device inferencing for large AI models, facilitating edge deployments outside traditional data centres. The Qualcomm AI 100 PC Inference Card offers 32 AI cores and 64 GB of memory to support engineers and scientists working with sizeable data models.

Addressing the energy demands of AI workloads, Dell introduced the PowerCool Enclosed Rear Door Heat Exchanger (eRDHx), designed to capture 100% of IT-generated heat and reduce cooling costs by up to 60% compared to current solutions. The eRDHx supports water temperatures between 32°C and 36°C and enables increased data centre density, allowing organisations to deploy up to 16% more racks of dense compute without raising power consumption, and provides advanced leak detection and unified management features.

For high performance computing and AI, Dell's PowerEdge XE9785 and XE9785L servers will support AMD Instinct MI350 series GPUs, promising up to 35 times greater inferencing performance. Both liquid- and air-cooled versions will be available to further reduce facility cooling costs.

In terms of data management, Dell's AI Data Platform now includes updates designed to improve access to structured and unstructured data, with Project Lightning, a parallel file system, reported to deliver up to two times greater throughput than alternatives. Enhancements to the Data Lakehouse further streamline AI workflows for use cases like recommendation engines and semantic search, and the introduction of Linear Pluggable Optics aims to lower power use and boost networking efficiency.

Dr Paul Calleja, Director of the Cambridge Open Zettascale Lab and Research Computing Services at the University of Cambridge, commented: "We're excited to work with Dell to support our cutting-edge AI initiatives, and we expect Project Lightning to be a critical storage technology for our AI innovations."

Dell has also broadened its partner ecosystem to include on-premises deployments with platforms such as Cohere North, Google Gemini, Glean's Work AI platform and Meta's Llama Stack, as well as joint solutions with Mistral AI. The company is providing enhancements to its AI platform with AMD and Intel technologies, including upgraded networking, software stack improvements, container support and integration with Intel Gaudi 3 AI accelerators.
Updates to the Dell AI Factory with NVIDIA include new PowerEdge servers supporting up to 192 NVIDIA Blackwell Ultra GPUs per standard configuration and up to 256 per Dell IR7000 rack with direct-to-chip liquid cooling. These advancements aim to simplify data centre integration and speed up rack-scale AI deployment, and are reported to deliver up to four times faster large language model training compared to the previous generation.

The PowerEdge XE9712, featuring NVIDIA GB300 NVL72, targets efficiency at rack scale for training and is said to offer up to 50 times more inference output and a five times improvement in throughput, with new PowerCool technology supporting power efficiency in high-demand environments. The company intends to support the NVIDIA Vera CPU and Vera Rubin platform in future server offerings.

In networking, Dell has extended its portfolio with new PowerSwitch and InfiniBand switches delivering up to 800 Gbps of throughput, now supported by ProSupport and Deployment Services. Further software platform updates include direct availability of NVIDIA NIM, NeMo microservices and Blueprints, plus Red Hat OpenShift integration on the Dell AI Factory with NVIDIA.

To streamline AI operations, Dell has introduced Managed Services for the AI Factory with NVIDIA, providing 24/7 monitoring, reporting, upgrades and patching for the stack, supported by Dell's technical teams.

Michael Dell, Chairman and Chief Executive Officer, Dell Technologies, said: "We're on a mission to bring AI to millions of customers around the world. Our job is to make AI more accessible. With the Dell AI Factory with NVIDIA, enterprises can manage the entire AI lifecycle across use cases, from training to deployment, at any scale."

Jensen Huang, Founder and Chief Executive Officer, NVIDIA, added: "AI factories are the infrastructure of modern industry, generating intelligence to power work across healthcare, finance and manufacturing. With Dell Technologies, we're offering the broadest line of Blackwell AI systems to serve AI factories in clouds, enterprises and at the edge."

Jeff Clarke, Chief Operating Officer, Dell Technologies, stated: "It has been a non-stop year of innovating for enterprises, and we're not slowing down. We have introduced more than 200 updates to the Dell AI Factory since last year. Our latest AI advancements — from groundbreaking AI PCs to cutting-edge data centre solutions — are designed to help organisations of every size to seamlessly adopt AI, drive faster insights, improve efficiency and accelerate their results."

Christopher M. Sullivan, Director of Research and Academic Computing for the College of Earth, Ocean and Atmospheric Sciences at Oregon State University, said: "We leverage the Dell AI Factory for our oceanic research at Oregon State University to revolutionise and address some of the planet's most critical challenges. Through advanced AI solutions, we're accelerating insights that empower global decision-makers to tackle climate change, safeguard marine ecosystems and drive meaningful progress for humanity."

Dell unveils new Pro Max AI PC & innovations for data centres

Techday NZ

19-05-2025



Dell has announced the launch of the Dell Pro Max AI PC, which features what the company states is the industry's first enterprise-grade discrete NPU in a mobile form factor. The Dell Pro Max Plus laptop incorporates the Qualcomm AI 100 PC Inference Card, enabling on-device AI inferencing for large models that are typically run in the cloud, such as those with 109 billion parameters. According to Dell, this addition makes the Pro Max Plus the world's first mobile workstation to offer an enterprise-grade discrete NPU.

The Qualcomm AI 100 PC Inference Card is equipped with 32 AI cores and 64 GB of memory. This configuration is designed to support the requirements of AI engineers and data scientists who are working with substantial models for edge inferencing. The device is being positioned as a solution for organisations that seek faster and more secure handling of AI workloads directly at the edge.

The Pro Max AI PC is now available as part of the Dell AI Factory portfolio. Dell stated that this release is part of a wider suite of infrastructure updates intended to deliver performance for enterprise AI workload development and deployment across client devices, data centres, edge locations, and cloud environments.

Alongside the new AI PC, Dell has introduced innovations aimed at improving data centre efficiency. Among these is the Dell PowerCool Enclosed Rear Door Heat Exchanger (eRDHx), which is engineered to capture all IT heat output with a self-contained airflow system, potentially reducing cooling energy costs by up to 60% compared to existing solutions. The technology allows data centres to use warmer water for cooling, between 32 and 36 degrees Celsius, removing reliance on traditional expensive chillers and enabling up to 16% more racks of dense compute capacity to be deployed without requiring additional power.
Further enhancements target risk management, offering features such as advanced leak detection, real-time thermal monitoring, and integrated management through Dell's Rack Controller. According to Dell, air cooling capacity can reach up to 80 kW per rack for dense AI and high-performance computing applications.

Dell also announced that its PowerEdge XE9785 and XE9785L servers will support AMD Instinct MI350 series GPUs, which deliver 288 GB of HBM3E memory per GPU and claim up to 35 times greater inferencing performance compared to previous systems. The servers will be available with liquid and air cooling options to further reduce facility energy costs related to cooling.

The company's storage and data platforms received updates as well. Dell Project Lightning, described by the company as the world's fastest parallel file system based on internal testing, is said to provide double the throughput of competing systems, which could accelerate AI training times for large-scale and complex workflows. Enhancements to the Dell Data Lakehouse are designed to simplify AI workflows by enabling the creation and querying of AI-ready datasets for use cases such as recommendation engines, semantic search, and customer intent detection.

"We're excited to work with Dell to support our cutting-edge AI initiatives, and we expect Project Lightning to be a critical storage technology for our AI innovations," Dr. Paul Calleja, Director, Cambridge Open Zettascale Lab and Research Computing Services, University of Cambridge, commented.

In networking, Dell announced Linear Pluggable Optics, intended to lower power consumption and reduce latency for high-performance computing and AI deployments. The company also introduced AI Security and Resilience Services, which aim to provide end-to-end protection across AI infrastructure, data, applications, and models.
Expansion of Dell's AI partner ecosystem was also outlined, connecting organisations with AI solutions from companies including Cohere, Google, Meta, Glean, and Mistral AI. These partnerships facilitate deployment of enterprise search, agent-based AI applications, and on-premises AI models in a secure environment. Dell also revealed joint engineering efforts with AMD and Intel, supporting new hardware stacks such as AMD ROCm and Intel Gaudi 3 AI accelerators for AI infrastructure.

"It has been a non-stop year of innovating for enterprises, and we're not slowing down. We have introduced more than 200 updates to the Dell AI Factory since last year. Our latest AI advancements — from groundbreaking AI PCs to cutting-edge data centre solutions — are designed to help organisations of every size to seamlessly adopt AI, drive faster insights, improve efficiency and accelerate their results," Jeff Clarke, Chief Operating Officer at Dell Technologies, said.

"We leverage the Dell AI Factory for our oceanic research at Oregon State University to revolutionise and address some of the planet's most critical challenges. Through advanced AI solutions, we're accelerating insights that empower global decision-makers to tackle climate change, safeguard marine ecosystems and drive meaningful progress for humanity," Christopher M. Sullivan, Director of Research and Academic Computing for the College of Earth, Ocean and Atmospheric Sciences at Oregon State University, said.

These announcements collectively aim to address industry needs related to data quality, deployment costs, and security, while supporting the transition of AI projects into production environments for organisations worldwide.
