Dell unveils AI Data Platform upgrades with NVIDIA & Elastic

Techday NZ · a day ago
Dell Technologies has announced enhancements to the Dell AI Data Platform, expanding its support across the full lifecycle of artificial intelligence workloads with new hardware and software collaborations.
The updates to the Dell AI Data Platform aim to address the challenges enterprises face with massive, rapidly growing, and unstructured data pools. Much of this data is unsuitable for generative AI applications unless it can be properly indexed and retrieved in real time. The latest advancements are designed to streamline data ingestion, transformation, retrieval, and computing tasks within enterprise environments.
Lifecycle management
The Dell AI Data Platform now provides improved automation for data preparation, enabling enterprises to move more quickly from experimental phases to deployment in production environments. The architecture is anchored by specialised storage and data engines, designed to connect AI agents directly to quality enterprise data for analytics and inferencing.
The platform incorporates the NVIDIA AI Data Platform reference architecture, providing a validated, GPU-accelerated solution that combines storage, compute, networking, and AI software for generative AI workflows.
New partnerships
An important component of the update is the introduction of an unstructured data engine, the result of collaboration with Elastic. This engine offers customers advanced vector search, semantic retrieval, and hybrid keyword search capabilities, underpinned by built-in GPU acceleration for improved inferencing and analytics performance.
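As an illustration of what hybrid retrieval of this kind looks like in practice, the sketch below combines a semantic (vector) leg and a keyword leg in a single Elasticsearch 8.x request via the standard Python client. The index name, field names, and embedding model are assumptions for illustration; the announcement does not detail the engine's actual API.

    from elasticsearch import Elasticsearch
    from sentence_transformers import SentenceTransformer

    es = Elasticsearch("https://localhost:9200", api_key="...")  # hypothetical endpoint
    model = SentenceTransformer("all-MiniLM-L6-v2")              # assumed embedding model

    question = "Which supplier contracts renew this quarter?"

    resp = es.search(
        index="enterprise-docs",                      # hypothetical index of ingested documents
        knn={                                         # semantic leg: approximate nearest neighbours
            "field": "content_embedding",
            "query_vector": model.encode(question).tolist(),
            "k": 10,
            "num_candidates": 100,
        },
        query={"match": {"content": question}},       # keyword leg: BM25 text match
        size=10,
    )
    for hit in resp["hits"]["hits"]:
        print(hit["_score"], hit["_source"]["title"])

When both legs are supplied, Elasticsearch combines their scores, so documents that match the query terms and also sit close to the question in embedding space rank highest.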
The unstructured data engine operates alongside other data tools, including a federated SQL engine for querying structured data, a large-scale processing engine for data transformation, and fast-access AI-ready storage. The array of tools is designed to turn large, disparate datasets into actionable insights for AI applications.
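The announcement does not name the federated SQL engine, but federation generally means one query planned across several live data sources. Below is a minimal sketch of the idea using the open-source Trino Python client, which exposes each source as a catalog; the host, catalog, schema, and table names are invented for illustration and are not details of the Dell platform.

    from trino.dbapi import connect

    # One connection, one query, two underlying systems: the engine joins
    # an RDBMS table and a lakehouse table at query time.
    conn = connect(host="sql-engine.internal", port=8080, user="analyst")
    cur = conn.cursor()
    cur.execute("""
        SELECT c.region, SUM(o.amount) AS revenue
        FROM postgresql.sales.orders AS o   -- table living in an RDBMS
        JOIN lakehouse.crm.customers AS c   -- table living in object storage
          ON o.customer_id = c.id
        GROUP BY c.region
        ORDER BY revenue DESC
    """)
    for region, revenue in cur.fetchall():
        print(region, revenue)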
Server integration
Supporting these software advancements are the new Dell PowerEdge R7725 and R770 servers, fitted with NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs. Dell claims these air-cooled servers provide improved price-to-performance for enterprise AI workloads, supporting a diverse range of use cases from data analytics and visual computing to AI inferencing and simulation.
The NVIDIA RTX PRO 6000 GPU supports up to six times the token throughput for large language model inference, offers double the capacity for engineering simulations, and can handle four times the number of concurrent users compared to the previous generation. The integration of these GPUs in a 2U server chassis is positioned to make high-density AI calculations more accessible to a wider base of enterprise users.
The Dell PowerEdge R7725 will be the first 2U server platform to deliver the NVIDIA AI Data Platform reference design, allowing organisations to deploy a unified hardware and software solution without the need for in-house architecture and testing. This is expected to enable enterprises to accelerate inferencing, achieve more responsive semantic searching, and support larger and more complex AI operations.
Industry perspectives
"The key to unlocking AI's full potential lies in breaking down silos and simplifying access to enterprise data," said Arthur Lewis, president, Infrastructure Solutions Group, Dell Technologies. "Collaborating with industry leaders like NVIDIA and Elastic to advance the Dell AI Data Platform will help organisations accelerate innovation and scale AI with confidence."
Justin Boitano, Vice President of Enterprise AI at NVIDIA, added, "Enterprises worldwide need infrastructure that handles the growing scale and complexity of AI workloads. With NVIDIA RTX PRO 6000 GPUs in new 2U Dell PowerEdge servers, organisations now have a power efficient, accelerated computing platform to power AI applications and storage on NVIDIA Blackwell."
Ken Exner, Chief Product Officer at Elastic, commented, "Fast, accurate, and context-aware access to unstructured data is key to scaling enterprise AI. With Elasticsearch vector database at the heart of the Dell AI Data Platform's unstructured data engine, Elastic will bring vector search and hybrid retrieval to a turnkey architecture, enabling natural language search, real-time inferencing, and intelligent asset discovery across massive datasets. Dell's deep presence in the enterprise makes them a natural partner as we work to help customers deploy AI that's performant, precise, and production-ready."
Availability
The unstructured data engine for the Dell AI Data Platform is scheduled for availability later in the year. The Dell PowerEdge R7725 and R770 servers with NVIDIA RTX PRO 6000 GPUs will also become globally available in the same period.
Related Articles

How optimisation is helping to tackle the data centre efficiency challenge
Techday NZ · 5 hours ago

In the era of cloud adoption and AI, the demand for data centre bandwidth has skyrocketed, leading to the exponential sprawl of data centres worldwide. However, new data centres are running up against sustainability, space and budget constraints. Policymakers recognise the benefits of data centres to productivity, economic growth and research, but there is still tension over their impact on local communities and on water and electricity use. Our cities, our consumer products and our world are going to become more digital, and we will need more compute to keep up. The best way forward is to optimise the data centre infrastructure we already have to unlock more performance while remaining mindful of those limits, turning constraints into a competitive advantage.
Why data centre optimisation matters
CIOs and IT leaders increasingly face calls to provide high-performance foundational compute infrastructure across their businesses and to handle new, more demanding use cases while balancing sustainability commitments, space and budget constraints. Many have sought to build new data centres outright to meet demand and to pair them with energy-efficient technologies that minimise their environmental impact. For example, the LUMI (Large Unified Modern Infrastructure) supercomputer, one of the most powerful in Europe, uses 100% carbon-free hydroelectric energy for its operations, and its waste heat is reused to heat homes in the nearby town of Kajaani, Finland. There are many other examples like LUMI showing the considerable progress the data centre industry has made in addressing the need for energy efficiency.
Yet energy efficiency alone won't be enough to power the growing demands of AI, which is expected to swell data centre storage capacity. AI's greater energy requirements will demand still more efficient designs to ensure scalability and meet environmental goals, and with data centre square footage, land and power grids nearing capacity, one way to optimise is to upgrade old servers. Data centres are expensive investments, and some CIOs and IT leaders try to recoup costs by running their hardware for as long as possible. As a result, most data centres are still using hardware that is 10 years old (Dell) and only expand compute when absolutely necessary. While building new data centres might be necessary for some, there are significant opportunities to upgrade existing infrastructure: newer systems can achieve the same tasks more efficiently.
Global IT data centre capacity is projected to grow from 180 gigawatts (GW) in 2024 to 296 GW in 2028, a roughly 13.2% CAGR, while electricity consumption is projected to grow at a higher rate of around 23.3%, from 397 terawatt-hours (TWh) to 915 TWh in 2028. For ageing data centres, upgrading can translate to fewer racks and systems to manage while maintaining the same bandwidth. That leaves significant room for future IT needs and for experimentation, which AI workloads currently demand: teams can use the reclaimed space to build less expensive proof-of-concept half racks before committing to bigger build-outs, and use new hyper-efficient chips to reduce energy consumption and cooling requirements, recouping their investment more quickly.
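As a quick check on the projections above, the implied compound annual growth rates follow directly from the endpoint figures over the four years 2024 to 2028 (a back-of-envelope sketch using the cited values):

    # CAGR = (end / start) ** (1 / years) - 1
    capacity_cagr = (296 / 180) ** (1 / 4) - 1     # ~0.132 -> ~13.2% (GW capacity)
    consumption_cagr = (915 / 397) ** (1 / 4) - 1  # ~0.232 -> ~23% (TWh consumption)
    print(f"capacity: {capacity_cagr:.1%}, consumption: {consumption_cagr:.1%}")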
What to look for in an upgrade
There are many factors to consider in a server upgrade, and there is no one-size-fits-all solution to data centre needs. It's not just about buying the most powerful chip that can be afforded. The significance of a good chip for energy efficiency cannot be overstated, but each data centre has different needs that will shape the hardware and software stack it requires to operate most efficiently. Leading South Korean cloud provider Kakao Enterprise needed servers that could deliver high performance across a wide range of workloads to support its expansive range of offerings. By deploying a mixed fleet of 3rd and 4th Gen AMD EPYC processors, the company reduced the servers required for its total workload to 40 percent of its original fleet while increasing performance by 30 percent and cutting total cost of ownership by 50 percent.
Much like Kakao Enterprise, IT decision makers should look for providers that can deliver end-to-end data centre infrastructure at scale, combining high-performance chips, networking, software and systems design expertise. For example, the right physical racks make it easy to swap in new kit as needs evolve, and open software is equally important for getting the different pieces of the software stack from different providers talking to each other. In addition, providers that continually invest in world-class systems design and AI systems capabilities will be best positioned to accelerate enterprise AI hardware and software roadmaps. AMD, for example, recently achieved a 38× improvement in node-level energy efficiency for AI training and HPC over just five years. This translates to a 97% reduction in energy for the same performance, empowering providers and end users alike to innovate more sustainably and at scale.
Advancing the Data Centre
As our reliance on digital technologies continues to grow, so too does our need for computing power. It is important to balance the need for more compute real estate with sustainability goals, and the way forward lies in making the most of the real estate we already have, turning an apparent tension into a significant advantage. By using the right computational architecture, data centres can achieve the same tasks more efficiently, making room for the future technologies that will transform businesses and lives.
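One figure in the AMD example above is easy to verify: an N-fold efficiency gain means the same work runs on 1/N of the energy, so a 38× gain implies roughly a 97% reduction (a one-line check):

    energy_reduction = 1 - 1 / 38
    print(f"{energy_reduction:.1%}")  # 97.4%, consistent with the cited ~97%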

Elastic launches Logs Essentials for cost-effective cloud log analytics
Techday NZ · 9 hours ago

Elastic has announced the release of Logs Essentials, a new serverless log analytics tier offered within Elastic Cloud Serverless and designed for site reliability engineers (SREs) and developers. Logs Essentials is positioned as a lower-priced service providing teams with essential log ingestion, search, visualisation, and alerting without the requirement to manage the underlying infrastructure. The solution is built on the same stateless architecture as Elastic Observability, allowing it to scale automatically, without operational overhead, while retaining high availability.
Core features
The product enables users to perform fast and precise log analytics using filters, pattern matching, alerting, rich visualisations, and ES|QL, Elastic's piped query language. According to Elastic, this feature set is designed to help SREs quickly identify and resolve issues, improving the efficiency and effectiveness of responses to operational incidents.
"SREs need a hassle-free, scale-as-you-go, high-availability logging solution that empowers them to focus entirely on operational insights, not infrastructure, without the complexity of standing up and maintaining observability tooling," said Santosh Krishnan, General Manager, Observability & Security at Elastic. "Logs Essentials makes it easy to get started with Elastic by offering a simple, reliable path to insights at a lower entry point."
Logs Essentials is designed for teams that require core log analytics capabilities but are not seeking to pay for more advanced features. When more comprehensive observability is required, there is an upgrade path to Elastic Observability Complete, which includes further workflows and feature sets.
Pricing and scalability
Elastic has highlighted the tier's price-optimised model, under which customers pay for the data they ingest and store rather than committing to permanent infrastructure or premium licensing. This approach aims to make log analytics accessible to organisations of varying sizes, particularly those that want to avoid fixed costs or the complexities associated with on-premises deployments. Automatic scaling is managed through Elastic Cloud Serverless and is intended to maintain performance as log volume changes, especially during traffic spikes or incident investigations. The stateless design is central to enabling this seamless scaling and system resilience.
Operational insights
Elastic states that Logs Essentials supports teams in accelerating root cause analysis, obtaining deep contextual insights, and proactively detecting operational issues. The service is positioned as a "hassle-free entry point for operational insights," according to the product description included in the release. Elastic also pointed to the existing adoption of its platform, citing usage by thousands of companies, including more than half of the Fortune 500.
Service availability
Logs Essentials is now available within Elastic Cloud. Registration is managed via the provider's standard channels, and customers can begin with a free trial before choosing to purchase the service. The new tier joins Elastic's portfolio of solutions that integrate search, observability, and security applications, all built upon Elastic's Search AI Platform.
Users can deploy the tier without infrastructure management responsibilities, and scale their deployment as needed according to log volume and analytic requirements.
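For illustration, an ES|QL query of the kind mentioned under "Core features" pipes log data through successive transformations. Below is a minimal sketch via the Elasticsearch Python client, which exposes ES|QL through an esql.query helper in recent 8.x releases; the endpoint, index pattern, and field names are assumptions, not details from the announcement.

    from elasticsearch import Elasticsearch

    es = Elasticsearch("https://my-deployment.es.cloud:443", api_key="...")  # hypothetical endpoint

    # Count error-level log lines per service over the last hour, worst first.
    resp = es.esql.query(query="""
        FROM logs-*
        | WHERE @timestamp > NOW() - 1 hour AND log.level == "error"
        | STATS errors = COUNT(*) BY service.name
        | SORT errors DESC
        | LIMIT 10
    """)
    for row in resp["values"]:
        print(row)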
