
Latest news with #SvenOehme

DDN Wins Prestigious Award for the Most Innovative Customer Implementation at FMS: The Future of Memory and Storage

Business Wire

06-08-2025


SANTA CLARA, Calif.--(BUSINESS WIRE)-- DDN, the global leader in AI and data intelligence solutions, proudly announces that it has received a Best of Show honor at FMS: the Future of Memory and Storage for the Most Innovative Customer Implementation—recognizing the company's continued leadership in advancing the performance, scalability, and efficiency of AI and data-intensive environments. The prestigious award underscores DDN's ability to deliver innovation across both technology and customer success, especially in the era of generative AI, where storage performance and data movement are critical to business outcomes.

The Best of Show for the Most Innovative Customer Implementation award was presented to DDN for its work with CINECA, one of the world's premier supercomputing centers. At the heart of the collaboration is CINECA's groundbreaking nuclear research—paving the way for clean, safe, and sustainable energy that is foundational to the future of AI and our planet. Leveraging the DDN Data Intelligence Platform, CINECA is also advancing pioneering medical discoveries, enhancing early warning systems for natural disasters, and supporting efforts to digitally restore and preserve cultural heritage. This award underscores how DDN is eliminating data bottlenecks, accelerating time to insight, and enabling critical global research across hybrid HPC and AI environments.

"The real challenge in AI isn't compute—it's moving and managing data fast enough to keep up," said Sven Oehme, Chief Technology Officer at DDN. "This recognition underscores the innovation we've brought to bear in real-world deployments—solving for the bottlenecks that stall AI at scale. From national labs to global enterprises, our platforms are eliminating GPU waste, accelerating AI pipelines, and delivering measurable outcomes across science, healthcare, and industry. We're proud to help customers turn infrastructure into insight faster than ever before."

"Nuclear fusion is increasingly being considered as a viable source of clean, limitless energy to solve the world's increasing needs for energy security and sufficiency," said Jay Kramer, Chair of the Awards Program and President of Network Storage Advisors Inc. "We are proud to recognize Cineca, Italy's national supercomputing center and one of Europe's most powerful AI/HPC facilities, in partnership with the DDN Data Intelligence Platform. The DDN solution delivers sub-millisecond latency and TBs of throughput for real-time plasma simulation and modeling, along with real-time metadata indexing. This ensures that simulations run continuously and efficiently. DDN provides built-in multi-tenancy, encryption, and a fault-tolerant architecture so that Cineca can scale securely from terabytes to exabytes, supporting thousands of compute units without compromising reliability."

Held annually in the heart of Silicon Valley, FMS: the Future of Memory and Storage is the world's premier event for storage, memory, and high-speed data infrastructure. The "Best of Show" awards represent the highest recognition for product and customer innovation in the space, spotlighting technologies that are reshaping the future of computing.

For more information about DDN and its award-winning solutions, please visit the DDN website.

About DDN

DDN is the world's leading AI and data intelligence company, empowering organizations to maximize the value of their data with end-to-end HPC and AI-focused solutions. Its customers range from the largest global enterprises and AI hyperscalers to cutting-edge research centers, all leveraging DDN's proven data intelligence platform for scalable, secure, and high-performance AI deployments that drive 10x returns. Follow DDN: LinkedIn, X, and YouTube.

FMS: the Future of Memory and Storage, produced by Conference ConCepts, is the premier global event showcasing cutting-edge developments in multi-billion-dollar high-speed memory and storage technologies. As the world's largest conference and exhibition in this sector, FMS highlights mainstream applications, breakthrough innovations, key enabling technologies, and the full spectrum of players—from leading vendors to disruptive startups. The event spans critical application areas including AI, enterprise data centers, high-performance computing, mobile devices at the edge, and embedded systems. FMS serves as a dynamic hub where technology professionals, executive leaders, customers, cloud providers, hyperscalers, and industry analysts converge to explore the evolving landscape of memory and storage. With a renewed commitment to inclusivity and innovation, FMS is shaping the future of storage—especially at its intersection with artificial intelligence.

Five Expensive Myths About AI Inferencing (And How To Fix Them)

Forbes

25-06-2025


Sven Oehme, Chief Technology Officer (CTO) at DDN, drives innovation across both current and future products.

The AI boom shows no signs of slowing, but while training gets most of the headlines, it's inferencing where the real business impact happens. Every time a chatbot answers, a fraud alert triggers or a recommendation pops up, that's inferencing at work: models applying what they've learned to fresh data, often in real time. Inference isn't a background process. It's the front line of customer experience, risk mitigation and operational decision making. Yet many organizations still treat inference as an afterthought. This mistake can quietly sabotage performance, inflate costs and undermine AI strategies. Here are five common misconceptions about AI inferencing and what leaders can do differently to future-proof their infrastructure.

1. 'Training is the hard part—inference is easy.'

The reality: Training happens occasionally. Inference happens continuously. Once a model is deployed, inference workloads don't just run once; they run millions (sometimes billions) of times a day. This scale fundamentally changes the economic equation: Over the life of a production AI system, inference often consumes the majority of infrastructure resources and budgets. Consider financial services: Detecting fraud across millions of daily transactions requires high-speed, low-latency inference at massive scale. A delay of even a few milliseconds can translate into missed opportunities or real financial losses.

What To Do:
• Monitor and optimize GPU utilization beyond training phases.
• Architect systems to feed inference engines consistently and efficiently.
• Design infrastructure specifically for high-frequency, real-time operations, not just batch processing.

2. 'Our storage is fast enough.'

The reality: Traditional storage architectures aren't built for AI inference at scale. High-performance inferencing demands real-time access to massive, often unstructured datasets—images, video, embeddings or live sensor data. General-purpose NAS or object storage solutions, while fine for archival or transactional workloads, often can't meet the concurrency and throughput demands of production AI systems. In healthcare, for example, AI-assisted medical imaging requires inferencing with minimal delay. Storage-induced latency isn't just an inconvenience; it can delay diagnoses.

What To Do:
• Prioritize parallel file systems and storage designed for AI data patterns.
• Build for concurrent data access and real-time throughput, not just static speed benchmarks.
• Evaluate storage performance under live AI workload simulations, not synthetic tests (see the sketch after this list).
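To ground that last point, one quick sanity check is to measure aggregate read throughput across many concurrent readers rather than a single stream, which is closer to how an inference fleet actually hits storage. The following is a minimal Python sketch of that idea; the data directory, worker count, and chunk size are illustrative assumptions, not details from this article.

```python
# Minimal sketch: aggregate read throughput under concurrent access.
# DATA_DIR, NUM_WORKERS, and CHUNK are illustrative assumptions; point
# DATA_DIR at a directory of representative inference inputs.
import os
import time
from concurrent.futures import ThreadPoolExecutor

DATA_DIR = "/mnt/dataset"        # hypothetical mount of inference data
NUM_WORKERS = 32                 # simulates many concurrent inference readers
CHUNK = 4 * 1024 * 1024          # 4 MiB reads

def read_file(path: str) -> int:
    """Stream one file and return the number of bytes read."""
    total = 0
    with open(path, "rb") as f:
        while chunk := f.read(CHUNK):
            total += len(chunk)
    return total

if __name__ == "__main__":
    files = [os.path.join(DATA_DIR, n) for n in os.listdir(DATA_DIR)]
    start = time.perf_counter()
    # Threads work here because file reads release the GIL during I/O.
    with ThreadPoolExecutor(max_workers=NUM_WORKERS) as pool:
        total_bytes = sum(pool.map(read_file, files))
    elapsed = time.perf_counter() - start
    print(f"{total_bytes / elapsed / 1e9:.2f} GB/s with {NUM_WORKERS} readers")
```

Comparing the result at 1, 8, and 32 workers shows quickly whether throughput scales with concurrency or collapses under it, which is the property that matters for production inference.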
3. 'We'll optimize inference performance later.'

The reality: Deferred optimization leads to baked-in inefficiencies. Once models go live, any infrastructure gaps, such as latency, underutilized GPUs and storage bottlenecks, are exponentially harder and more expensive to fix. Poor early decisions often show up as growing technical debt, operational slowdowns and cost overruns. In industries like retail, where real-time LLM-powered agents increasingly handle customer interactions, a few hundred milliseconds of added latency can translate into lost sales or a degraded brand experience.

What To Do:
• Build high-performance data pipelines before models go into production.
• Design systems that scale seamlessly under live inference loads.
• Automate performance monitoring from day one, especially GPU and storage utilization.

4. 'Cloud storage scales inference just fine.'

The reality: Cloud storage is flexible but can become a major bottleneck for inference. Cloud object stores often introduce unpredictable latencies and steep egress fees at scale, especially when serving inference workloads that demand low response times and massive concurrency. For use cases like autonomous driving or industrial inspection, these drawbacks can be dealbreakers. Cloud infrastructure excels for certain training and experimentation phases, but inference at scale often demands hybrid or edge strategies to maintain performance and cost efficiency.

What To Do:
• Deploy hybrid architectures that keep inference close to the data source.
• Optimize for low-latency edge access and minimize unnecessary data transfers.
• Balance flexibility with performance and cost predictability.

5. 'Edge inferencing is optional. We'll just send data to the cloud.'

The reality: In many sectors, local inferencing is mandatory. From autonomous vehicles to smart factories, edge inferencing reduces response times, cuts costs and ensures resilience even when network connections are imperfect. Sending everything to centralized clouds for processing often introduces unacceptable lag, measured in lost opportunities, safety risks or operational disruptions. For example, in manufacturing, detecting an assembly-line anomaly needs to happen within milliseconds. Cloud-based roundtrips simply aren't fast enough.

What To Do:
• Invest in edge-ready AI infrastructure with local inferencing capabilities (a minimal sketch of the pattern follows this piece).
• Ensure models can operate independently while staying connected for updates and telemetry.
• Prioritize high-throughput, low-power solutions suited for field deployments.

The Bottom Line

AI success isn't just about how well you train models. It's about how reliably and efficiently you can deploy them in the real world, under real-time conditions, at real-world scale. Organizations that take inferencing seriously—architecting from the start for speed, scalability and resilience—will unlock far more value from their AI investments. Those who treat it as an afterthought risk finding that their smartest models never reach their full potential. In today's AI economy, the real winners won't be those who build the biggest models. They'll be the ones who deploy them better.

Forbes Technology Council is an invitation-only community for world-class CIOs, CTOs and technology executives.
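As a companion to myth 5 above, here is a minimal sketch of the edge-first pattern the piece recommends: inference runs locally under an explicit latency budget, with a cloud fallback only when the local path misses it. Every function, timing, and name here is an illustrative assumption, not DDN's implementation.

```python
# Minimal sketch of edge-first inference with a latency budget.
# local_infer/cloud_infer are stand-ins for real model and API calls.
import time
from concurrent.futures import ThreadPoolExecutor, TimeoutError

LATENCY_BUDGET_S = 0.005  # e.g., 5 ms for an assembly-line anomaly check

def local_infer(sample: dict) -> dict:
    time.sleep(0.001)                 # simulate ~1 ms on-device model call
    return {"anomaly": False, "source": "edge"}

def cloud_infer(sample: dict) -> dict:
    time.sleep(0.050)                 # simulate ~50 ms network round trip
    return {"anomaly": False, "source": "cloud"}

def infer(sample: dict) -> dict:
    """Serve locally; fall back to the cloud only if the budget is missed."""
    with ThreadPoolExecutor(max_workers=1) as pool:
        future = pool.submit(local_infer, sample)
        try:
            return future.result(timeout=LATENCY_BUDGET_S)
        except TimeoutError:
            return cloud_infer(sample)  # slower path; log and investigate

if __name__ == "__main__":
    start = time.perf_counter()
    result = infer({"frame_id": 1})
    print(result, f"{(time.perf_counter() - start) * 1e3:.1f} ms")
```

The point of the sketch is the budget, not the numbers: a ~50 ms cloud round trip can never satisfy a 5 ms budget, which is exactly the argument for keeping inference at the edge.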

DDN Expands AI Data Infrastructure for Enterprises and Announces Support for NVIDIA Blackwell-Based Systems

Yahoo

18-03-2025


CHATSWORTH, Calif., March 18, 2025--(BUSINESS WIRE)--DDN, the global leader in AI and data intelligence solutions, today announced it is integrating the NVIDIA AI Data Platform reference design with DDN EXAScaler® and DDN Infinia 2.0 — part of the DDN AI Data Intelligence Platform — to power a new wave of agentic AI applications in the enterprise. Additionally, DDN is announcing formal support for NVIDIA Blackwell-based infrastructure, including NVIDIA DGX and NVIDIA HGX systems. These offerings further strengthen DDN's commitment to delivering high-performance, scalable AI data intelligence solutions that seamlessly integrate with advanced NVIDIA AI infrastructure.

"As AI innovation advances with agentic reasoning, the need for high-speed, efficient data for inference has never been greater," said Sven Oehme, CTO of DDN. "With the integration of the NVIDIA AI Data Platform reference design, our NVIDIA storage certifications and support for NVIDIA Blackwell-based systems, we are delivering the infrastructure for enterprises to seamlessly deploy and scale AI workloads with maximum efficiency."

"Enterprises are seeking high-performance infrastructure that can run inference for demanding AI reasoning workloads," said Charlie Boyle, vice president, NVIDIA DGX platforms. "DDN's integration of the NVIDIA AI Data Platform reference design and storage certifications for NVIDIA accelerated infrastructure provide a foundation for enterprises to use their data to fuel the agentic AI applications that are transforming industries."

Transform Data into Knowledge for Enterprise AI with NVIDIA AI Data Platform

Enterprises generate and store vast amounts of unstructured data, much of it inaccessible to modern AI applications. To unlock its full potential, enterprise AI infrastructure must evolve to support real-time data processing, intelligent data storage, and seamless access to business knowledge. DDN's AI Data Intelligence Platform, powered by NVIDIA accelerated computing, networking, and software, is designed to meet these challenges by providing:

• Ultra-fast parallel infrastructure to harness the unprecedented computing power of the NVIDIA Blackwell architecture
• Seamless integration with NVIDIA BlueField-3 DPUs and NVIDIA Spectrum-X networking to eliminate bottlenecks and accelerate AI data movement
• Proven, intelligent data processing for AI reasoning, built on NVIDIA AI Enterprise software including NVIDIA NIM and NeMo Retriever microservices
• Scalability across cloud, core, and edge for AI-driven enterprises
• Proven reference architectures that simplify deployment, reduce integration complexity, and accelerate AI-driven insights

By leveraging AI-native data intelligence technologies like DDN EXAScaler and DDN Infinia, alongside NVIDIA's end-to-end AI computing stack, enterprises can unlock the full potential of their data while maximizing infrastructure efficiency, eliminating I/O bottlenecks, and streamlining AI data pipelines for near real-time inferencing.

Optimized Performance for NVIDIA Blackwell-Based AI Systems

DDN's data intelligence solutions are now fully optimized and certified for NVIDIA DGX SuperPOD with DGX GB200 and DGX B200 systems, NVIDIA Cloud Partner Reference Architectures, and NVIDIA-Certified Storage for accelerated infrastructure from leading server providers, offering:

• A certified storage solution for Blackwell-based NVIDIA DGX SuperPOD with DGX B200 and DGX GB200 systems
• Certified high-performance storage for the NCP Reference Architecture with NVIDIA GB200 NVL72 to ensure optimal AI performance
• Support for NVIDIA's latest networking technologies, including NVIDIA Quantum-2 InfiniBand and NVIDIA Spectrum-X for AI storage
• NVIDIA-Certified Storage to support NVIDIA Enterprise Reference Architectures and NVIDIA-Certified servers from system partners for building enterprise AI factories with a wide range of accelerated computing platforms
• Integration with leading OEM systems, including NVIDIA HGX B200-based servers from Supermicro (model details pending), ensuring seamless interoperability

DDN Earns NVIDIA-Certified Storage Designation

DDN is proud to announce that it has achieved the new NVIDIA-Certified Enterprise Storage designation for enterprise AI factory deployments with NVIDIA Enterprise Reference Architectures, solidifying its position as a key enabler of enterprise AI factories with NVIDIA-Certified Systems from leading system partners. As AI workloads scale in complexity, seamless data access and ultra-fast performance are essential for training, inference, and reasoning. This new certification program ensures that enterprise storage solutions meet the highest standards for performance, scalability, security, and efficiency, providing organizations with a trusted foundation for AI infrastructure. DDN's certification validates its ability to eliminate bottlenecks, optimize GPU utilization, and support large-scale AI workloads with enterprise-grade high-performance storage.

By collaborating with NVIDIA Cloud Partners (NCPs), AI hyperscalers, and OEM providers such as Supermicro and Lenovo, DDN continues to pioneer the future of AI data intelligence. DDN's deep integration with NVIDIA networking, optimized AI workflows, and end-to-end data intelligence platform differentiate it from traditional storage solutions, providing unparalleled AI scalability and performance.

Learn More

For more information on how DDN's validated AI Data Intelligence Platform supports NVIDIA Blackwell-based systems and NVIDIA DGX SuperPOD, visit the DDN website.

About DDN

DDN is the world's leading AI and data intelligence company, empowering organizations to maximize the value of their data with end-to-end HPC and AI-focused solutions. Its customers range from the largest global enterprises and AI hyperscalers to cutting-edge research centers, all leveraging DDN's proven data intelligence platform for scalable, secure, and high-performance AI deployments that drive 10x returns. Follow DDN: LinkedIn, X, and YouTube.

Contacts

DDN Media Contact: Amanda Lee, VP, Marketing – Analyst and Media Relations, amlee@
