Latest news with #AMDInstinct™ MI325X


Business Wire
2 days ago
- Business
- Business Wire
DigitalOcean and AMD Collaborate to Advance AI Using Cloud-Based GPUs
NEW YORK--(BUSINESS WIRE)-- DigitalOcean Holdings, Inc. (NYSE: DOCN), the simplest scalable cloud for digital native enterprises, today announced a collaboration with AMD that gives DigitalOcean customers access to AMD Instinct™ GPUs as DigitalOcean GPU Droplets to power their AI workloads, starting with the AMD Instinct MI300X GPUs. Later this year, DigitalOcean will offer AMD Instinct™ MI325X GPUs, further expanding access to powerful and affordable GPU models.

AMD Instinct MI325X GPU accelerators set new AI performance standards, delivering strong performance and efficiency for training and inference. AMD Instinct MI300X GPUs deliver leadership performance for accelerated high-performance computing (HPC) applications and the rapidly growing demands of generative AI. The MI300X's large memory capacity allows it to hold models with hundreds of billions of parameters entirely in memory, reducing the need to split models across multiple GPUs. With the AMD ROCm™ software platform, customers can develop production-ready HPC and AI systems faster than ever before.

By combining AMD's AI compute engines with DigitalOcean's cloud technologies, the collaboration aims to empower the massive community of digital native enterprises to integrate AI into their applications and support the most demanding AI workloads at scale. These next-generation GPUs have been available in bare metal configurations for customers seeking increased control and computing power, and are now also available as GPU Droplets or as DigitalOcean Kubernetes worker nodes. GPU Droplets come in both single-GPU and eight-GPU configurations, allowing customers to optimize costs for their specific use cases. Accessing these GPU Droplets through DigitalOcean offers several key benefits, including competitive pricing at $1.99/GPU per hour, a simple setup process, and enterprise-grade SLAs.
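At the quoted $1.99/GPU per hour rate, the cost trade-off between the single-GPU and eight-GPU configurations is simple arithmetic. The sketch below is illustrative only; the hourly rate and the 1-GPU/8-GPU sizes come from the announcement, while the helper function name and the example durations are assumptions for demonstration.

```python
# Illustrative cost estimate for DigitalOcean GPU Droplets at the
# announced $1.99/GPU per hour rate. The rate and the single-/eight-GPU
# configurations are from the press release; the function itself is a
# hypothetical helper, not a DigitalOcean API.

HOURLY_RATE_PER_GPU = 1.99  # USD per GPU per hour, as announced

def droplet_cost(gpus: int, hours: float) -> float:
    """Return the total on-demand cost in USD for a GPU Droplet run."""
    return round(gpus * HOURLY_RATE_PER_GPU * hours, 2)

# A single-GPU Droplet for one 24-hour day:
print(droplet_cost(1, 24))   # 47.76
# An eight-GPU Droplet for a 720-hour (30-day) month:
print(droplet_cost(8, 720))  # 11462.4
```

This kind of back-of-the-envelope estimate is what the single-GPU option enables: short fine-tuning or inference experiments can run for tens of dollars before committing to an eight-GPU configuration.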
While other cloud providers require multiple steps and deep technical knowledge to configure security, storage, and network requirements, DigitalOcean's GPU Droplets can be set up with just a few clicks. In addition to these new GPUs, customers will also have access to the AMD Developer Cloud, a new platform powered by DigitalOcean that is purpose-built for rapid, high-performance AI development. Customers get a fully managed environment with instant access to AMD Instinct MI300X GPUs, with zero hardware investment or local setup required. Whether fine-tuning LLMs, benchmarking inference performance, or building a scalable inference stack, the AMD Developer Cloud provides the tools and flexibility to get started instantly and grow without limits.

"DigitalOcean's collaboration with AMD is another proof point to make AI easily accessible to our customers," said Bratin Saha, Chief Product & Technology Officer at DigitalOcean. "With access to AMD GPUs, DigitalOcean customers have an extensive portfolio of GPUs with the flexibility of the computing configuration that best suits their requirements."

"At AMD, we are proud to work with DigitalOcean to provide developers with cutting-edge solutions for developer enablement and demanding workloads that require large amounts of memory," said Negin Oliver, corporate vice president of business development, Data Center GPU Business, at AMD. "Together, AMD and DigitalOcean are committed to providing the critical innovative technologies required to support the evolving needs of growing tech businesses."

To access AMD Instinct GPUs with DigitalOcean, visit the DigitalOcean website. DigitalOcean is the simplest scalable cloud platform that democratizes cloud and AI for digital native enterprises around the world. Our mission is to simplify cloud computing and AI to allow builders to spend more time creating software that changes the world.
More than 600,000 customers trust DigitalOcean to deliver the cloud, AI, and ML infrastructure they need to build and scale their organizations. To learn more about DigitalOcean, visit
Yahoo
14-05-2025
- Business
- Yahoo
TensorWave Secures $100 Million Series A Funding Co-Led by Magnetar and AMD Ventures
Funding Fuels Rapid Deployment of Massive AMD Instinct MI325X GPU Training Cluster and Supports Scaling for Surging AI Infrastructure Needs

LAS VEGAS, May 14, 2025--(BUSINESS WIRE)--TensorWave, the emerging leader in AMD-powered AI infrastructure solutions, today announced it has raised $100 million in Series A funding. Magnetar and AMD Ventures led the round, with continued support from Maverick Silicon, Nexus Venture Partners, and new investor Prosperity7. This funding builds on the company's earlier SAFE round and positions TensorWave to capitalize on the growing demand for next-gen AI compute infrastructure.

The investment aligns with TensorWave's deployment of over 8,000 AMD Instinct™ MI325X GPUs for a dedicated training cluster, establishing the company as a key player in the AI infrastructure ecosystem. TensorWave is on track to close the year with a revenue run rate exceeding $100 million, a 20x year-over-year increase.

"This $100M funding propels TensorWave's mission to democratize access to cutting-edge AI compute," said Darrick Horton, CEO of TensorWave. "Our 8,192 Instinct MI325X GPU cluster marks just the beginning as we establish ourselves as the emerging AMD-powered leader in the rapidly expanding AI infrastructure market."

The new capital will fuel TensorWave's operational growth, team expansion, and the accelerated deployment of its Instinct MI325X-powered training cluster. This growth comes at a pivotal moment, as demand for AI computing resources continues to outstrip supply and organizations seek alternatives to limited infrastructure options.

"The $100 million we've secured will transform how enterprises access AI computing resources," said Piotr Tomasik, President of TensorWave. "Through careful cultivation of strategic partnerships and investor relationships, we've positioned TensorWave to solve the critical infrastructure bottleneck facing AI adoption. Our Instinct MI325X cluster deployment isn't just about adding capacity; it's about creating an entirely new category of enterprise-ready AI infrastructure that delivers both the memory headroom and performance reliability that next-generation models demand."

"Our focus is to continue to expand the ecosystem and support developers with the tools, infrastructure, and performance they need to create, scale, and ship production-ready AI," said Jeff Tatarchuk, TensorWave's Chief Growth Officer.

AMD Ventures' strategic investment in TensorWave reinforces AMD's commitment to expanding its footprint in the AI infrastructure space and ensures its latest technologies are available in the cloud and at scale for leading AI companies and enterprises.

"TensorWave is a key player in the growing AMD AI ecosystem," said Mathew Hein, SVP Chief Strategy Officer & Corporate Development, AMD. "Their expanding portfolio of AI and enterprise customers, coupled with their expertise in deploying AMD compute infrastructure, is driving demand for access to their cutting-edge AI compute services. We're excited to support their next phase of growth."

"We continue to be highly impressed by what the TensorWave team has built in just a short period of time. TensorWave is not just bringing more compute but rather an entirely new class of compute to a capacity-constrained market. We think this will be highly beneficial to the AI infrastructure ecosystem writ large, and we're thrilled to continue our support of the company," said Kenneth Safar, Managing Director at Maverick Silicon.

The funding comes at a time when the AI infrastructure market is experiencing unprecedented growth, with recent industry reports projecting it to exceed $400 billion by 2027. TensorWave's focus on technology-powered solutions and continued partnerships with firms like TECFusions positions the company to capture a significant portion of this expanding market.
For additional information please visit:

About TensorWave
TensorWave is the AI and HPC cloud purpose-built for performance. Powered exclusively by AMD Instinct™ Series GPUs, we deliver high-bandwidth, memory-optimized infrastructure that scales with your most demanding models, training or inference.

Media Contact: press@


Yahoo
01-05-2025
- Business
- Yahoo
From Scalable Solutions to Full-Stack AI Infrastructure, GIGABYTE to Present End-to-End AI Portfolio at COMPUTEX 2025
TAIPEI, May 01, 2025--(BUSINESS WIRE)--GIGABYTE Technology, a global leader in computing innovation, will return to COMPUTEX 2025 from May 20 to 23 under the theme "Omnipresence of Computing: AI Forward," demonstrating how GIGABYTE's complete spectrum of solutions spanning the AI lifecycle, from data center training to edge deployment and end-user applications, reshapes infrastructure to meet next-gen AI demands. As generative AI continues to evolve, so do the demands for handling massive token volumes, real-time data streaming, and high-throughput compute environments. GIGABYTE's end-to-end portfolio, ranging from rack-scale infrastructure to servers, cooling systems, embedded platforms, and personal computing, forms the foundation to accelerate AI breakthroughs across industries.

Scalable AI Infrastructure Starts Here: GIGAPOD with GPM Integration

At the heart of GIGABYTE's exhibit is the enhanced GIGAPOD, a scalable GPU cluster designed for high-density data centers and large AI model training. Built for high-performance AI workloads, GIGAPOD supports the latest accelerator platforms, including AMD Instinct™ MI325X and NVIDIA HGX™ H200. It is now integrated with GPM (GIGABYTE POD Manager), GIGABYTE's proprietary infrastructure and workflow management platform, which enhances operational efficiency, streamlines management, and optimizes resource utilization across large-scale AI environments.

This year will also see the debut of the GIGAPOD Direct Liquid Cooling (DLC) variant, incorporating GIGABYTE's G4L3 series servers and engineered for next-gen chips with TDPs exceeding 1,000W. The DLC solution is demonstrated in a 4+1 rack configuration in partnership with Kenmec, Vertiv, and nVent, featuring integrated cooling, power distribution, and network architecture. To help customers deploy faster and smarter, GIGABYTE offers end-to-end consulting services, including planning, deployment, and system validation, accelerating the path from concept to operation.
Built for Deployment: From Super Compute Module to Open Compute and Custom Workloads

As AI adoption shifts from training to deployment, GIGABYTE's flexible system design and architecture ensure seamless transition and expansion. GIGABYTE presents the cutting-edge NVIDIA GB300 NVL72, a fully liquid-cooled, rack-scale design that unifies 72 NVIDIA Blackwell Ultra GPUs and 36 Arm®-based NVIDIA Grace™ CPUs in a single platform optimized for test-time scaling inference. Also shown at the booth are two OCP-compliant server racks: an 8OU AI system with NVIDIA HGX™ B200 integrated with Intel® Xeon® processors, and an ORV3 CPU-based storage rack with a JBOD design to maximize density and throughput.

GIGABYTE also exhibits modular and diverse servers, from high-performance GPU to storage-optimized, to meet different AI workloads:

- Accelerated Compute: Air- and liquid-cooled servers for the latest AMD Instinct™ MI325X, Intel® Gaudi® 3, and NVIDIA HGX™ B300 GPU platforms, optimized for GPU-to-GPU interconnects
- CXL Technology: CXL-enabled systems unlock shared memory pools across CPUs for real-time AI inference
- High-density Compute & Storage: Multi-node servers packed with high-core-count CPUs and NVMe/E1.S storage, developed in collaboration with Solidigm, ADATA, Kioxia, and Seagate
- Cloud & Edge Platforms: Blade and node solutions optimized for power, thermal efficiency, and workload diversity, ideal for hyperscalers and managed service providers

Bringing AI to the Edge, and to Everyone

Extending AI to real-world applications, GIGABYTE introduces a new generation of embedded systems and mini PCs that bring compute closer to where data is generated.

- Jetson-Powered Embedded Systems: Featuring NVIDIA® Jetson Orin™, these rugged platforms power real-time edge AI in industrial automation, robotics, and machine vision.
- BRIX Mini PCs: Compact yet powerful, the latest BRIX systems include onboard NPUs and support Microsoft Copilot+ and Adobe AI tools, perfect for lightweight AI inference at the edge.

Expanding its leadership from cloud to edge, GIGABYTE delivers powerful on-premises AI acceleration with its advanced Z890/X870 motherboards and cutting-edge GeForce RTX 50 and Radeon RX 9000 Series graphics cards. The innovative AI TOP local AI computing solution simplifies complex AI workflows through memory offloading and multi-node clustering capabilities. This AI innovation extends throughout the consumer lineup, from Microsoft-certified Copilot+ AI PCs and gaming powerhouses to high-refresh OLED monitors. On laptops, the exclusive "Press and Speak" GIMATE AI agent enables intuitive hardware control, enhancing both productivity and everyday AI experiences.

GIGABYTE invites everyone to explore the AI Forward era, defined by scalable architecture, precision engineering, and a commitment to accelerating progress.

Contacts: Michael Pao brand@