Latest news with #SuperNODE
Yahoo
4 days ago
- Business
- Yahoo
GigaIO Secures $21M to Scale AI Inferencing Infrastructure Solutions in Series B First Close
Investment accelerates production of SuperNODE and Gryf, empowering every accelerator to meet the surging demand for AI inferencing.

CARLSBAD, Calif., July 17, 2025--(BUSINESS WIRE)--GigaIO, a leading provider of scalable infrastructure specifically designed for AI inferencing, today announced it has raised $21 million in the first tranche of its Series B financing. The round was led by Impact Venture Capital, with participation from CerraCap Ventures, G Vision Capital, Mark IV Capital, and SourceCode Cerberus. The new funding will enable GigaIO to expand production of its flagship products SuperNODE™ and Gryf™ and accelerate innovation with a clear focus on AI inferencing. Funding will be used to:
- Ramp up production of SuperNODE, the most cost-effective and energy-efficient infrastructure designed for AI inferencing at scale.
- Accelerate the deployment of Gryf, the world's first carry-on suitcase-sized AI inferencing supercomputer, which brings datacenter-class computing power directly to the edge.
- Invest in new product development to broaden GigaIO's technology offerings.
- Expand the sales and marketing teams to serve the increasing demand for vendor-agnostic AI infrastructure.

"We are thrilled to achieve this milestone with the support of Impact Venture Capital and our investors," said Alan Benjamin, CEO of GigaIO. "With SuperNODE and Gryf, we have created a new paradigm for cost-effective and power-efficient AI inferencing infrastructure. Our vendor-agnostic platform uniquely frees customers from dependency on single-source AI chips and architectures. Whether it's GPUs from NVIDIA and AMD or new AI chips from innovators like Tenstorrent and d-Matrix, GigaIO enables customers to leverage the best technologies without vendor lock-in. This funding gives us the fuel to move faster and meet the surging demand."
GigaIO's patented FabreX™ AI memory fabric architecture enables the scale-up and dynamic composition of compute, GPU, storage, and networking resources — unlocking performance and cost efficiencies that traditional architectures are unable to deliver. As AI models grow larger and more complex, FabreX provides the flexibility needed to scale infrastructure on demand, at the rack level and beyond.

"GigaIO's innovative infrastructure is precisely what businesses need to harness the transformative power of AI," said Jack Crawford, Founding General Partner at Impact Venture Capital. "As enterprises and cloud providers race to deploy AI at scale, GigaIO delivers a uniquely flexible, cost-effective, and energy-efficient solution that accelerates time to insight. We believe GigaIO has assembled a world-class team and is poised to become a foundational pillar of tomorrow's AI-powered infrastructure, and we're proud to back their vision."

GigaIO plans to complete a second close of the Series B in the coming months, citing continued strong interest from strategic and financial investors. Rockefeller Capital Management's Technology Investment Banking division served as the exclusive advisor to GigaIO in the transaction.

About GigaIO
GigaIO redefines scalable AI infrastructure, seamlessly bridging from edge to core with a dynamic, open platform built for every accelerator. Reduce power draw with GigaIO's SuperNODE, the world's most powerful and energy-efficient scale-up AI computing platform. Run AI jobs anywhere with Gryf, the world's first suitcase-sized AI supercomputer that brings datacenter-class computing power directly to the edge. Both are easy to deploy and manage, utilizing GigaIO's patented AI fabric that provides ultra-low latency and direct memory-to-memory communication between GPUs for near-perfect scaling for AI workloads. Visit or follow on Twitter (X) and LinkedIn.
Contacts: Shannon Biggs, 760-487-8395, shannon@
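The dynamic-composition idea behind FabreX described above can be illustrated with a small sketch. All names here are hypothetical (GigaIO's management API is not described in this announcement); the point is only the pattern: accelerators live in a shared fabric pool, are attached to a host on demand, and flow back when the job finishes.

```python
# Hypothetical sketch of disaggregated-resource composition in the spirit of
# an AI memory fabric. Nothing here is GigaIO's real API; it only models the
# idea that devices are pooled rather than hard-wired to one server.
from dataclasses import dataclass, field

@dataclass
class FabricPool:
    devices: list                                   # free accelerators
    attached: dict = field(default_factory=dict)    # host -> granted devices

    def compose(self, host, count):
        """Attach `count` free devices to `host`; return the device names."""
        if count > len(self.devices):
            raise RuntimeError("not enough free devices in the fabric pool")
        grant = [self.devices.pop() for _ in range(count)]
        self.attached.setdefault(host, []).extend(grant)
        return grant

    def release(self, host):
        """Return all of `host`'s devices to the free pool."""
        self.devices.extend(self.attached.pop(host, []))

pool = FabricPool(devices=[f"gpu{i}" for i in range(8)])
granted = pool.compose("node-a", 4)   # node-a temporarily "owns" 4 GPUs
assert len(pool.devices) == 4         # fabric has 4 free devices left
pool.release("node-a")                # capacity flows back for the next job
assert len(pool.devices) == 8
```

The contrast with a traditional server is that `compose` and `release` happen at job time, not at purchase time, which is where the claimed utilization and cost efficiencies come from.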




Business Wire
02-05-2025
- Business
- Business Wire
GigaIO to Showcase Next-Generation AI Fabric Technology at ISC 2025
CARLSBAD, Calif.--(BUSINESS WIRE)--GigaIO, a pioneer in scalable edge-to-core AI platforms for all accelerators that are easy to deploy and manage, will showcase its latest innovations at ISC High Performance 2025, taking place June 10-13 in Hamburg, Germany. Visitors to stand H22 can see how GigaIO's revolutionary AI fabric technology, which seamlessly bridges from edge to core with a dynamic, open platform built for any accelerator, powers its two flagship products, SuperNODE and Gryf.

SuperNODE is the world's most powerful and energy-efficient scale-up AI computing platform, and Gryf is the first suitcase-sized AI supercomputer that brings datacenter-class computing power directly to the edge. GigaIO's architecture, powered by its AI fabric, effortlessly integrates GPUs and inference accelerators from NVIDIA, AMD, d-Matrix, and more, enabling organizations to slash power and cooling requirements by up to 30% without compromising performance.

GigaIO's AI fabric implements a native PCIe Gen5 architecture that enables direct memory-semantic communication between distributed computing resources, eliminating protocol translation overhead while maintaining sub-microsecond latencies for GPU-to-GPU transfers. This enables AI workloads to achieve near-linear scaling across pooled accelerators that appear as if locally attached to the host.

GigaIO's paper, 'Rail Optimized PCIe Topologies for LLMs,' was selected for presentation at ISC 2025 on Thursday, 12 June 2025, from 9:00am to 9:25am in Hall F (2nd floor). This research explores optimized network architectures for large language model training and inference.
Scaling LLMs efficiently requires innovative approaches to GPU interconnects, and GigaIO's rail-optimized, PCIe-based AI fabric topologies offer up to 3.7x improved collective performance with an accelerator-agnostic design, ensuring adaptability across diverse AI workloads.

'ISC 2025 arrives at a critical juncture, as AI workloads demand unprecedented hardware resources, making optimized infrastructure essential for organizations to achieve their performance targets,' said Alan Benjamin, CEO of GigaIO. 'Our expanded conference participation will demonstrate how our PCIe-based fabric technology delivers superior performance for LLM training and inference, while dramatically reducing power consumption and total cost of ownership.'

Stop by stand H22 at ISC 2025 or schedule a meeting during the event with the GigaIO team.
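The "near-linear scaling" claim can be made concrete with simple arithmetic. The numbers below are illustrative only, not GigaIO benchmark results: perfect linear scaling on n accelerators would deliver n times single-device throughput, and efficiency is the fraction of that ideal actually measured.

```python
# Illustrative scaling-efficiency arithmetic (made-up numbers, not GigaIO
# measurements). Ideal linear scaling on n devices = n * single-device
# throughput; efficiency = measured / ideal.
def scaling_efficiency(single_tput: float, n_devices: int,
                       measured_tput: float) -> float:
    """Fraction of ideal linear scaling actually achieved."""
    return measured_tput / (single_tput * n_devices)

# e.g. 32 pooled accelerators at 1,000 units/s each, 30,400 units/s measured:
eff = scaling_efficiency(1000, 32, 30400)
print(f"{eff:.1%}")  # 95.0%
```

An efficiency near 1.0 is what "near-linear" means in practice; fabric latency and collective-communication overhead are what pull real systems below it.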


Business Wire
01-05-2025
- Business
- Business Wire
GigaIO and d-Matrix Advance Strategic Collaboration to Build World's Most Efficient Scalable Inference Solution for Enterprise AI Deployment
CARLSBAD, Calif.--(BUSINESS WIRE)--GigaIO, a pioneer in scalable edge-to-core AI platforms for all accelerators that are easy to deploy and manage, today announced the next phase of its strategic partnership with d-Matrix to deliver the world's most efficient scalable inference solution for enterprises deploying AI at scale. Integrating d-Matrix's revolutionary Corsair inference platform into GigaIO's SuperNODE architecture creates an unparalleled solution that eliminates the complexity and performance bottlenecks traditionally associated with large-scale AI inference deployment.

This joint solution addresses the growing demand from enterprises for high-performance, energy-efficient AI inference capabilities that can scale seamlessly without the typical limitations of multi-node configurations. Combining GigaIO's industry-leading scale-up AI architecture with d-Matrix's purpose-built inference acceleration technology produces a solution that delivers unprecedented token generation speeds and memory bandwidth, while significantly reducing power consumption and total cost of ownership.

Revolutionary Performance Through Technological Integration
The new GigaIO SuperNODE platform, capable of supporting dozens of d-Matrix Corsair accelerators in a single node, is now the industry's most scalable AI inference platform. This integration enables enterprises to deploy ultra-low-latency batched inference workloads at scale without the complexity of traditional distributed computing approaches.
'By combining d-Matrix's Corsair PCIe cards with the industry-leading scale-up architecture of GigaIO's SuperNODE, we've created a transformative solution for enterprises deploying next-generation AI inference at scale,' said Alan Benjamin, CEO of GigaIO. 'Our single-node server eliminates complex multi-node configurations and simplifies deployment, enabling enterprises to quickly adapt to evolving AI workloads while significantly improving their TCO and operational efficiency.'

The combined solution delivers exceptional performance metrics that redefine what's possible for enterprise AI inference:
- Processing capability of 30,000 tokens per second at just 2 milliseconds per token for models like Llama3 70B
- Up to 10x faster interactive speed compared with GPU-based solutions
- 3x better performance at a similar total cost of ownership
- 3x greater energy efficiency for more sustainable AI deployments

'When we started d-Matrix in 2019, we looked at the landscape of AI compute and made a bet that inference would be the largest computing opportunity of our lifetime,' said Sid Sheth, founder and CEO of d-Matrix. 'Our collaboration with GigaIO brings together our ultra-efficient in-memory compute architecture with the industry's most powerful scale-up platform, delivering a solution that makes enterprise-scale generative AI commercially viable and accessible.'

This integration leverages GigaIO's cutting-edge PCIe Gen 5-based AI fabric, which delivers near-zero-latency communication between multiple d-Matrix Corsair accelerators. This architectural approach eliminates the traditional bottlenecks associated with distributed inference workloads while maximizing the efficiency of d-Matrix's Digital In-Memory Compute (DIMC) architecture, which delivers an industry-leading 150 TB/s memory bandwidth.
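The headline metrics are worth unpacking: 2 milliseconds per token describes per-stream latency, while 30,000 tokens per second is aggregate throughput. Read together (this is our back-of-envelope interpretation, not a figure published by either company), they imply roughly 60 concurrent inference streams:

```python
# Back-of-envelope reading of the stated metrics (our interpretation, not a
# figure published by GigaIO or d-Matrix): per-stream generation rate follows
# from the per-token latency, and dividing aggregate throughput by it gives
# the implied number of concurrent streams in the batch.
ms_per_token = 2
aggregate_tokens_per_s = 30_000

per_stream_tokens_per_s = 1000 / ms_per_token                      # 500.0
implied_streams = aggregate_tokens_per_s / per_stream_tokens_per_s # 60.0

print(per_stream_tokens_per_s, implied_streams)  # 500.0 60.0
```

This is the usual shape of batched inference: each user still sees low per-token latency while the hardware amortizes cost across many simultaneous streams.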
Industry Recognition and Performance Validation
This partnership builds on GigaIO's recent achievement of recording the highest tokens per second for a single node in the MLPerf Inference: Datacenter benchmark database, further validating the company's leadership in scale-up AI infrastructure.

'The market has been demanding more efficient, scalable solutions for AI inference workloads that don't compromise performance,' added Benjamin. 'Our partnership with d-Matrix brings together the tremendous engineering innovation of both companies, resulting in a solution that redefines what's possible for enterprise AI deployment.'

Those interested in early access to SuperNODEs running Corsair accelerators can indicate interest here.

About d-Matrix
d-Matrix is transforming the economics of large-scale inference with the world's most efficient AI computing platform for inference in data centers. The company's Corsair platform leverages innovative Digital In-Memory Compute (DIMC) architecture to accelerate AI inference workloads with industry-leading real-time performance, energy efficiency, and cost savings compared to GPUs and other alternatives.
d-Matrix delivers ultra-low latency without compromising throughput, unlocking the next wave of Generative AI use cases while enabling commercially viable AI computing that scales with model size to empower companies of all sizes and budgets. For more information, visit