Latest news with #AIinference


Forbes
2 days ago
- Business
- Forbes
Sandisk And SK Hynix Agreement Enables Fast NAND Flash To DRAM In HBM Packages
Several NAND flash manufacturers were discussing higher bandwidth flash technologies at the 2025 FMS in Santa Clara, CA, but the announcement by Sandisk and SK hynix that they would work together to create standards enabling high bandwidth flash in an HBM module stood out: the technology could enable the next generation of AI training and inference at lower cost and with reduced energy consumption. Sandisk announced it has signed a Memorandum of Understanding with SK hynix to drive standardization of High Bandwidth Flash (HBF) memory technology. HBF is a NAND flash-based solution contained in an HBM package. HBM modules are widely used to support the immediate memory needs of GPUs for AI training and inference, and have traditionally used dynamic random-access memory (DRAM), providing very fast access to data.

HBF is a new technology designed to deliver breakthrough memory capacity and performance for the next generation of AI inference. It would supplement traditional DRAM-based HBM, providing a lower cost-per-capacity option alongside HBM DRAM. Through this collaboration, the companies aim to standardize the specification, define technology requirements, and explore the creation of a technology ecosystem for High Bandwidth Flash. As AI models grow larger and more complex, inference workloads demand both massive bandwidth and significantly greater memory capacity. Designed for AI inference workloads in large data centers, small enterprises, and edge applications, HBF is targeted to offer bandwidth comparable to High Bandwidth Memory (HBM) while delivering 8-16x the capacity of HBM at a similar cost.

In addition to providing more memory in HBM-like packages, NAND flash is non-volatile while DRAM is volatile: DRAM requires regular refreshes of the data it contains, which consumes energy. By substituting some non-volatile memory for what would otherwise be volatile memory in GPUs and AI applications, it may be possible to reduce the energy requirements of AI applications in data centers. This could open up more opportunities for AI applications in energy-constrained data centers, thus democratizing the development of AI applications, and could also help reduce the projected power requirements for hyperscale and other data centers supporting AI development.

Sandisk also announced the formation of a Technical Advisory Board to guide the development and strategy of its HBF memory technology. The board, consisting of industry experts and senior technical leaders from both within and outside of Sandisk, will provide strategic guidance, technical insight, and market perspective, and will help shape a standards-driven ecosystem. Enabled by Sandisk's BiCS technology and proprietary CBA wafer bonding, and developed over the past year with input from leading AI industry players, Sandisk's HBF technology was awarded 'Best of Show, Most Innovative Technology' at FMS: the Future of Memory and Storage 2025. Sandisk targets first samples of its HBF memory in the second half of calendar 2026 and expects samples of the first AI-inference devices with HBF in early 2027.
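To put the claimed 8-16x capacity multiple in perspective, the quick arithmetic below sketches what it could mean at the package level. The per-stack HBM capacity and stack count are illustrative assumptions, not figures from the announcement.

```python
# Back-of-the-envelope comparison of per-GPU memory capacity with
# DRAM-based HBM stacks vs. the 8-16x figure quoted for HBF.
# Assumptions (illustrative only): 36 GB per HBM stack and 8 stacks
# per package are placeholder values, not from the announcement.

HBM_GB_PER_STACK = 36          # assumed DRAM-based HBM stack capacity
STACKS_PER_GPU = 8             # assumed number of stacks on the package
HBF_MULTIPLIER_RANGE = (8, 16) # capacity multiple claimed for HBF

hbm_total = HBM_GB_PER_STACK * STACKS_PER_GPU
hbf_low = hbm_total * HBF_MULTIPLIER_RANGE[0]
hbf_high = hbm_total * HBF_MULTIPLIER_RANGE[1]

print(f"HBM-only package: {hbm_total} GB")
print(f"Same package with HBF: {hbf_low}-{hbf_high} GB")
# -> HBM-only package: 288 GB
# -> Same package with HBF: 2304-4608 GB
```

Under these placeholder numbers, a package that holds hundreds of gigabytes of DRAM-based HBM could hold terabytes of HBF, which is why the technology is pitched at large-model inference.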
Yahoo
3 days ago
- Business
- Yahoo
NVIDIA, AMD, and Intel Compete for Dominance with Diverse Hardware and Strategic Partnerships
The AI Inference Market Companies Quadrant provides an in-depth analysis of the global AI inference landscape, highlighting leading players like NVIDIA, AMD, and Intel. This study examines key market trends, technological innovations, and emerging applications across sectors such as healthcare, finance, and automotive. The quadrant evaluates over 100 companies based on criteria like revenue, growth strategies, and product footprint, focusing on compute, memory, network, deployment, application, and end-user categories. With a shift towards edge computing and cloud-based solutions, the market is driven by the need for low-latency processing and efficient AI model performance. Despite challenges such as high power demands and data security concerns, the AI inference market continues to grow, fueled by advancements in GPU architectures and enhanced NLP tools.

Dublin, Aug. 05, 2025 (GLOBE NEWSWIRE) -- The "AI inference - Company Evaluation Report, 2025" report has been added to the publisher's offering. The AI Inference Market Companies Quadrant is a comprehensive industry analysis that provides valuable insights into the global market for AI inference. This quadrant offers a detailed evaluation of key market players, technological advancements, product innovations, and emerging trends shaping the industry. The publisher's 360 Quadrants evaluated over 100 companies, of which the top 14 AI inference companies were categorized and recognized in the quadrant.

AI inference involves deploying trained artificial intelligence models to interpret new data and generate meaningful outputs such as predictions, classifications, or recommendations. It serves as a foundational component in various real-world applications, including speech and image recognition, fraud detection, personalized content delivery, and autonomous systems. With growing adoption of AI technologies to enhance operational workflows, improve customer engagement, and foster innovation, the emphasis on efficient and scalable inference has grown. The inference process is supported by cutting-edge hardware accelerators, comprehensive AI frameworks, and flexible deployment models ranging from edge to cloud, ensuring minimal latency, high scalability, and cost-effectiveness across diverse industry verticals.

The AI inference market is experiencing significant momentum due to the widespread implementation of AI across sectors such as healthcare, finance, automotive, and retail. The rise of edge computing is a major catalyst, enabling inference to be performed near the data source for faster decision-making and reduced network dependence. Furthermore, the growing network of IoT and connected devices has amplified the demand for robust inference capabilities to manage real-time data streams. As AI models become more complex, advancements in model optimization techniques like compression and quantization are ensuring efficient performance without escalating costs.

Key growth drivers include the increasing need for low-latency processing on edge devices, cloud-based platforms offering tailored AI inference solutions, and improvements in GPU architectures designed for inference workloads. Conversely, the market faces constraints such as the high power requirements of AI chips and a lack of skilled professionals capable of managing AI infrastructure. Nonetheless, emerging opportunities lie in expanding AI applications in diagnostics and healthcare, enhanced natural language processing (NLP) tools to boost customer experience, and the escalating need for real-time analytics.
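The quantization technique mentioned above can be made concrete in a few lines. The sketch below applies generic symmetric per-tensor int8 post-training quantization to a weight matrix; it illustrates the textbook idea only and is not drawn from the report or any vendor's pipeline.

```python
import numpy as np

# Minimal sketch of symmetric int8 post-training quantization:
# weights are mapped to [-127, 127] with a single per-tensor scale,
# cutting memory per parameter from 4 bytes (float32) to 1 byte.

def quantize_int8(w: np.ndarray):
    scale = np.abs(w).max() / 127.0          # per-tensor scale factor
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

w = np.random.randn(512, 512).astype(np.float32)  # toy weight matrix
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)

print(f"storage: {w.nbytes} -> {q.nbytes} bytes (4x smaller)")
print(f"mean abs rounding error: {np.abs(w - w_hat).mean():.5f}")
```

The 4x storage saving (and the corresponding bandwidth saving) is the mechanism behind the "efficient performance without escalating costs" claim: smaller weights move through memory faster, which matters most for inference workloads.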
However, concerns around data security and supply chain disruptions remain persistent challenges for companies operating in the AI inference market.

The 360 Quadrant maps the AI Inference Market companies based on criteria such as revenue, geographic presence, growth strategies, investments, and sales strategies to assess their market presence. These companies are actively investing in research and development, forming strategic partnerships, and engaging in collaborative initiatives to drive innovation, expand their global footprint, and maintain a competitive edge in this rapidly evolving market.

Top Three Companies Analysis

NVIDIA Corporation: NVIDIA leads the AI inference market by consistently innovating its GPU technology, expanding its product portfolio, and investing in its software ecosystem. Key innovations include the development of new architectures like the Hopper GPU, which enhances AI workloads and large-scale computing. NVIDIA's strategic partnerships with cloud providers and automotive companies drive the adoption of its AI solutions across industries such as autonomous vehicles, healthcare, and edge computing. This positions NVIDIA strongly in terms of Company Market Share and Company Product Portfolio.

Advanced Micro Devices (AMD): AMD is increasing its market share in AI inference through high-performance GPUs and CPUs, including the Radeon Instinct GPUs and EPYC processors, which cater to AI and machine learning applications. The integration of Xilinx's FPGA technology into AMD's product line has further diversified its offerings. AMD's focus on partnerships with cloud providers and enterprise customers enhances its Company Positioning and expands its market share across various regions.

Intel Corporation: Intel strengthens its market position by developing AI-specific hardware, such as the Habana Labs Gaudi processors and edge AI capabilities through Movidius VPUs. Intel's investment in the oneAPI software platform unifies AI development, promoting easier adoption of its hardware. By fostering strategic partnerships and expanding its presence across different industries, Intel enhances its Company Analysis and Company Ranking.
Intel's diversified hardware solutions cater to data centers and autonomous applications, making it a key player in the AI inference market.

Key Topics Covered:

1 Introduction
1.1 Market Definition
1.2 Limitations
1.3 Stakeholders
2 Executive Summary
3 Market Overview
3.1 Introduction
3.2 Market Dynamics
3.2.1 Drivers
3.2.1.1 Growing Demand for Real-Time Processing on Edge Devices
3.2.1.2 Growth of Advanced Cloud Platforms Offering Specialized AI Inference Services
3.2.1.3 Enhanced GPU Capabilities for Inference Tasks
3.2.2 Restraints
3.2.2.1 Computational Workload and High Power Consumption
3.2.2.2 Shortage of Skilled Workforce
3.2.3 Opportunities
3.2.3.1 Growth of AI-Enabled Healthcare and Diagnostics
3.2.3.2 Advancements in Natural Language Processing for Improved Customer Experience
3.2.3.3 Increasing Demand for Real-Time Data Processing and Analytics
3.2.4 Challenges
3.2.4.1 Data Privacy Concerns
3.2.4.2 Supply Chain Disruptions
3.3 Trends/Disruptions Impacting Customer Business
3.4 Value Chain Analysis
3.5 Ecosystem Analysis
3.6 Technology Analysis
3.6.1 Key Technologies
3.6.1.1 GenAI Workload
3.6.1.2 High Bandwidth Memory (HBM)
3.6.1.3 High-Performance Computing (HPC)
3.6.2 Complementary Technologies
3.6.2.1 High-Speed Interconnects
3.6.2.2 Edge Computing Infrastructure
3.6.2.3 Data Center Power Management and Cooling System
3.6.3 Adjacent Technologies
3.6.3.1 Cloud AI Services
3.6.3.2 AI Development Frameworks
3.7 Patent Analysis
3.8 Key Conferences and Events, 2025-2026
3.9 Porter's Five Forces Analysis
3.9.1 Threat of New Entrants
3.9.2 Threat of Substitutes
3.9.3 Bargaining Power of Suppliers
3.9.4 Bargaining Power of Buyers
3.9.5 Intensity of Competitive Rivalry
4 Competitive Landscape
4.1 Introduction
4.2 Key Player Strategies/Right to Win, 2020-2024
4.3 Revenue Analysis, 2022-2024
4.4 Market Share Analysis, 2024
4.5 Company Valuation and Financial Metrics
4.6 Brand/Product Comparison
4.7 Company Evaluation Matrix: Key Players, 2024
4.7.1 Stars
4.7.2 Emerging Leaders
4.7.3 Pervasive Players
4.7.4 Participants
4.7.5 Company Footprint: Key Players, 2024
4.7.5.1 Company Footprint
4.7.5.2 Compute Footprint
4.7.5.3 Memory Footprint
4.7.5.4 Network Footprint
4.7.5.5 Deployment Footprint
4.7.5.6 Application Footprint
4.7.5.7 End-User Footprint
4.7.5.8 Region Footprint
4.8 Company Evaluation Matrix: Startups/SMEs, 2024
4.8.1 Progressive Companies
4.8.2 Responsive Companies
4.8.3 Dynamic Companies
4.8.4 Starting Blocks
4.8.5 Competitive Benchmarking: Startups/SMEs, 2024
4.8.5.1 Detailed List of Key Startups/SMEs
4.8.5.2 Competitive Benchmarking of Key Startups/SMEs
4.9 Competitive Scenario
4.9.1 Product Launches
4.9.2 Deals
5 Company Profiles: NVIDIA Corporation; Advanced Micro Devices, Inc.; Intel Corporation; SK hynix Inc.; Samsung; Micron Technology, Inc.; Apple Inc.; Qualcomm Technologies, Inc.; Huawei Technologies Co., Ltd.; Google; Amazon Web Services, Inc.; Tesla; Microsoft; Meta; T-Head; Graphcore; Cerebras; Mythic; Blaize; Groq, Inc.; Hailo Technologies Ltd.; Sima Technologies, Inc.; Kneron, Inc.; Tenstorrent; SambaNova Systems, Inc.; Sapeon Inc.; Rebellions Inc.; Shanghai Biren Technology Co., Ltd.

For more information about this report, visit the publisher's website. The publisher is the world's leading source for international market research reports and market data, providing the latest data on international and regional markets, key industries, the top companies, new products and the latest trends.
CONTACT: Laura Wood, Senior Press Manager, press@
For E.S.T Office Hours Call 1-917-300-0470
For U.S./CAN Toll Free Call 1-800-526-8630
For GMT Office Hours Call +353-1-416-8900


Forbes
24-07-2025
- Business
- Forbes
AI Inferencing Is Growing In Importance—And RAG Is Fueling Its Rise
As the AI infrastructure market evolves, we've been hearing a lot more about AI inference—the last step in the AI technology infrastructure chain, delivering fine-tuned answers to the prompts given to an AI model, whether by a human or by agentic AI. Inferencing will grow in importance in delivering specialized and fine-tuned AI intelligence, and it requires a different technology infrastructure stack from that used for training AI models. One of the building blocks is a technology called Retrieval-Augmented Generation (RAG), a way to infuse large language models with additional data. RAG can also help drive down the costs of serving up AI inference.

RAG is growing in importance as more enterprises need to collect and apply specific data to AI applications to ensure depth and accuracy. RAG-based models use specific custom data to apply up-to-date knowledge and generate more accurate, relevant, and trustworthy responses. In a new Futuriom research report, "RAGs to Riches: The New Era of AI Inferencing," analyst Craig Matsumoto dove into RAG and found it will drive a lot of technology subsegments, including vector databases, edge networking, and AI security.

Why Inferencing Is Booming

First, let's talk about why the inference market represents the next leg in AI infrastructure growth. Agentic AI is being unleashed by organizations worldwide. AI-driven agents can satisfy more complex queries that require multiple steps. But inferencing is the crucial step in customizing these agentic software models for specific use cases. Inferencing is the heart of enterprise AI: enterprises will still train specialized models, but they can't reap the benefits of AI until they become experts at inference.

AI infrastructure companies believe that the inference market could dwarf the size of the AI training market. NVIDIA's CEO Jensen Huang said during his GTC 2025 keynote that the rise of agentic AI, expected to be a cornerstone of inference, could push AI's computing requirements to grow roughly 100-fold in as little as a year. AMD expects the AI inference market to grow at 80% per year 'for the next few years,' CEO Lisa Su said recently, although she offered neither market sizes nor a precise timeframe for context. Some of these estimates are probably extreme, as it does not seem economically possible for AI capital spending to grow by an order of magnitude in a year, but there is no doubt that the center of gravity has shifted toward inference. What makes inferencing special is that it can be placed anywhere—on a chip or chips in a phone, in a manufacturing facility, or in small-to-medium-sized datacenters. It typically doesn't require the most expensive GPUs needed for LLM training.

AI is seeing wide adoption across industries, but enterprises are fine-tuning LLMs for different use cases.

What About RAG?

Inferencing brings us to RAG, which helps add relevance to LLMs and small language models for specialized applications. RAG is a less costly way to add data to AI models and a better way to accommodate dynamic, real-time data. Training is slow and expensive, whereas RAG operates on the fly during inferencing, infusing a model with information relevant to the current query. Visa, for example, has set up employees to use RAG routinely. They can query six different LLMs—popular foundation models such as ChatGPT and Claude—that sit behind Visa's firewall, accessed through one interface.
Fraud detection is one obvious use case, but employees also use the LLMs with RAG to hunt for domain-specific knowledge. RAG is also useful for updating data. One problem with AI models is that they are trained at a point in time and quickly become out-of-date. This doesn't matter if you are looking for less recent information, but it's a shortcoming for dynamic, real-time information.

Databases Need an Update

Applying RAG to AI models to get the best inferencing means that data architectures need to adapt. This explains why large data management companies, including Oracle, Snowflake, and Databricks, are racing to explain how their architectures are being adapted to serve AI—even though in many cases their databases were built for applications that now look like legacy approaches. Vector databases are an area of growing interest. Vector databases are good at storing and labeling unstructured data, such as documents and images, using vectors and semantic search. This enables enterprises to store diverse data and perform multimedia searches, and it has led major data players such as Oracle, Databricks, and Snowflake to incorporate vector support into their products.

But RAG and inferencing have an impact that stretches well beyond vector databases. According to the Futuriom RAG report, here are some of the other ways that RAG will influence the technology markets (a minimal sketch of the retrieve-then-generate pattern follows this list):

- Interest in SLMs. Foundational AI models form the basis of RAG, enabling AI applications to understand queries, process retrieved data, and generate responses. For specific industries or applications, LLMs can be stripped down and customized to SLMs, which can operate more economically. Technology companies to watch: Amazon, Anthropic, Hugging Face, OpenAI, Microsoft, and Stability AI.
- Database evolution. As mentioned above, RAG is transforming the database and data management market at high speed. In addition to Oracle, Snowflake, and Databricks, some companies to watch include DDN, LanceDB, MinIO, MongoDB, Pinecone, VAST Data, Weka, and Vectara.
- MCP. The Model Context Protocol (MCP) is accelerating the maturity of agentic AI. It's an esoteric under-the-covers protocol, but developers have leapt onto MCP to make AI handle sophisticated tasks. Key technology players: virtually every infrastructure company is employing MCP to enable agentic AI.
- Security. Security is growing in importance for RAG and also for MCP. Companies will need to allay security concerns with the proper architecture and security tools. Technology companies to watch: many security companies are focused on AI, but we have recently seen some interesting AI data security launches from companies such as Aryaka, Aviatrix, Cloudflare, Eclypsium, Fortanix, and Teleport. Large AI providers such as Amazon have also released security services for RAG applications, such as Amazon Bedrock Guardrails for sensitive data redaction and Knowledge Bases for managing RAG workflows.
- Hybrid, edge, and multicloud networking. The same security needs translate to the network. More secure application-layer networking and infrastructure will be needed for inferencing to help connect and feed RAG applications with data. Key networking companies to watch in connecting AI applications: Arista, Aviatrix, Cloudflare, Cisco, F5, Alkira, Aryaka, Aviz Networks, Hedgehog, Juniper Networks (HPE), Palo Alto Networks, and Versa Networks.
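As referenced above, here is a minimal, self-contained sketch of the retrieve-then-generate pattern. The keyword-overlap retriever stands in for a real vector-database similarity query, and the generate() stub stands in for an LLM call; both are illustrative assumptions, not any vendor's API.

```python
# Toy RAG pipeline: retrieve the most relevant documents, then pass
# them to the model as context alongside the user's question.

DOCS = [
    "Visa routes employee queries to six LLMs behind its firewall.",
    "Vector databases index unstructured data for semantic search.",
    "HBF pairs NAND flash with HBM to expand GPU memory capacity.",
]

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    # Rank documents by naive keyword overlap with the query.
    q_terms = set(query.lower().split())
    scored = sorted(docs, key=lambda d: -len(q_terms & set(d.lower().split())))
    return scored[:k]

def generate(prompt: str) -> str:
    # Placeholder for a real LLM call (e.g., a hosted model API).
    return f"[model answer grounded in prompt of {len(prompt)} chars]"

def rag_answer(question: str) -> str:
    context = "\n".join(retrieve(question, DOCS))
    prompt = f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    return generate(prompt)

print(rag_answer("How do vector databases support semantic search?"))
```

In production, the retriever would be an embedding-based similarity query against a vector database, which is exactly the demand driving the database trends described above.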
Yahoo
23-07-2025
- Business
- Yahoo
Omdia: China Hyperscalers Commercialize AI Amid Export Restrictions but Modern GPUs Remain Limited
LONDON, July 23, 2025--(BUSINESS WIRE)--What are the biggest cloud providers in Asia doing to meet the rising demand for AI inference? Omdia's latest research offers an in-depth look at the evolving challenges of AI inference operations, the key trade-offs between throughput, latency, and support for diverse AI models, and the possible solutions. The report provides detailed coverage of companies such as Huawei, Baidu, Alibaba, ByteDance, Tencent, NAVER, and SK Telecom Enterprise. It examines which GPUs, AI accelerators, and AI-optimized CPUs these companies offer, their pricing, the stockpile of NVIDIA GPUs, their AI service portfolios, and the current status of their own AI models and custom chip projects.

Despite heavy stockpiling of NVIDIA H800 and H20 GPUs during 2024 and early 2025, prior to the imposition of US export controls, these high-performance chips are difficult to find in Chinese cloud services, suggesting they are primarily used for the hyperscalers' own model development projects. Similarly, there are relatively few options that use any of the Chinese AI chip projects; exceptions include Baidu's on-premises cloud products and some Huawei Cloud services, although they remain limited. Chinese hyperscale companies are well advanced in adopting best practices such as decoupled prefill and generation (a pattern sketched below) and publish seminal research in fundamental AI; however, the research papers often mention that the training runs are carried out using Western GPUs, with a few notable exceptions.

"The real triumph in Chinese semiconductors has been CPUs rather than accelerators," says Omdia Principal Analyst and author of the report, Alexander Harrowell. "Chinese Arm-based CPUs are clearly in production at scale and are usually optimized for parallel workloads in a way similar to Amazon Web Services' Graviton series. Products such as Alibaba's YiTian 710 offer an economically attractive solution for serving the current generation of small AI models such as Alibaba Qwen3 in the enterprise, where the user base is relatively small and workload diversity is high."

If modern GPUs are required, the strongest offering Omdia found was the GPU-as-a-service product SK Telecom is building in partnership with Lambda Labs. Omdia observed significant interest in moving Chinese workloads outside the great firewall in hopes of accessing modern GPUs and potentially additional training data. Among other important findings, nearly all companies now offer models-as-a-service platforms that enable fine-tuning and other customizations, making this one of the most common ways for enterprises to access AI capabilities. Chinese hyperscalers are especially interested in supporting AI applications at the edge. For example, ByteDance offers a pre-packaged solution to monitor restaurant kitchens and report whether chefs are wearing their hats.

ABOUT OMDIA

Omdia, part of Informa TechTarget, Inc. (Nasdaq: TTGT), is a technology research and advisory group. Our deep knowledge of tech markets, grounded in real conversations with industry leaders and hundreds of thousands of data points, makes our market intelligence our clients' strategic advantage. From R&D to ROI, we identify the greatest opportunities and move the industry forward.

Contact: Fasiha Khan
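For readers unfamiliar with the practice mentioned above, here is a toy sketch of decoupling prefill from generation: the prompt is processed once in a parallel prefill step that builds a KV cache, which is then handed to a separate decode step that emits tokens one at a time. The model and cache are stand-ins, not a real serving stack.

```python
# Toy illustration of disaggregated LLM serving: a prefill worker
# processes the whole prompt in one pass and produces a KV cache;
# a decode worker then consumes that cache to emit tokens one by one.
# Real systems ship the KV cache between machines; here it is a dict.

def prefill(prompt_tokens: list[int]) -> dict:
    # Stand-in for one parallel forward pass over the full prompt.
    return {"keys": list(prompt_tokens), "values": list(prompt_tokens)}

def decode(kv_cache: dict, max_new_tokens: int) -> list[int]:
    # Stand-in for autoregressive generation: each step reads the
    # cache, appends one token, and extends the cache.
    out = []
    for _ in range(max_new_tokens):
        next_token = (sum(kv_cache["keys"]) + len(out)) % 50000  # fake logits
        out.append(next_token)
        kv_cache["keys"].append(next_token)
        kv_cache["values"].append(next_token)
    return out

prompt = [101, 7592, 2088, 102]          # toy token IDs
cache = prefill(prompt)                   # compute-bound, parallel phase
tokens = decode(cache, max_new_tokens=4)  # memory-bound, serial phase
print(tokens)
```

Splitting the two phases lets operators schedule the compute-bound prefill and the memory-bandwidth-bound decode on different hardware pools, which is why the practice matters for inference economics.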
Yahoo
12-06-2025
- Business
- Yahoo
Multiverse Computing Raises $215M to Scale Ground-Breaking Technology that Compresses LLMs by up to 95%
Technology breakthrough attracts international investment. Quantum AI leader to turbocharge AI proliferation, reduce power concerns and bring the technology to the edge.

Photo: Multiverse Computing's Founding Team

SAN SEBASTIAN, Spain, June 12, 2025 (GLOBE NEWSWIRE) -- Multiverse Computing, the global leader in quantum-inspired AI model compression, has developed CompactifAI, a compression technology capable of reducing the size of LLMs (Large Language Models) by up to 95% while maintaining model performance. Having spent 2024 developing the technology and rolling it out to initial customers, the company today announces a €189 million ($215 million) investment round. The Series B will be led by Bullhound Capital with the support of world-class investors such as HP Tech Ventures, SETT, Forgepoint Capital International, CDP Venture Capital, Santander Climate VC, Quantonation, Toshiba and Capital Riesgo de Euskadi - Grupo SPRI. The company has attracted widespread support for this push from a range of international and strategic investors. The investment will accelerate widespread adoption to address the massive costs prohibiting the rollout of LLMs, revolutionizing the $106 billion AI inference market.

LLMs typically run on specialized, cloud-based infrastructure that drives up data center costs. Traditional compression techniques—quantization and pruning—aim to address these challenges, but the resulting models significantly underperform the original LLMs. With the development of CompactifAI, Multiverse discovered a new approach. CompactifAI models are highly compressed versions of leading open source LLMs that retain original accuracy, are 4x-12x faster, and yield a 50%-80% reduction in inference costs. These compressed, affordable, energy-efficient models can run on the cloud, on private data centers or—in the case of ultra-compressed LLMs—directly on devices such as PCs, phones, cars, drones and even Raspberry Pi.

'The prevailing wisdom is that shrinking LLMs comes at a cost. Multiverse is changing that,' said Enrique Lizaso Olmos, Founder and CEO of Multiverse Computing. 'What started as a breakthrough in model compression quickly proved transformative—unlocking new efficiencies in AI deployment and earning rapid adoption for its ability to radically reduce the hardware requirements for running AI models. With a unique syndicate of expert and strategic global investors on board and Bullhound Capital as lead investor, we can now further advance our laser-focused delivery of compressed AI models that offer outstanding performance with minimal infrastructure.'

CompactifAI was created using tensor networks, a quantum-inspired approach to simplifying neural networks. Tensor networks are a specialized field of study pioneered by Román Orús, Co-Founder and Chief Scientific Officer at Multiverse. 'For the first time in history, we are able to profile the inner workings of a neural network to eliminate billions of spurious correlations to truly optimize all sorts of AI models,' said Orús. Compressed versions of top Llama, DeepSeek and Mistral models are available now, with additional models coming soon.
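Tensor-network methods factor large weight matrices into networks of smaller tensors and discard weak correlations between them. CompactifAI's actual algorithm is proprietary; the sketch below shows only the simplest relative of the idea, a truncated SVD that keeps the strongest singular values of a weight matrix, and all figures in it are illustrative.

```python
import numpy as np

# Simplest relative of tensor-network compression: factor a weight
# matrix W (m x n) as A @ B with rank r, keeping only the strongest
# singular values. Parameter count drops from m*n to r*(m+n).

def low_rank_compress(W: np.ndarray, rank: int):
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :rank] * s[:rank]      # m x r
    B = Vt[:rank, :]                # r x n
    return A, B

m, n, r = 1024, 1024, 64
W = np.random.randn(m, n).astype(np.float32)  # random matrices compress
A, B = low_rank_compress(W, r)                # poorly; trained weights
                                              # carry far more structure
orig_params = m * n
new_params = r * (m + n)
err = np.linalg.norm(W - A @ B) / np.linalg.norm(W)
print(f"params: {orig_params} -> {new_params} "
      f"({100 * (1 - new_params / orig_params):.0f}% fewer)")
print(f"relative reconstruction error: {err:.3f}")
```

A true tensor-network factorization chains several such factors (for example, as a matrix product operator), which can reach far higher compression ratios on real models whose weights have exploitable structure.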
Per Roman, Co-founder & Managing Partner, Bullhound Capital, said: 'Multiverse's CompactifAI introduces material changes to AI processing that address the global need for greater efficiency in AI, and their ingenuity is accelerating European sovereignty. Román Orús has convinced us that he and his team of engineers are developing truly world-class solutions in this highly complex and compute-intensive field. Enrique Lizaso is the perfect CEO for rapidly expanding the business in a global race for AI dominance. I am also pleased to see that so many high-profile investors such as HP and Forgepoint decided to join the round. We welcome their participation.'

Tuan Tran, President of Technology and Innovation, HP Inc., commented: 'At HP, we are dedicated to leading the future of work by providing solutions that drive business growth and enhance professional fulfillment. Our investment in Multiverse Computing supports this ambition. By making AI applications more accessible at the edge, Multiverse's innovative approach has the potential to bring the AI benefits of enhanced performance, personalization, privacy and cost efficiency to life for companies of any size.'

Damien Henault, Managing Director, Forgepoint Capital International, said: 'The Multiverse team has solved a deeply complex problem with sweeping implications. The company is well-positioned to be a foundational layer of the AI infrastructure stack. Multiverse represents a quantum leap for the global deployment and application of AI models, enabling smarter, cheaper and greener AI. This is only just the beginning of a massive market opportunity.'

Multiverse Computing extends its sincere gratitude to its current investors for their continued trust and support, as well as to the European institutions whose backing has been instrumental in achieving this milestone. For more information about Multiverse Computing and CompactifAI, visit the company's website.

About Multiverse Computing

Multiverse Computing is the leader in quantum-inspired AI model compression. The company's deep expertise in quantum software and AI led to the development of CompactifAI, a revolutionary AI model compressor. CompactifAI compresses LLMs by up to 95% with only 2-3% precision loss. CompactifAI models reduce computing requirements and unleash new use cases for AI across industries. Multiverse Computing is headquartered in Donostia, Spain, with offices across Europe, the US, and Canada. The company won DigitalEurope's 2024 Future Unicorn award and was recognized by CB Insights as one of the Top 100 Most Promising AI Companies in 2025. With over 160 patents and 100 customers globally, including Iberdrola, Bosch, and the Bank of Canada, Multiverse Computing has raised c.$250M to date.

About Bullhound Capital

Bullhound Capital is the investment management arm of GP Bullhound, building with founders creating category-leading technology companies. Launched in 2008 with over €1 billion deployed, it has invested in global leaders like Spotify, Klarna, Revolut, Slack, Unity, ConnexAI and EcoVadis. Operating from 13 offices worldwide, its platform delivers hands-on, founder-focused support across strategy, growth, and execution. From quantum to entertainment, Bullhound Capital backs global leaders applying Artificial Intelligence to solve real-world problems.

About SETT

The Sociedad Española para la Transformación Tecnológica (SETT), a public business entity attached to the Ministry for Digital Transformation and Public Function, is dedicated to the financing and promotion of advanced and transformative digital technologies.
The operation is carried out through the Next Tech fund, whose objective is to encourage private investment and improve access to financing in strategic Spanish sectors such as disruptive technologies. The implementation of Next Tech, foreseen in the Recovery, Transformation and Resilience Plan, is among SETT's functions; SETT also manages two other financial instruments to boost the technological business ecosystem: PERTE Chip, dedicated to microelectronics and semiconductors, and the Spain Audiovisual Hub, which promotes the digitization of the audiovisual sector.

Contact Information: LaunchSquad for Multiverse Computing, multiverse@

A photo accompanying this announcement is available at