Latest news with #Aethir

Associated Press
15-05-2025
- Business
Polyhedra and Aethir Launch Joint Incubator to Accelerate AI Applications With Verifiable Infrastructure
The Program Will Provide Grants to Help Verifiable AI Startups Address Development Bottlenecks

SAN FRANCISCO, May 15, 2025 /PRNewswire/ -- Polyhedra, the zero-knowledge infrastructure company behind EXPchain, and Aethir, the leading provider of decentralized GPU cloud compute, today announced the launch of a joint incubator program designed to fast-track the development of verifiable AI applications. Combining scalable compute access with transparent infrastructure, the initiative aims to support early-stage teams building production-ready AI applications across enterprise, research, and open-source domains.

As AI development accelerates, AI startup activity has surged, with 20 new AI companies joining the global unicorn list since 2023. In 2024, global AI-related startup funding soared to $110 billion, a 62% year-over-year increase. Despite this influx of capital, many startups continue to face persistent bottlenecks: limited access to compute and a lack of verifiable infrastructure for model training and deployment.

Polyhedra and Aethir's joint incubator is designed to address both challenges head-on, offering a structured, end-to-end pathway from proof of concept to production. Projects begin with zero-knowledge toolkits and on-chain verification infrastructure from Polyhedra, then scale using Aethir's decentralized GPU cloud for inference, fine-tuning, and launch. Within this unified environment, startups can build, test, and deploy on the same infrastructure they'll operate long-term, eliminating common scaling hurdles while accelerating progress on high-integrity use cases where compliance, trust, and model integrity matter most.

'AI developers face real challenges today in accessing scalable compute and reliable infrastructure,' said Bart Meyer, chief ecosystem officer of Polyhedra. 'By joining forces with Aethir, we're offering a full-stack solution, from high-performance GPU compute to verifiable AI tooling, so teams can go from idea to enterprise-ready frictionlessly.'

Participants will receive non-dilutive grants from Polyhedra ranging from $10,000 to $100,000, alongside technical integration and developer tooling on EXPchain, a Layer 1 chain purpose-built for AI-native applications and verifiable compute. Additionally, through Aethir's $100 million Ecosystem Fund, teams will gain subsidized access to GPU compute credits valued at up to $100,000 per team, allowing developers to scale AI workloads in testnet or mainnet environments without centralized dependencies.

'Our mission has always been to decentralize and democratize access to compute,' said Mark Rydon, CSO of Aethir. 'Partnering with Polyhedra brings a crucial layer of verifiability and interoperability to the Aethir ecosystem, where together, we're enabling the future of provable AI.'

Applications are now open and can be submitted through each partner's website. Submissions will undergo a multi-step selection process, including an initial assessment, interviews, a joint grant proposal, and final approval. Evaluation will focus on technical feasibility, infrastructure requirements (particularly for compute and zero-knowledge), and alignment with ecosystem goals.

About Polyhedra
Polyhedra is building foundational infrastructure for trust and scalability in AI and blockchain systems, enabling secure, verifiable, and high-performance applications. Polyhedra is led by a world-class team of engineers, researchers, and business leaders from institutions including UC Berkeley, Stanford, and Tsinghua University, and its deep expertise in zero-knowledge proofs and distributed systems underpins the technical solutions that will help form the AI infrastructure of the future.

About Aethir
Aethir is the world's largest decentralized cloud GPU network, with over 425,000 enterprise-grade GPUs across 95 countries. The platform provides scalable, cost-efficient cloud GPU services for AI, machine learning, gaming, and rendering applications, connecting users to distributed resources through a Decentralized Physical Infrastructure Network (DePIN). By making high-performance computing more accessible and efficient, Aethir is transforming how AI and enterprise applications are powered.

Press & Media Inquiries:
Polyhedra: Gordon Evans, [email protected], 651-262-7862
Aethir: [email protected]

SOURCE Polyhedra


Forbes
24-03-2025
- Business
DeepSeek And ASI-1 Mini: A Closer Look At AI Computing Optimization
AI is advancing faster than ever; that much is clear. But what's often overlooked is the knock-on effect on computing power, which is struggling to keep up with demand. With models like DeepSeek and ASI-1 Mini introducing smarter architectures, it might seem like we're on the verge of a solution. Yet this opens up a bigger question: are we solving the compute crisis, or are we actually accelerating it?

The common denominator between DeepSeek and ASI-1 Mini is their use of Mixture of Experts (MoE), an architecture incorporating multiple expert sub-models. Rather than engaging the entire model for every request, MoE selectively activates specialised expert models, reducing computational strain while maintaining performance (a minimal routing sketch appears at the end of this article). This approach enhances compute efficiency, scalability, and specialisation, making AI systems more adaptable and resource-conscious. This breakthrough has highlighted the growing importance of MoE in AI development.

While both models employ MoE, ASI-1 Mini, built by Fetch.ai, takes this even further by incorporating Mixture of Agents (MoA) and Mixture of Models (MoM). MoA, for example, allows multiple autonomous AI agents to collaborate, optimising resource use and making AI more adaptable. ASI-1 Mini is not only an extension of the MoE idea but is also billed as the world's first Web3 large language model.

Optimised compute usage should, in theory, reduce overall computing demand. However, it's not that simple. Jevons Paradox suggests that efficiency gains often lead to greater adoption, ultimately driving demand even higher: if a model becomes five times cheaper to run but that lower cost attracts ten times as many workloads, total compute demand still doubles. DeepSeek's ability to deliver high-performance AI at lower costs is a prime example; by making AI more accessible, it fuels greater investment in AI projects, intensifying the need for infrastructure. As a result, the focus shifts toward ensuring solutions are not only cost-efficient but also scalable and adaptable enough to sustain AI's rapid growth.

Both LLMs and AI agents are intensifying this demand, requiring substantial computing power for training, inference, and real-time decision-making. LLMs, particularly the latest iterations with billions of parameters, are computationally expensive not just during training, where they process massive datasets, but also in inference, where generating responses at scale remains resource-intensive. AI agents, operating in dynamic environments, introduce continuous workloads, constantly analysing incoming data and making autonomous decisions in real time. This sustained computational demand places additional strain on infrastructure, requiring consistent access to high-performance compute resources.

As highlighted in Aethir's analysis, GPUs remain the foundation of AI infrastructure, yet their high costs, supply-chain constraints, and limited availability pose significant challenges for businesses scaling AI operations. This surge in AI adoption makes high-performance, cost-efficient, and scalable infrastructure an imperative, particularly as businesses seek flexible, transparent, and globally distributed compute solutions to maintain a competitive edge.

The market isn't just seeing incremental advancements. What we're experiencing is an infrastructural shift, in which companies must rethink how they build, deploy, and sustain AI systems. That's the new status quo. One of the biggest shifts is the broadening of AI applications: no longer limited to research labs or enterprise automation, AI is embedding itself into consumer products, financial systems, and real-time decision-making engines.
AI agents, once a niche concept, are now being deployed in autonomous trading, customer interactions, creative fields, and decentralised networks, all of which require constant, real-time compute power.

At the same time, we're witnessing an evolution in how AI infrastructure is funded and scaled. SingularityNET's $53M investment in AI infrastructure reflects a broader trend: businesses aren't just developing better models; they're strategising around compute access itself. The scarcity of GPUs, the need for decentralised compute solutions, and the rising costs of cloud AI infrastructure are becoming as critical as the model improvements themselves.

But how will companies sustain this level of growth? Even with MoE and its extensions reducing computational inefficiencies, demand isn't shrinking; it's accelerating. Companies that once focused solely on AI capabilities must now navigate compute economics just as carefully. Those who fail to plan for infrastructure growth risk being left behind.
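To ground the MoE discussion above, here is a minimal sketch of top-k expert routing in PyTorch. The expert count, layer sizes, and value of k are illustrative assumptions made for this sketch, not the actual configurations of DeepSeek or ASI-1 Mini, whose production routers are considerably more elaborate.

```python
# Minimal sketch of Mixture-of-Experts (MoE) top-k routing.
# All dimensions and the expert count are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoE(nn.Module):
    def __init__(self, d_model=64, d_hidden=128, n_experts=8, k=2):
        super().__init__()
        self.k = k
        # Each expert is a small independent feed-forward network.
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_hidden), nn.ReLU(),
                          nn.Linear(d_hidden, d_model))
            for _ in range(n_experts)
        )
        # The router scores each input against every expert.
        self.router = nn.Linear(d_model, n_experts)

    def forward(self, x):  # x: (batch, d_model)
        scores = self.router(x)                            # (batch, n_experts)
        topk_scores, topk_idx = scores.topk(self.k, dim=-1)
        # Renormalize over only the k selected experts.
        weights = F.softmax(topk_scores, dim=-1)           # (batch, k)
        out = torch.zeros_like(x)
        # Only the k selected experts run for each input; the rest stay
        # idle, which is where MoE's compute savings come from.
        for slot in range(self.k):
            for e, expert in enumerate(self.experts):
                mask = topk_idx[:, slot] == e
                if mask.any():
                    out[mask] += weights[mask, slot].unsqueeze(-1) * expert(x[mask])
        return out

x = torch.randn(4, 64)
print(TopKMoE()(x).shape)  # torch.Size([4, 64])
```

Production MoE layers replace the per-expert loop with batched dispatch and add a load-balancing loss so inputs spread evenly across experts, but the selective activation shown here is precisely the property that lets a large model execute only a fraction of its parameters per request.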