
Latest news with #Macrocosmos

Swarm Intelligence Is Reshaping How AI Gets Trained

Forbes

2 days ago

  • Business
  • Forbes


A decentralized AI training swarm could be more cost-effective, equitable and inclusive than current closed AI training approaches.

It's no secret that today's AI models are built behind closed doors, in secrecy and seclusion. Only a handful of Big Tech companies hold the keys to those doors: the massive server centers, petabytes of data, training pipelines and protocols. The artificial intelligence models they produce are locked away behind self-serving black boxes of seeming techno-wizardry that the public can query but never really understand, influence or change.

A ballsy decentralized AI start-up called Macrocosmos wants to change that. With the relaunch of Subnet 9 on the Bittensor network — think of a subnet as a mobile phone app and Bittensor as an app store — Macrocosmos is taking the first meaningful step toward a more democratic future for artificial intelligence. The clue to how they've achieved this feat is captured in the subnet's new acronym, IOTA: Incentivized Orchestrated Training Architecture. This construct allows anyone with a graphics processing unit, no matter how modest, to help train cutting-edge AI models. Based on a novel 'swarm' approach, a previously theoretical pre-training strategy for AI, Macrocosmos' breakthrough resolves key challenges around data and model compression, as explained in the white paper the company published on Friday. At its core is a vision that reimagines how intelligence is built and who gets to participate in that process. 'We are single-minded and obsessed in our pursuit of building competitive decentralized technologies that can compete with centralized labs,' wrote Macrocosmos CTO Steffen Cruz in a post on X.

Before we can understand swarm training, we need to understand the key differences between traditional AI and decentralized AI. At its simplest, decentralized AI means that the training of an AI model doesn't happen in one place or under the control of one company. Instead, it's spread out and distributed across homes, labs, campuses and servers anywhere in the world. The same way that Bitcoin decentralized money away from centralized banks, Bittensor and Macrocosmos aim to democratize intelligence itself.

This matters because AI is infiltrating more and more of our lives. It's deciding what news we see, what products we are offered, how we shop, how we interact with each other, how we work and even how we're hired. Concentrating that power within a few cabalistic computing systems risks not just privacy or fairness, but the future of innovation itself. By opening those locked doors to public participation and direct engagement, decentralized AI offers a new kind of alignment — one where users are also co-creators. 'Not only is this a new research endeavor for Macrocosmos and Bittensor, but it's something bigger and more personal to us,' Cruz added. 'We are scientists, researchers and developers.'

Swarm training, as deployed by Macrocosmos through IOTA, takes cues from the natural world. Just as a swarm of bees, a school of fish or a flock of birds can accomplish complex navigation without central control, this novel subnet enables thousands of independent machines to collaborate on training a single massive AI model. Instead of forcing each network participant to download and run the full model — a costly and impractical ask — Macrocosmos uses a technique called model parallelism.
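To make the idea of model parallelism concrete, here is a minimal, hypothetical Python sketch of the slice-by-slice flow described in this article: each participant owns one chunk of the network, forwards its activations to the next, and receives an error signal back so it can update only its own weights. The class names, dimensions and payout rule below are illustrative toys, not IOTA's actual code, and in the real network the hand-offs happen over the internet rather than inside one process.

```python
import numpy as np

rng = np.random.default_rng(0)

class MinerSlice:
    """One swarm participant ("miner") holding a slice of the model.
    Here a slice is a single linear layer; in IOTA it would be several
    neural-network layers, with activations exchanged over the network."""

    def __init__(self, in_dim, out_dim, lr=0.05):
        self.W = rng.normal(scale=0.1, size=(in_dim, out_dim))
        self.lr = lr

    def forward(self, x):
        self.x = x                     # cache incoming activations
        return x @ self.W              # pass the result to the next miner

    def backward(self, grad_out):
        grad_W = self.x.T @ grad_out   # gradient for this miner's weights
        grad_in = grad_out @ self.W.T  # error signal for the previous miner
        self.W -= self.lr * grad_W     # local weight update for this slice only
        return grad_in

# A tiny "swarm": three miners, each owning one slice of a three-layer model.
swarm = [MinerSlice(8, 16), MinerSlice(16, 16), MinerSlice(16, 1)]
payouts = [0.0] * len(swarm)           # naive stand-in for IOTA's incentive grading

x = rng.normal(size=(32, 8))           # a toy batch of inputs
y = rng.normal(size=(32, 1))           # toy regression targets
prev_loss = None

for step in range(200):
    # Forward pass: activations hop from miner to miner instead of
    # living inside a single data center.
    h = x
    for miner in swarm:
        h = miner.forward(h)

    loss = float(np.mean((h - y) ** 2))   # how far off the assembled model was

    # Reverse pass: the error signal flows back through the chain,
    # and each miner updates only its own slice of the weights.
    grad = 2 * (h - y) / len(y)
    for miner in reversed(swarm):
        grad = miner.backward(grad)

    # Toy payout rule: reward everyone when the shared model improves.
    if prev_loss is not None:
        improvement = max(0.0, prev_loss - loss)
        payouts = [p + improvement / len(swarm) for p in payouts]
    prev_loss = loss

print(f"final loss: {loss:.4f}, payouts: {[round(p, 4) for p in payouts]}")
```

IOTA's actual orchestration and incentive design are far more sophisticated than this toy payout rule; the sketch only shows how a model can be trained in slices that never all live on a single machine.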
Each subnet member — also called a miner, since their actions 'mine' actual monetary incentives that benefit the entire network — trains just one slice of the model, typically a few layers of the neural network. As data flows through those layers, each miner processes its portion and passes the output forward. Then a lightning-fast reverse pass grades how far off the model was and adjusts miner payouts accordingly.

This approach isn't just more efficient than centralized methods — it's more inclusive. Rather than requiring top-tier hardware, the architecture allows both low- and high-compute participants to contribute meaningfully. This breaks down the barriers to entry that have long kept open-source communities at the margins of AI model training.

To understand the difference between how traditional AI models are trained and what Macrocosmos is doing, a side-by-side comparison of centralized and decentralized AI training is useful. In centralized training, one model is split into layers that are tightly linked across GPUs within a single data center. Everything is optimized for high-speed local connections, but this setup is expensive, exclusive and closed. In contrast, decentralized swarm training distributes different layers of the model across a global network of contributors, or miners. Each of these individuals handles a piece of the workload and communicates their results to others. The swarm system regularly syncs all the parts into a single, shared model. Instead of requiring giant compute clusters, it leverages a far-flung spectrum of connected devices, ranging from a personal desktop GPU to larger industrial setups. The outcome? Lower costs, more transparency and an AI model built by the many, not the few.

However, training models this way has its challenges. Internet bandwidth is a lot slower than the fiber optics inside a data center. And decentralized participants can drop out, try to cheat the incentive system or go offline without warning. While some of those issues are beyond Macrocosmos' control, the company has developed an elegant solution to problems tied to miner incentives and rewards, and the design of its new IOTA network is built to overcome three big challenges.

In this video clip, the firm's co-founders Cruz and Will Squires discuss why decentralized training matters and how it can open a new era for AI.

This is more than a technical upgrade — it's a seismic philosophical shift. For years, decentralized AI projects have relied on centralized training behind the scenes. Macrocosmos is finally changing that. 'The time has come for us to move forward as a community and tackle new challenges in model training. This is an imperative for Bittensor. The competition are at our heels,' Cruz added. 'We beat nation states, we tirelessly benchmarked our progress and we shared our findings in our white paper. It was a fantastic experiment, and we pushed it far beyond its original design.'

This effort to distribute AI's compute and ownership to all comers through swarm training enables a future where AI isn't something reserved for elite power brokers while inferential dregs are grudgingly dripped to the masses. It's a collective thing we build together. Macrocosmos is taking decentralized training out of Big Tech's locked, walled garden and into the wild.
If they're successful, the next breakthrough frontier AI model might not come from OpenAI, Google or Meta — but rather from a swarm of us.

This Decentralized AI Could Revolutionize Drug Development

Forbes

14-05-2025

  • Science
  • Forbes


One of the most promising advancements in drug discovery isn't coming from big pharma — it's emerging from the convergence of decentralized AI and high-fidelity molecular simulations. That basically means creating faux chemical reactions on a computer while precisely measuring the results at the level of individual atoms.

In April, Rowan Labs released Egret-1, a suite of machine-learned neural network potentials designed to simulate organic chemistry at atomic precision. In plain terms, this model offers 'the level of accuracy from national supercomputers at a thousand to a million times the speed,' Rowan Labs co-founder Ari Wagen said on Zoom. And they've open-sourced the entire package.

But the real acceleration comes from Rowan's partnership with Macrocosmos, which operates subnet 25 of the decentralized AI protocol Bittensor. It's an unlikely yet potent collaboration — Rowan's high-accuracy synthetic data generation, now powered by a decentralized compute layer, could drastically reduce the cost and time needed to discover new therapeutic compounds and treatments.

At the heart of Rowan's work is the idea of training AI neural networks not on scraped web data, but on physics in action — specifically, quantum mechanics. 'We build synthetic datasets by running quantum mechanics equations,' Wagen explained. 'We're training neural networks to recreate the outputs of those equations. It's like Unreal Engine [a leading 3D modeling app], but for simulating the atomic-level real world.'

This isn't theory. It's application. Rowan's models can already predict critical pharmacological properties — like how tightly a small molecule binds to a protein. That matters when trying to determine whether a potential drug compound will actually work. 'Instead of running experiments, you can run simulations in the computer,' Wagen said. 'You save so much time, so much money and you get better results.'

To generate the training data for these models, Rowan used conventional quantum mechanical simulations. But to go further — to make the models more generalizable and robust — they need more data. That's where Macrocosmos comes in. 'We've spent the past year trying to incentivize better molecular dynamics,' said Macrocosmos founding engineer Brian McCrindle. 'The vision is to let Rowan spin up synthetic data generation across our decentralized compute layer — at fractions of the cost of AWS or centralized infrastructure.' The advantage isn't just cost — it's scale, speed and resilience. 'If we can generate the next training dataset in a month instead of six, the next version of Egret will come out twice as fast,' McCrindle added.

The stakes are enormous. With the right volume and variety of high-quality data, Rowan hopes to build 'a model of unprecedented scale that can simulate chemistry and biology at the atomic level,' Wagen said. That's not hyperbole — it's a strategy to compress the drug discovery timeline by years and open the door to faster cures for rare diseases and more effective preclinical toxicity testing.

And it doesn't stop at human health. Rowan is already working with researchers tackling carbon capture, atomic-level manufacturing and even oil spill cleanup using this technology. 'We can predict how fast materials break down, or optimize catalysts to degrade pollutants,' said Rowan co-founder Jonathon Vandezande, a materials scientist by training.
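To make that workflow concrete, here is a minimal, hypothetical Python sketch of the approach Wagen describes: sample molecular geometries, label them with an expensive physics calculation, then train a fast neural network surrogate to reproduce those labels. The toy_quantum_energy function below is a stand-in (a simple Lennard-Jones curve), not Rowan's actual quantum chemistry, and the tiny scikit-learn model is only illustrative of the technique, not Egret-1 itself.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

def toy_quantum_energy(distance):
    """Stand-in for an expensive quantum-mechanical calculation.
    Here: a Lennard-Jones energy curve for a diatomic molecule; a real
    pipeline would call a quantum-chemistry code to label each geometry."""
    return 4.0 * ((1.0 / distance) ** 12 - (1.0 / distance) ** 6)

# 1. Generate a synthetic dataset: sample geometries, label them with "QM".
distances = rng.uniform(0.9, 3.0, size=2000)
energies = toy_quantum_energy(distances)

# 2. Train a neural network potential to reproduce those labels.
model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
model.fit(distances.reshape(-1, 1), energies)

# 3. The trained surrogate now predicts energies far faster than rerunning
#    the underlying physics calculation for every new geometry.
test = np.array([[1.0], [1.5], [2.5]])
print("NN potential:", model.predict(test))
print("reference   :", toy_quantum_energy(test.ravel()))
```

In a production pipeline, the expensive labeling step is exactly the part that Macrocosmos' decentralized compute layer is meant to parallelize, so larger and more varied datasets can be generated faster and more cheaply.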
Of course, synthetic data raises the question of reliability. Wagen was clear: 'The synthetic data we generate is more accurate than what you'd get from running a physical experiment. Real instruments have worse error bars than our quantum mechanical approximations.' And unlike earlier failures such as IBM Watson Health, Rowan posts all of its model benchmarks publicly. 'You can see exactly where they perform well — and where they don't,' he said.

So what's next? Within a year, both teams aim to release a peer-reviewed paper demonstrating how decentralized compute generated the next generation of chemical simulation models. 'This partnership lets us take what would have been a six-figure cloud bill and decentralize it,' McCrindle noted. 'That's the promise of decentralized science.'

It's also a compelling proof point for Bittensor, which now supports over 100 subnets tackling everything from international soccer match predictions to AI deepfake detection. But for McCrindle, the vision is simpler: 'Can we incentivize any kind of science? That's always been the question.' With Egret-1 and Macrocosmos' decentralized AI platform, the answer looks increasingly like a yes.
