Subduction can spread from one oceanic plate to another like a contagion
Subduction zones, where one tectonic plate dives underneath another, drive the world's most devastating earthquakes and tsunamis. How do these danger zones come to be? A study in Geology presents evidence that subduction can spread like a contagion, jumping from one oceanic plate to another — a hypothesis previously difficult to prove.
This result "is not just speculation," says University of Lisbon geologist João Duarte, who was not involved in the research. "This study builds an argument based on the geological record."
Because subduction drags crust deep into the earth, its beginnings are hard to examine. The new study provides a rare ancient example of potential subduction "infection." Its authors say they've discovered evidence that neighboring collisions triggered East Asia's "Ring of Fire," a colossal subduction system currently fueling earthquakes and volcanoes from Alaska to the southern Indian Ocean.
Nearly 300 million years ago, China was a scattering of islands separated by the ancient Tethys and Asian oceans. Established subduction zones consumed these oceans, welding the landmasses into a new continent and raising mountains from Turkey to China. By 260 million years ago, this subduction seems to have spread and begun pulling down the neighboring Pacific plate.
"The dying act of those closing oceans may have been to infect the Pacific plate and start it subducting westward under the Asian continent," says study lead author Mark Allen, a geologist at Durham University in England. "In one form or another, it's been diving down ever since."
The smoking gun in this case is the "Dupal anomaly," a geochemical fingerprint characteristic of the mantle beneath the ancient Tethys Ocean and what is now the Indian Ocean. When the study authors unexpectedly found this signature in volcanic rocks from the western Pacific, they surmised that material from the Tethys had spread eastward across a plate boundary from one subduction zone to another, triggering the neighboring plate's descent. "It's like seeing someone's fingerprint at a crime scene," Allen says.
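For readers curious how such a fingerprint is read in practice, a common convention in the field (an illustration of standard practice, not the study's specific workflow) expresses a sample's Dupal character as the deviation of its lead-isotope ratios from Hart's 1984 Northern Hemisphere Reference Line. A minimal sketch, with invented sample values:

```python
# A minimal sketch assuming Hart's (1984) Northern Hemisphere Reference
# Line (NHRL) for lead isotopes; the sample numbers below are invented.

def delta_8_4(pb206_204: float, pb208_204: float) -> float:
    """Deviation of 208Pb/204Pb above the NHRL, scaled by 100 (Hart 1984)."""
    nhrl_208_204 = 1.209 * pb206_204 + 15.627  # reference-line prediction
    return (pb208_204 - nhrl_208_204) * 100.0

# Hypothetical western Pacific basalt measurement:
print(f"Delta 8/4 = {delta_8_4(pb206_204=17.8, pb208_204=38.6):.0f}")
# Values well above ~60 are conventionally read as Dupal-like.
```

A strongly positive deviation is the isotopic equivalent of the out-of-place fingerprint Allen describes: lead that looks Indian Ocean-like turning up in Pacific rocks.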
But the mechanism of spread remains mysterious. The researchers suspect that transform faults — boundaries where plates slide past one another, like the San Andreas Fault — may act as weak spots where slight changes in collision angle or speed can destabilize dense oceanic crust, causing it to sink. Duarte compares the scenario to aluminum foil in water. "The foil floats," he says, "but the slightest tap will cause it to sink."
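The foil analogy can be made semi-quantitative. As a rough back-of-envelope estimate, using assumed textbook values rather than numbers from the study, old oceanic lithosphere is only slightly denser than the asthenosphere beneath it, yet that small excess, integrated over the plate's thickness, adds up to a substantial sinking stress:

```python
# Back-of-envelope sinking (negative buoyancy) stress on old oceanic
# lithosphere. All values are assumed textbook figures, not from the study.

g = 9.8            # gravitational acceleration, m/s^2
rho_lith = 3330.0  # assumed density of cold oceanic lithosphere, kg/m^3
rho_asth = 3300.0  # assumed density of warmer asthenosphere, kg/m^3
thickness = 100e3  # assumed plate thickness, m (~100 km)

delta_rho = rho_lith - rho_asth        # small excess density, ~30 kg/m^3
stress = delta_rho * g * thickness     # Pa
print(f"Sinking stress ~ {stress / 1e6:.0f} MPa")  # ~29 MPa
```

Tens of megapascals is modest next to the strength of an intact plate, which is why a pre-existing weakness such as a transform fault matters: like the floating foil, the plate needs only a tap where it is already weak.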
If subduction spreads this way, could the Atlantic Ocean's relatively quiet plate margins be next? The massive 1755 Lisbon earthquake hints at an early subduction invasion there. Duarte suggests parts of Iberia and the Caribbean are undergoing the initial stages of this process: "In another 100 million years a new Atlantic 'Ring of Fire' may form — just as it once did in the Pacific."
This article was first published at Scientific American. © ScientificAmerican.com. All rights reserved.