India is splitting in two! Geologists sound alarm over hidden tectonic upheaval

Time of India | 23-04-2025
In a groundbreaking discovery that could rewrite our understanding of Earth's inner dynamics, geologists have revealed that the Indian Plate, the massive slab of Earth's crust carrying the subcontinent, is splitting in two.
A part of it is peeling away and sinking deep into the Earth's mantle, a process known as delamination. This hidden and previously undetected geological activity could have far-reaching consequences, not just for India but for the entire planet. It may alter earthquake patterns, reshape landscapes, and challenge long-standing scientific theories about plate tectonics. The findings have stunned experts and sparked urgent calls for deeper research into Earth's shifting crust.
How is this shift happening?
The Indian Plate has long been a key player in one of the world's most dramatic geological collisions, the crash into the Eurasian Plate that formed the Himalayas. But now, scientists have found something even more astonishing beneath its surface.
Using advanced seismic analysis and helium isotope tracking in the springs of Tibet, researchers have uncovered evidence of delamination, a rare process where the dense lower part of a tectonic plate peels away and sinks into the Earth's mantle.
This means the Indian Plate is effectively tearing apart, creating a massive vertical rift deep underground.
'We didn't know continents could behave this way,' said Douwe van Hinsbergen, a geodynamicist at Utrecht University. 'This changes some of our most fundamental assumptions about solid earth science.'
Earthquake hotspots may get hotter
The discovery has serious implications for earthquake risk in the Himalayan region, already one of the most seismically active zones on Earth.
According to Stanford geophysicist Simon Klemperer, the tearing and sinking of the plate could create new stress points in the Earth's crust, triggering more frequent and potentially more powerful quakes.
One major concern is the Cona-Sangri Rift in the Tibetan Plateau, a deep fracture that could be directly tied to the ongoing delamination. If this connection is confirmed, regions along this rift could face heightened seismic danger in the years ahead.
A discovery that shifts the scientific ground
The study, published by the American Geophysical Union, not only reveals the Indian Plate's splitting but also suggests that other continental plates might be undergoing similar processes. Scientists are now scanning regions worldwide for signs of comparable plate behaviour, a move that could revolutionise how we understand everything from mountain formation to plate tectonics itself.
'This could be a missing piece in our puzzle of how continents evolve and interact,' said Fabio Capitanio, a geodynamicist at Monash University, who cautions that the findings are still early-stage.
'It's just a snapshot, and much more data is needed to understand the full picture.'
What will be the effect of this shift on Earth sciences?
If confirmed, this discovery could explain long-standing mysteries about how and why certain mountain ranges form, and even help scientists make better predictions about future earthquakes and geological hazards. More importantly, it opens a new frontier in Earth science, one that challenges old models and demands a fresh look at how our planet works.
For now, scientists continue to monitor seismic waves and chemical signatures in the region, hoping to unravel the evolving story of a continent in motion and the silent, subterranean split that could shake the world.

Related Articles

AI hallucinations: Building trust in age of confident lies

Hans India | 20 hours ago

Artificial intelligence has reached levels of sophistication that were once unthinkable - excelling at problem-solving, language generation, and even creative design. Yet beneath these advances lies a challenge that refuses to fade: AI hallucinations. These aren't minor typos or harmless quirks - they're an inherent side-effect of how today's large language models work, and they demand serious attention as AI moves deeper into high-stakes domains. The question isn't whether AI will continue to hallucinate - evidence suggests it's intrinsic to current architectures. The real question is how quickly we can build the oversight and validation systems needed to enjoy AI's benefits while minimizing the risks of confident, convincing falsehoods.

As artificial intelligence transforms industries from healthcare to finance, understanding and addressing the hallucination phenomenon becomes critical for maintaining public trust and ensuring beneficial outcomes. This represents more than a technical challenge - it's a fundamental question about how we integrate powerful but imperfect tools into systems that affect real lives. The opportunity before us is clear: proactive governance and robust validation frameworks can unlock AI's transformative potential while protecting against its inherent limitations.

Understanding the Hallucination

AI hallucinations occur when a model produces information that is false or fabricated but delivered with apparent confidence. Unlike human hallucinations, which involve perception, AI's fabrications are statistical guesses - generated when the model fills gaps in knowledge with patterns from its training data. The scope of this challenge is more extensive than many realize. In legal testing, Stanford researchers found that some AI legal tools hallucinated between 58 per cent and 82 per cent of the time when answering case-law queries - often inventing plausible-sounding citations.
This isn't an occasional error; it's systematic unreliability in one of society's most precision-dependent fields. Perhaps most concerning is how confidently AI systems present false information. Research on 'high-certainty hallucinations' demonstrates that models can express extreme confidence while providing incorrect information, with studies showing AI systems maintain high certainty scores even when generating fabricated content. This creates a particularly dangerous dynamic where AI systems deliver false information with the same confidence level as accurate responses, making it difficult for users to distinguish between reliable and unreliable outputs.

Why AI Creates False Realities

The causes of AI hallucinations are embedded deep within the technology itself, making them challenging to eliminate through simple fixes or updates. Understanding these root causes is essential for developing effective mitigation strategies. Among the root causes, 'data gaps and bias' represent the most fundamental challenge. If the training data is incomplete or skewed, the model will fill in with similar patterns, which can introduce fabricated details. AI systems learn from vast datasets that inevitably contain inaccuracies, contradictions, and biases. When faced with queries that require information not well-represented in their training data, these systems extrapolate from similar but ultimately irrelevant examples, creating convincing but false responses. While architectural limitations create additional vulnerabilities, domain mismatch amplifies problems when general-purpose models encounter specialized contexts. Further, reasoning complexity also creates an unexpected paradox in AI development.

Real-world consequences across critical sectors

The risks of AI hallucinations extend far beyond academic exercises, creating tangible consequences across sectors that form the backbone of modern society.
Legal systems face unprecedented challenges as AI-generated fake legal citations appear in filings, sometimes going unnoticed until late in proceedings. Healthcare applications present life-and-death implications when AI systems hallucinate medical information. Hallucinations in medical contexts can be dangerous, leading to incorrect diagnoses, inappropriate treatment recommendations, or false assurances about drug interactions. However, research into frameworks like CHECK, which grounds AI in clinical databases, offers hope - baseline hallucinations dropped to 0.3 per cent with structured retrieval systems. Further, business operations face direct financial and reputational consequences from AI hallucinations.

How Industry Is Responding

While exact spending figures remain proprietary, hallucination reduction is consistently cited among the top priorities for major AI labs. The industry's response reflects both the urgency of the challenge and the complexity of potential solutions. Retrieval-Augmented Generation (RAG) has emerged as one of the most promising approaches to check hallucinations, while specialized domain datasets represent another critical intervention. Using vetted, structured, and diverse data to minimize bias and fill gaps helps create more reliable AI systems. Medical AI trained on carefully curated clinical data shows markedly lower hallucination rates than general-purpose systems, suggesting that domain-specific approaches can achieve higher reliability standards. Human-in-the-loop validation and reasoning verification layers are some of the other approaches.

Paradox of Progress

One of the most counterintuitive findings in recent AI research is that as reasoning ability improves, hallucination risk can also rise. Multi-step reasoning chains introduce more chances for errors to propagate, which explains why some cutting-edge reasoning models have higher hallucination rates than their predecessors.
This paradox highlights a fundamental tension in AI development: capability and reliability don't always improve in sync. Advanced models that can solve complex mathematical problems or engage in sophisticated analysis may simultaneously be more prone to fabricating information in areas outside their expertise. This disconnect between capability and reliability makes robust safeguards essential, particularly as AI systems take on increasingly complex tasks.

Building Trust Through Transparency

Total elimination of hallucinations may be impossible given current AI architectures, so the focus must shift toward transparency and appropriate risk management. This approach acknowledges AI's limitations while maximizing its benefits through careful deployment and clear communication. The risk of hallucinations can be minimized through transparency initiatives, confidence scoring systems and domain-appropriate deployment.

Responsible Deployment and Global Cooperation

AI hallucinations represent a global challenge similar to cybersecurity or internet governance, demanding cross-border cooperation and coordinated responses. The interconnected nature of AI development and deployment means that solutions developed in one country can benefit users worldwide, while failures can have international implications. International collaboration can accelerate progress through shared datasets of known hallucinations, international evaluation standards, and collaborative research and development. Different countries' experiences with AI regulation and deployment provide valuable learning opportunities for developing effective approaches to hallucination mitigation. Effective changes in the AI education system and robust policy frameworks also hold the key.

The Call to Action

If we act now by investing in transparency, validation, and collaborative governance, we can ensure AI becomes a trustworthy partner rather than an unreliable narrator.
The aim is not perfection, but partnership: pairing AI's scale and speed with human judgment to unlock potential while protecting against its flaws. This represents a critical moment in AI development. The choices made today about how to address hallucinations will shape public trust in AI for decades to come. By acknowledging these challenges honestly and implementing robust safeguards, we can build a future where AI enhances human capabilities without compromising truth and accuracy.

The opportunity before us extends beyond solving a technical problem to creating a new model for human-machine collaboration. When AI systems acknowledge their limitations and humans provide appropriate oversight, the combination can achieve results that neither could accomplish alone. The choice is ours - proactive safeguards today, or costly corrections tomorrow. The time for action is now, while we still have the opportunity to shape AI's trajectory toward greater reliability and trustworthiness. That's the foundation for artificial intelligence that truly serves humanity's highest aspirations while respecting our fundamental need for truth and accuracy in an increasingly complex world.

(Krishna Kumar is a Technology Explorer & Strategist based in Austin, Texas, USA. Rakshitha Reddy is an AI Engineer based in Atlanta, Georgia, USA)
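The Retrieval-Augmented Generation approach discussed above can be sketched in a few lines. This is an illustrative toy, not any vendor's implementation: the bag-of-words retriever and the corpus strings are invented for demonstration, and a real system would swap in learned embeddings and send the assembled prompt to an LLM.

```python
# Minimal RAG sketch: retrieve the most relevant vetted documents for a
# query, then build a prompt that grounds the model in that context.
from collections import Counter
import math

def _vec(text: str) -> Counter:
    # Toy bag-of-words vector; real systems use learned embeddings.
    return Counter(text.lower().split())

def _cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    # Rank documents by similarity to the query and keep the top k.
    qv = _vec(query)
    return sorted(corpus, key=lambda d: _cosine(qv, _vec(d)), reverse=True)[:k]

def build_prompt(query: str, corpus: list[str]) -> str:
    # Ground the generation step: the model may only use retrieved context.
    context = "\n".join(f"- {d}" for d in retrieve(query, corpus))
    return (
        "Answer using ONLY the context below; say you don't know otherwise.\n"
        f"Context:\n{context}\nQuestion: {query}"
    )
```

The grounding happens in `build_prompt`: because the model is instructed to answer only from retrieved, vetted text, gaps in its training data are less likely to be filled with fabrications.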

Code Green

New Indian Express | a day ago

It was burning bright. For three months, a four-year-old tiger roamed across 12 villages in Lucknow's Rehmankheda area, killing 25 animals and keeping residents on edge in the forest of the night. Daily life slowed as people stayed indoors, wary of the elusive predator that was a ghost with stripes.

To track it down, forest officials took a blended approach—mixing traditional tracking methods with modern technology. They installed AI-powered thermal cameras at five key points and deployed three thermal drones to scan the forest canopy. On the ground, trained elephants Diana and Sulochana moved through dense undergrowth where vehicles couldn't go. Meanwhile, a wildlife expert in Bengaluru monitored live camera feeds, studying the tiger's patterns to anticipate its movements.

In March came the breakthrough. AI cameras captured the tiger returning to a fresh kill. A ranger team was dispatched. A tranquiliser dart was fired, but the tiger fled, covering 500 metres before disappearing into thick foliage. Drones followed it from above, helping rangers close in for a second shot. Within 15 minutes, the animal was safely sedated. The 230 kg beast was then caged and transported to the Bakshi Ka Talab range office. The entire operation ended without a single human injury, thanks to the combined effort of AI surveillance, aerial tracking, and coordinated fieldwork.

In the past, conserving wildlife in India often meant navigating dense jungles with binoculars, spending months waiting for elusive animals to appear, or diving into the sea with nothing more than a net. Today, conservationists are adding something new to their toolkit: algorithms, thermal cameras, drones, and even genetic samplers. From the cold, high-altitude deserts of Ladakh to the lush mangroves of the Sundarbans, across coral reefs, tiger corridors, and railway tracks, a quiet revolution is unfolding. Technology is changing not only how we protect wildlife, but how we understand it.
In Ladakh, where the air is thin and snow leopards are more myth than mammal to most, a team of researchers set out to count the uncountable. 'Tough terrain and a lack of transport facilities were major challenges,' recalls Pankaj Raina from the Department of Wildlife Protection, Leh. 'We carried rations and equipment on ponies and set up temporary camps at subzero temperatures. Some places can only be accessed in winter, when the streams freeze. So, we'd place cameras one winter and return the next to collect them.' Over two years, they trekked more than 6,000 km and installed 956 camera traps across India's largest snow leopard habitat.

But their real challenge began only after they returned with nearly half a million images. No human team could sort through that volume of footage manually. So they turned to AI. A system called CaTRAT, trained to recognise Himalayan wildlife, scanned each frame to identify species. But something more precise was required. A second programme was deployed, this one trained to analyse forehead patterns, which are more reliable. 'Only the clearest image from each sequence was used,' explains Raina. 'These were digitised and processed through AI software that scored pattern similarities, creating a photographic library of each individual snow leopard.' The study, published in PLOS One earlier this year, revealed a hopeful truth: snow leopards in Ladakh are thriving. And for the first time, India now has a national photo library of snow leopards—a visual archive that will enable researchers to monitor individual animals.

Far to the south, in the forested corridor between Walayar and Madukkarai in Tamil Nadu, a different crisis was unfolding. Since 2008, 11 elephants had died in train collisions along a single seven-km stretch of track. In 2024, the Coimbatore Forest Division responded by installing an AI-powered thermal surveillance system.
The setup involved cameras that detect heat signatures in real time, capable of spotting large mammals even in pitch darkness or heavy rain. The moment an elephant is detected near the tracks, the system sends instant alerts to train operators and forest teams. In its very first year, the system generated over 5,000 alerts, enabled 2,500 safe elephant crossings—and recorded zero elephant deaths.

Technology is also transforming how humans coexist with big cats. In Maharashtra's Tadoba-Andhari Tiger Reserve, AI-enabled cameras were installed on the edges of 13 villages starting in 2023. These motion-sensitive devices don't just record tiger activity—they analyse it, sending real-time alerts to villagers when tigers are nearby. The system has worked so well that it caught the attention of Prime Minister Modi, who mentioned the effort during the 110th episode of Mann Ki Baat.

Study calls for urgent conservation of Doon Valley rivers to check flood risks

Time of India | a day ago

Dehradun: The Suswa watershed in Doon Valley, which covers 310.9 sq km and forms part of the Song basin, requires urgent conservation to check soil erosion and promote sustainable land use, a new study published in the peer-reviewed journal Water has revealed. The Asan watershed, spanning 701.1 sq km with a westward flow, and the Song watershed, covering 1,040.5 sq km with an eastward flow, are the two key watersheds in the valley.

The study, titled 'Watershed Prioritisation with Respect to Flood Susceptibility in the Indian Himalayan Region (IHR) Using Geospatial Techniques for Sustainable Water Resource Management', compared the valley's watersheds and found sharp differences in drainage patterns and erosion risks. It flagged the Suswa watershed, which runs through Dehradun's urban centre, including IT Park, Raipur, Kanwali Road and Dudhli, as the most at risk. The study was carried out by a team of researchers, scientists and professors from the Wildlife Institute of India (WII) and Amity University.

They used a "Compound Factor Value" (CFV) method to assess watershed vulnerability. This approach combines slope, drainage density and other terrain factors into a single index. A lower CFV signals higher risk, meaning greater priority for conservation. "Among the selected watersheds, CFVs ranged from 1.75 to 2.17, and Suswa ranked highest priority (1.75) due to its high erosion susceptibility, and the Song watershed lowest priority (2.17)," the study stated.

This monsoon, four people lost their lives in just two weeks in rain-related incidents in Dehradun. The Suswa river, passing through the city, is under additional stress from sewage discharge, solid waste dumping, and encroachments, said the researchers.
They called for conservation measures including "vegetative buffers, improved drainage management, and stream restoration initiatives" to reduce risk. The Song watershed is also vulnerable to flash floods because of steep slopes and high stream frequency, and the researchers recommended interventions such as afforestation, check dams and slope stabilisation.

The Asan watershed, home to Uttarakhand's only Ramsar-designated wetland, was ranked medium priority. Though more stable, the researchers warned "it could become more susceptible if left unmanaged due to development pressures and climate change." They stressed the need for "wetland protection, demarcation of buffer zones, and upstream land use control" to safeguard water inflow and quality.

The research, based on high-resolution satellite data, revealed clear differences in drainage and erosion risk across the valley. "Its most significant contribution is the combination of geospatial data and morphometric parameters to develop a tiered watershed management framework for a data-scarce Himalayan region," said the authors.

Lead author Ashish Mani, senior researcher at WII, said, "Future efforts should focus on afforestation, soil conservation in high-risk areas, sustainable land-use planning, flood mitigation, community engagement, and long-term monitoring using remote sensing and GIS. These steps will ensure effective watershed management, minimise environmental degradation, and enhance resilience against erosion and flooding in the Himalayan region." He added that the findings support the United Nations' Sustainable Development Goals (SDGs) on clean water, climate action, and life on land.
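A common way to compute a compound factor value like the one described above is to rank each watershed on every morphometric parameter and average the ranks, so a lower value means higher conservation priority. The sketch below assumes that rank-averaging formulation; the parameter names and values are invented for illustration and are not taken from the study.

```python
# Illustrative Compound Factor Value (CFV) ranking: average each
# watershed's per-parameter risk ranks into a single priority index.
def compound_factor_values(watersheds: dict[str, dict[str, float]],
                           higher_is_riskier: set[str]) -> dict[str, float]:
    names = list(watersheds)
    params = list(next(iter(watersheds.values())))
    ranks: dict[str, list[int]] = {n: [] for n in names}
    for p in params:
        # Rank 1 goes to the riskiest watershed for this parameter.
        order = sorted(names, key=lambda n: watersheds[n][p],
                       reverse=(p in higher_is_riskier))
        for r, n in enumerate(order, start=1):
            ranks[n].append(r)
    # The CFV is the mean rank; lower CFV means higher priority.
    return {n: sum(rs) / len(rs) for n, rs in ranks.items()}
```

With this formulation, a watershed that is riskiest on most parameters accumulates low ranks and therefore a low CFV, matching the study's convention that Suswa's 1.75 outranks Song's 2.17 in priority.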
