Code Green
It was burning bright. For three months, a four-year-old tiger roamed across 12 villages in Lucknow's Rehmankheda area, killing 25 animals and keeping residents on edge in the forest of the night. Daily life slowed as people stayed indoors, wary of the elusive predator that was a ghost with stripes. To track it down, forest officials took a blended approach—mixing traditional tracking methods with modern technology. They installed AI-powered thermal cameras at five key points and deployed three thermal drones to scan the forest canopy. On the ground, trained elephants Diana and Sulochana moved through dense undergrowth where vehicles couldn't go. Meanwhile, a wildlife expert in Bengaluru monitored live camera feeds, studying the tiger's patterns to anticipate its movements.
In March came the breakthrough. AI cameras captured the tiger returning to a fresh kill. A ranger team was dispatched. A tranquiliser dart was fired, but the tiger fled, covering 500 metres before disappearing into thick foliage. Drones followed it from above, helping rangers close in for a second shot. Within 15 minutes, the animal was safely sedated. The 230 kg beast was then caged and transported to the Bakshi Ka Talab range office. The entire operation ended without a single human injury, thanks to the combined effort of AI surveillance, aerial tracking, and coordinated fieldwork.
In the past, conserving wildlife in India often meant navigating dense jungles with binoculars, spending months waiting for elusive animals to appear, or diving into the sea with nothing more than a net. Today, conservationists are adding something new to their toolkit: algorithms, thermal cameras, drones, and even genetic samplers. From the cold, high-altitude deserts of Ladakh to the lush mangroves of the Sundarbans, across coral reefs, tiger corridors, and railway tracks, a quiet revolution is unfolding. Technology is changing not only how we protect wildlife, but how we understand it.
In Ladakh, where the air is thin and snow leopards are more myth than mammal to most, a team of researchers set out to count the uncountable. 'Tough terrain and a lack of transport facilities were major challenges,' recalls Pankaj Raina from the Department of Wildlife Protection, Leh. 'We carried rations and equipment on ponies and set up temporary camps at subzero temperatures. Some places can only be accessed in winter, when the streams freeze. So, we'd place cameras one winter and return the next to collect them.' Over two years, they trekked more than 6,000 km and installed 956 camera traps across India's largest snow leopard habitat.
But their real challenge began only after they returned with nearly half a million images. No human team could sort through that volume of footage manually. So they turned to AI. A system called CaTRAT, trained to recognise Himalayan wildlife, scanned each frame to identify species. But something more precise was required. A second programme was deployed, this one trained to analyse forehead patterns, which are more reliable. 'Only the clearest image from each sequence was used,' explains Raina. 'These were digitised and processed through AI software that scored pattern similarities, creating a photographic library of each individual snow leopard.' The study, published in PLOS One earlier this year, revealed a hopeful truth: snow leopards in Ladakh are thriving. And for the first time, India now has a national photo library of snow leopards—a visual archive that will enable researchers to monitor individual animals.
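The matching step Raina describes — scoring pattern similarity between a new photograph and a library of known individuals — can be sketched roughly as follows. This is an illustrative sketch only: the feature vectors, IDs, and threshold are invented stand-ins, not the actual software used in the study.

```python
# Hypothetical sketch of individual identification by pattern similarity.
# The numeric vectors stand in for forehead-pattern descriptors that real
# software would extract from each photograph.
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def identify(features, library, threshold=0.9):
    """Return the best-matching known individual, or register a new one.

    library maps individual IDs to their reference feature vectors.
    """
    best_id, best_score = None, 0.0
    for animal_id, ref in library.items():
        score = cosine_similarity(features, ref)
        if score > best_score:
            best_id, best_score = animal_id, score
    if best_score >= threshold:
        return best_id, best_score
    new_id = f"SL-{len(library) + 1:03d}"  # unmatched: add a new individual
    library[new_id] = features
    return new_id, best_score

library = {"SL-001": [0.9, 0.1, 0.4], "SL-002": [0.2, 0.8, 0.5]}
print(identify([0.88, 0.12, 0.41], library))  # close match to SL-001
```

Scoring every new image against the library is what turns a pile of camera-trap frames into a count of distinct animals: high-similarity matches are re-sightings, while low scores indicate a previously unrecorded individual.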
Far to the south, in the forested corridor between Walayar and Madukkarai in Tamil Nadu, a different crisis was unfolding. Since 2008, 11 elephants had died in train collisions along a single seven-km stretch of track. In 2024, the Coimbatore Forest Division responded by installing an AI-powered thermal surveillance system. The setup involves cameras that detect heat signatures in real time, capable of spotting large mammals even in pitch darkness or heavy rain. The moment an elephant is detected near the tracks, the system sends instant alerts to train operators and forest teams. In its very first year, the system generated over 5,000 alerts, enabled 2,500 safe elephant crossings—and recorded zero elephant deaths.
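In principle, the detection step of such a system reduces to flagging large contiguous warm regions in a thermal frame. The sketch below is purely illustrative — the grid format, temperature threshold, and minimum blob size are assumptions, not the Coimbatore deployment's actual logic:

```python
# Illustrative sketch: flag a thermal frame if a large connected warm
# region (a possible elephant) appears. Temperatures are in deg C; the
# thresholds and frame layout are hypothetical.
WARM_C = 30.0    # pixels warmer than this count as "body heat"
MIN_PIXELS = 4   # minimum connected warm area worth an alert

def warm_region_sizes(frame):
    """Sizes of 4-connected warm-pixel regions in a 2-D temperature grid."""
    rows, cols = len(frame), len(frame[0])
    seen, sizes = set(), []
    for r in range(rows):
        for c in range(cols):
            if frame[r][c] >= WARM_C and (r, c) not in seen:
                stack, size = [(r, c)], 0
                seen.add((r, c))
                while stack:  # flood-fill the connected warm region
                    y, x = stack.pop()
                    size += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and frame[ny][nx] >= WARM_C
                                and (ny, nx) not in seen):
                            seen.add((ny, nx))
                            stack.append((ny, nx))
                sizes.append(size)
    return sizes

def should_alert(frame):
    """True if any warm region is big enough to be a large mammal."""
    return any(s >= MIN_PIXELS for s in warm_region_sizes(frame))

night_frame = [
    [24, 24, 25, 24],
    [24, 33, 34, 24],
    [25, 33, 34, 24],
    [24, 24, 25, 24],
]
print(should_alert(night_frame))  # a 2x2 warm blob clears the threshold
```

Requiring a minimum connected area is what lets such systems ignore small animals and sensor noise while still working in darkness or rain, where visible-light cameras fail.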
Technology is also transforming how humans coexist with big cats. In Maharashtra's Tadoba-Andhari Tiger Reserve, AI-enabled cameras were installed on the edges of 13 villages starting in 2023. These motion-sensitive devices don't just record tiger activity—they analyse it, sending real-time alerts to villagers when tigers are nearby. The system has worked so well that it caught the attention of Prime Minister Modi, who mentioned the effort during the 110th episode of Mann Ki Baat.
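One practical detail in any village-alert system like this is rate-limiting: a tiger loitering near one camera should trigger one timely warning, not a flood of duplicates. A minimal cooldown scheme — with the class name, camera IDs, and 30-minute window all invented for illustration — might look like:

```python
# Hypothetical sketch: suppress repeat alerts from the same camera
# within a cooldown window.
COOLDOWN_S = 30 * 60  # 30 minutes, an assumed value

class AlertGate:
    def __init__(self, cooldown_s=COOLDOWN_S):
        self.cooldown_s = cooldown_s
        self.last_sent = {}  # camera_id -> timestamp of last alert sent

    def should_send(self, camera_id, now_s):
        """True if an alert from this camera should go out at time now_s."""
        last = self.last_sent.get(camera_id)
        if last is None or now_s - last >= self.cooldown_s:
            self.last_sent[camera_id] = now_s
            return True
        return False

gate = AlertGate()
print(gate.should_send("village-3-east", 0))     # first sighting: alert
print(gate.should_send("village-3-east", 600))   # 10 min later: suppressed
print(gate.should_send("village-3-east", 2000))  # past cooldown: alert again
```

Tracking the last alert per camera, rather than globally, means a second tiger appearing at a different village edge still gets through immediately.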

Related Articles

AI hallucinations: Building trust in age of confident lies

Hans India

Artificial intelligence has reached levels of sophistication that were once unthinkable - excelling at problem-solving, language generation, and even creative design. Yet beneath these advances lies a challenge that refuses to fade: AI hallucinations. These aren't minor typos or harmless quirks - they're an inherent side-effect of how today's large language models work, and they demand serious attention as AI moves deeper into high-stakes domains. The question isn't whether AI will continue to hallucinate - evidence suggests it's intrinsic to current architectures. The real question is how quickly we can build the oversight and validation systems needed to enjoy AI's benefits while minimizing the risks of confident, convincing falsehoods. As artificial intelligence transforms industries from healthcare to finance, understanding and addressing the hallucination phenomenon becomes critical for maintaining public trust and ensuring beneficial outcomes. This represents more than a technical challenge - it's a fundamental question about how we integrate powerful but imperfect tools into systems that affect real lives. The opportunity before us is clear: proactive governance and robust validation frameworks can unlock AI's transformative potential while protecting against its inherent limitations.

Understanding the Hallucination

AI hallucinations occur when a model produces information that is false or fabricated but delivered with apparent confidence. Unlike human hallucinations, which involve perception, AI's fabrications are statistical guesses - generated when the model fills gaps in knowledge with patterns from its training data. The scope of this challenge is more extensive than many realize. In legal testing, Stanford researchers found that some AI legal tools hallucinated between 58 per cent and 82 per cent of the time when answering case-law queries - often inventing plausible-sounding citations.
This isn't an occasional error; it's systematic unreliability in one of society's most precision-dependent fields. Perhaps most concerning is how confidently AI systems present false information. Research on 'high-certainty hallucinations' demonstrates that models can express extreme confidence while providing incorrect information, with studies showing AI systems maintain high certainty scores even when generating fabricated content. This creates a particularly dangerous dynamic where AI systems deliver false information with the same confidence level as accurate responses, making it difficult for users to distinguish between reliable and unreliable outputs.

Why AI Creates False Realities

The causes of AI hallucinations are embedded deep within the technology itself, making them challenging to eliminate through simple fixes or updates. Understanding these root causes is essential for developing effective mitigation strategies. Among the root causes, 'data gaps and bias' represent the most fundamental challenge. If the training data is incomplete or skewed, the model will fill in with similar patterns, which can introduce fabricated details. AI systems learn from vast datasets that inevitably contain inaccuracies, contradictions, and biases. When faced with queries that require information not well-represented in their training data, these systems extrapolate from similar but ultimately irrelevant examples, creating convincing but false responses. While architectural limitations create additional vulnerabilities, domain mismatch amplifies problems when general-purpose models encounter specialized contexts. Further, reasoning complexity also creates an unexpected paradox in AI development.

Real-World Consequences Across Critical Sectors

The risks of AI hallucinations extend far beyond academic exercises, creating tangible consequences across sectors that form the backbone of modern society.
Legal systems face unprecedented challenges as AI-generated fake legal citations appear in filings, sometimes going unnoticed until late in proceedings. Healthcare applications present life-and-death implications when AI systems hallucinate medical information. Hallucinations in medical contexts can be dangerous, leading to incorrect diagnoses, inappropriate treatment recommendations, or false assurances about drug interactions. However, research into frameworks like CHECK, which grounds AI in clinical databases, offers hope - baseline hallucinations dropped to 0.3 per cent with structured retrieval systems. Further, business operations face direct financial and reputational consequences from AI hallucinations.

How Industry Is Responding

While exact spending figures remain proprietary, hallucination reduction is consistently cited among the top priorities for major AI labs. The industry's response reflects both the urgency of the challenge and the complexity of potential solutions. Retrieval-Augmented Generation (RAG) has emerged as one of the most promising approaches to check hallucinations, while specialized domain datasets represent another critical intervention. Using vetted, structured, and diverse data to minimize bias and fill gaps helps create more reliable AI systems. Medical AI trained on carefully curated clinical data shows markedly lower hallucination rates than general-purpose systems, suggesting that domain-specific approaches can achieve higher reliability standards. Human-in-the-loop validation and reasoning verification layers are some of the other approaches.

Paradox of Progress

One of the most counterintuitive findings in recent AI research is that as reasoning ability improves, hallucination risk can also rise. Multi-step reasoning chains introduce more chances for errors to propagate, which explains why some cutting-edge reasoning models have higher hallucination rates than their predecessors.
This paradox highlights a fundamental tension in AI development: capability and reliability don't always improve in sync. Advanced models that can solve complex mathematical problems or engage in sophisticated analysis may simultaneously be more prone to fabricating information in areas outside their expertise. This disconnect between capability and reliability makes robust safeguards essential, particularly as AI systems take on increasingly complex tasks.

Building Trust Through Transparency

Total elimination of hallucinations may be impossible given current AI architectures, so the focus must shift toward transparency and appropriate risk management. This approach acknowledges AI's limitations while maximizing its benefits through careful deployment and clear communication. The risk of hallucinations can be minimized through transparency initiatives, confidence scoring systems and domain-appropriate deployment.

Responsible Deployment and Global Cooperation

AI hallucinations represent a global challenge similar to cybersecurity or internet governance, demanding cross-border cooperation and coordinated responses. The interconnected nature of AI development and deployment means that solutions developed in one country can benefit users worldwide, while failures can have international implications. International collaboration can accelerate progress through shared datasets of known hallucinations, international evaluation standards, and collaborative research and development. Different countries' experiences with AI regulation and deployment provide valuable learning opportunities for developing effective approaches to hallucination mitigation. Effective changes in the AI education system and robust policy frameworks also hold the key.

The Call to Action

If we act now by investing in transparency, validation, and collaborative governance, we can ensure AI becomes a trustworthy partner rather than an unreliable narrator.
The aim is not perfection, but partnership: pairing AI's scale and speed with human judgment to unlock potential while protecting against its flaws. This represents a critical moment in AI development. The choices made today about how to address hallucinations will shape public trust in AI for decades to come. By acknowledging these challenges honestly and implementing robust safeguards, we can build a future where AI enhances human capabilities without compromising truth and accuracy. The opportunity before us extends beyond solving a technical problem to creating a new model for human-machine collaboration. When AI systems acknowledge their limitations and humans provide appropriate oversight, the combination can achieve results that neither could accomplish alone. The choice is ours - proactive safeguards today, or costly corrections tomorrow. The time for action is now, while we still have the opportunity to shape AI's trajectory toward greater reliability and trustworthiness. That's the foundation for artificial intelligence that truly serves humanity's highest aspirations while respecting our fundamental need for truth and accuracy in an increasingly complex world.

(Krishna Kumar is a Technology Explorer & Strategist based in Austin, Texas, USA. Rakshitha Reddy is an AI Engineer based in Atlanta, Georgia, USA)

Code Green

New Indian Express

Dy CM Ajit Pawar urges sugarcane farmers, sugar millers in Sangli to use AI tech

United News of India

West Sangli, Aug 16 (UNI) Maharashtra's Deputy Chief Minister Ajit Pawar today urged sugarcane farmers and sugar millers in Sangli district to make effective use of AI technology. He noted that successful experiments using AI have been conducted in the agricultural sector, where AI is used effectively to save water and fertiliser and increase tonnage. He also said that the district should take the initiative to hold an Agri-Hackathon, which would benefit fruit growers and other farmers. Addressing a review meeting of various development works and schemes at the district collector's office, he said that the administration should take people's representatives into confidence and prioritise solving the problems of the citizens. The Deputy CM also said that funds from the District Planning Committee should be spent on the right works and within the prescribed time. Pawar said attention should be paid to ensuring that the funds are not wasted and the works are of high quality. He also indicated that the relevant agencies should ensure that the funds received by the District Planning Committee for local self-government bodies do not remain unspent under any circumstances. UNI SSS RN
