Latest news with #DeepLearning

Hindustan Times
01-08-2025
- Science
- Hindustan Times
AI for climate resilience and environmental monitoring
India is navigating the twin frontiers of our time: an escalating climate crisis and a fast-evolving technological revolution. In this moment, it holds a powerful opportunity to lead the world in crafting climate resilience through Artificial Intelligence (AI). With the nation co-chairing major AI-environment task forces at the UN and G20, this is more than a moment of influence; it is a call to action. AI, when paired with satellite data, can be our eyes in the sky and our early warning system on the ground, tracking deforestation in real time, predicting floods before they strike, and holding polluters accountable with data-driven precision. But to unlock this promise, we must build a framework that is not just tech-savvy but also just, transparent, and accessible to all. The future of climate action is digital, and India has the chance to code it right.

According to the World Meteorological Organization (WMO) and other UN organisations, India suffers an estimated annual loss of around $87 billion due to climate-related disasters, a staggering figure that underscores the urgency of predictive and preventive climate action. From heatwaves in Delhi to deadly cyclones along the eastern coast, the impact is felt in every part of the nation with growing intensity. In this situation, AI, combined with remote sensing technologies and geospatial satellite data, can drive transformative change. AI can be a silent watcher, monitoring ecosystems in real time and spotting illegal logging, shrinking mangroves, glacier retreat, and forest fires with unmatched speed and accuracy. Tools such as Google Earth Engine, along with indigenous systems like India's Bhuvan and RISAT satellites, generate the crucial data, which AI algorithms can swiftly process to flag environmental threats.

Beyond monitoring, machine learning and deep learning models can reimagine how we predict disasters today, analysing historical weather trends, soil conditions, and atmospheric changes to forecast floods, landslides, and cyclones, saving thousands of lives. AI is also enhancing emissions tracking by monitoring pollution from factories, traffic, and agricultural practices in near real time, helping ensure India's carbon accounting remains accurate and aligned with its Nationally Determined Contributions (NDCs) under the Paris Agreement.

While AI holds great promise for climate resilience, making it a reality takes more than technology and data. It demands a supportive ecosystem, one that includes real-world pilots, forward-looking policies, inclusive economic planning, and strong collaboration across sectors. Several promising case studies and strategic pathways show how India can lead by example. In Tamil Nadu, an AI-based flood forecasting model has already helped predict urban flooding with greater accuracy, aiding disaster preparedness in Chennai. AI-powered systems, like the Pantera system launched in the Pench Tiger Reserve in Maharashtra, can distinguish between smoke and clouds, reducing false alarms; these systems use infrared technology to detect fires both day and night, enabling 24x7 monitoring. Globally, AI is transforming emissions tracking and climate resilience, from India's flood forecasting in Tamil Nadu and fire detection in Maharashtra to G20 innovations such as electric vehicles in the US and Brazil's AI-driven deforestation monitoring, all of which underscores the need for supportive policies and cross-sector collaboration.
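To make the flood-forecasting idea above concrete, the following is a minimal sketch in Python, using scikit-learn, of how historical rainfall, soil-moisture and river-level readings could feed a short-horizon flood-risk classifier. All feature names, thresholds and data here are illustrative assumptions; they do not describe the Tamil Nadu system or any deployed model.

# Minimal illustrative sketch: a flood-risk classifier trained on
# historical hydro-meteorological features. All data here is synthetic;
# a real system would use gauged rainfall, soil moisture, river levels, etc.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(42)
n = 5000

# Hypothetical daily features: 24h rainfall (mm), antecedent soil moisture (0-1),
# river level anomaly (m), and a drainage-capacity index (arbitrary units).
X = np.column_stack([
    rng.gamma(2.0, 20.0, n),        # rainfall_24h_mm
    rng.uniform(0.1, 0.9, n),       # soil_moisture
    rng.normal(0.0, 0.5, n),        # river_level_anomaly_m
    rng.uniform(0.0, 1.0, n),       # drainage_index
])

# Synthetic label: flooding becomes likely when heavy rain meets saturated soil.
risk_score = 0.02 * X[:, 0] + 2.0 * X[:, 1] + 1.5 * X[:, 2] - 1.0 * X[:, 3]
y = (risk_score + rng.normal(0, 0.5, n) > 2.5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))

In practice, a forecasting system of this kind would be trained on gauged hydro-meteorological records and satellite-derived products rather than synthetic data, and would be validated against historical flood events before informing any decisions.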
In parallel, India can spearhead South-South collaboration to tailor AI models for tropical, drought-prone, and monsoon-affected landscapes. Under the G20's push for inclusive AI governance, building a Global South Working Group and a shared AI knowledge hub can democratise access to computing resources, datasets and regulatory best practices. Prime Minister (PM) Narendra Modi outlined India's AI vision at the G20, one that promotes inclusivity and global equity in addition to innovation. To guarantee that AI development is open, equitable, and available to all countries, not just a select few, he urged the establishment of strong international standards. According to Modi, ethical AI governance must put developing nations' particular needs first, enabling them to overcome historical obstacles and advance sustainable development, renewable energy, and climate resilience.

To translate the promise of AI into tangible impact for climate resilience, India must take a multi-pronged approach. First, integrating AI into national climate policy is crucial. Missions under the National Action Plan on Climate Change (NAPCC), such as those focused on Himalayan ecosystems and sustainable agriculture, offer fertile ground for AI-powered scale-ups. With tools like satellite imaging, predictive analytics, and remote sensing, these missions can benefit from sharper decision-making and real-time responsiveness. Second, institutional capacity must be strengthened. Platforms like NITI Aayog, IndiaAI, the Global Partnership on Artificial Intelligence (GPAI), and NEERI's Sustainovate 2025 can catalyse mentorship and scalable innovation. Third, India must actively launch supportive pilots and regulatory frameworks. Successful models like AI-led flood forecasting in Chennai, heat vulnerability mapping in Delhi, and wildfire detection in the Pench Reserve must be scaled across other states through inclusive funding and smart governance mechanisms. Equally important is the need to safeguard transparency and equity. This means building open-access AI data ecosystems, mandating climate impact disclosures, embedding community-driven indices into AI decision frameworks, and ensuring that marginalised groups are neither excluded nor further disadvantaged. Finally, India must champion South–South collaboration. By operationalising PM Modi's G20 satellite mission proposal, India can help pool sensing, processing, and AI resources to create a shared digital public good for the Global South. This will not only democratise access to cutting-edge climate technologies but also foster a more equitable, cooperative, and resilient planetary future.

Innovation alone is not enough; it must be backed by strong institutional will. For India to lead in AI-driven climate resilience, it must take decisive policy steps. First, the government should incentivise the development of clean and sustainable AI infrastructure through targeted subsidies, green procurement policies, and energy-efficient data centres. Second, fostering cross-border AI collaboration through platforms like the G20, GPAI, and South–South partnerships is essential to share knowledge, tools, and technologies tailored to diverse climatic challenges. Finally, India must embed data justice into its AI frameworks by ensuring that socio-environmental equity becomes a foundational principle in AI design, deployment, and governance.
The future of climate action is digital, and India now stands at a pivotal moment to code that future with foresight, fairness, and purpose. This article is authored by Tauseef Alam, research lead, Rajya Sabha and Zainab Fatima, student, Banaras Hindu University.


Times of India
08-07-2025
- Health
- Times of India
IIT Delhi announces 6-month online executive programme focused on AI in Healthcare: Check details here
The Indian Institute of Technology (IIT) Delhi, in partnership with TeamLease EdTech, has introduced a comprehensive online executive programme in Artificial Intelligence (AI) in Healthcare, designed for working professionals across diverse domains. Scheduled to begin on November 1, 2025, the programme seeks to bridge the gap between healthcare and technology by imparting industry-relevant AI skills to professionals, including doctors, engineers, data scientists, and med-tech entrepreneurs. Applications are currently open and will remain so until July 31, 2025. Interested professionals are encouraged to apply through the official IIT Delhi CEP portal.

This initiative is part of IIT Delhi's eVIDYA platform, developed under the Continuing Education Programme (CEP), and aims to foster applied learning through a blend of theoretical instruction and hands-on experience using real clinical datasets. The course offers an opportunity to upskill with one of India's premier institutes and contribute meaningfully to the rapidly evolving field of AI-powered healthcare.

Programme overview
To help prospective applicants plan, here are the programme's key details:
• Course duration: November 1, 2025 – May 2, 2026
• Class schedule: Online, conducted over weekends
• Programme fee: ₹1,20,000 + 18% GST (payable in two installments)
• Application deadline: July 31, 2025
• Learning platform: IIT Delhi Continuing Education Programme (CEP) portal

Who can benefit from this course?
The programme is tailored for a wide spectrum of professionals who are either involved in healthcare or aspire to work at the intersection of health and technology. You are an ideal candidate if you are:
• A healthcare practitioner or clinician with limited or no background in coding or artificial intelligence, but curious to explore AI's applications in medicine.
• An engineer, data analyst, or academic researcher engaged in health-tech innovations or biomedical computing.
• A med-tech entrepreneur or healthcare startup founder looking to incorporate AI-driven solutions into your business or products.

Curriculum overview
Participants will engage with a carefully curated curriculum that balances core concepts with real-world applications. Key modules include:
• Introduction to AI, Machine Learning (ML), and Deep Learning (DL) concepts.
• How AI is used to predict disease outcomes and assist in clinical decision-making.
• Leveraging AI in population health management and epidemiology.
• Application of AI for hospital automation and familiarity with global healthcare data standards like FHIR and DICOM (a small illustration of working with DICOM data follows at the end of this article).
• Over 10 detailed case studies showcasing successful AI applications in hospitals and clinics.
• A hands-on project with expert mentorship from faculty at IIT Delhi and clinicians from AIIMS, enabling learners to apply their knowledge to real clinical challenges.

Learning outcomes you can expect
By the end of this programme, participants will be equipped to:
• Leverage AI technologies to enhance clinical workflows, automate processes, and support evidence-based decision making in healthcare.
• Work effectively with diverse data sources such as Electronic Medical Records (EMRs), radiology images, genomics data, and Internet of Things (IoT)-based health devices.
• Develop and deploy functional AI models tailored for practical use in hospitals, diagnostics, and public health infrastructure.
• Earn a prestigious certification from IIT Delhi, enhancing your professional credentials in the health-tech domain.
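As a small, hypothetical illustration of the kind of hands-on work with healthcare data standards that such a curriculum implies, the snippet below uses the open-source pydicom library (an assumed choice; the programme's actual tooling is not specified) to load a radiology image and inspect a few standard DICOM header fields.

# Illustrative only: reading a DICOM file with pydicom and preparing the
# pixel data for a downstream model. The file path is a placeholder.
import numpy as np
import pydicom

ds = pydicom.dcmread("example_chest_xray.dcm")  # hypothetical file

# A few standard DICOM header fields (availability varies by file).
print("Modality:      ", ds.get("Modality", "n/a"))
print("Study date:    ", ds.get("StudyDate", "n/a"))
print("Rows x Columns:", ds.get("Rows", "?"), "x", ds.get("Columns", "?"))

# Convert pixel data to a normalised float array for model input.
pixels = ds.pixel_array.astype(np.float32)
pixels = (pixels - pixels.min()) / (pixels.max() - pixels.min() + 1e-8)
print("Image tensor shape:", pixels.shape)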


Coin Geek
11-06-2025
- Science
- Coin Geek
China launches first AI-based nuclear warhead inspector
A group of Chinese researchers has unveiled a solution for distinguishing real nuclear warheads from decoys, aiming to streamline global arms verification. According to a report, the solution leverages artificial intelligence (AI) to inspect nuclear warheads, telling real nukes apart from replicas. Dubbed the 'Verification Technical Scheme for Deep Learning Algorithm Based on Interactive Zero Knowledge Protocol,' the AI-based solution has recorded early successes in testing.

The AI-powered system can identify nukes without accessing classified military data in the warheads. Researchers achieved this by placing a 400-hole polythene wall between the AI-based inspector and the nuclear warhead to scramble neutron signals while still allowing radiation signals to pass through. The system also combined cryptography and nuclear physics with Monte Carlo simulations for training: the researchers trained the model on millions of simulated nuclear warhead component cross-sections containing a mix of radioactive materials. They noted that the AI system showed promise in spotting chain-reaction capability, a clear indicator of a genuine weapon. (A simplified, purely illustrative sketch of this kind of training pipeline appears at the end of this article.)

'In nuclear warhead component verification for arms control, it is critical to ensure that sensitive weapon design information is not acquired by inspectors while maintaining verification effectiveness,' said the research team.

The team identified a raft of challenges in their research paper. Firstly, training the AI using real nuclear warheads proved to be a major hassle. Secondly, the researchers faced the challenge of convincing top Chinese military officials that the AI model would not leak classified technology secrets. They also say that convincing the U.S. to abandon outdated nuclear verification methods is proving to be an issue.

AI is playing an increasing role in global militaries
Global armed forces are increasingly turning to emerging technologies to enhance their national defense capabilities. Turkey has unveiled an AI-powered tool for classifying terrorism activities, while the Japanese military has confirmed a seven-step plan for full AI integration into its processes. Despite the frenzied approach, several partnerships are emerging for responsible AI use in militaries. The U.S. and Nigeria have struck a bilateral agreement for safe military use cases for AI, while China and Russia have entered a similar collaboration with international best practices at the core.

China targets an AI application cooperation center in partnership with SCO nations
As China's local AI ecosystem grows, the Asian superpower has its eyes on improving the pace of digitization among its neighbors. China is eyeing the launch of an AI application cooperation center with the Shanghai Cooperation Organization (SCO). The SCO, founded in 2001, comprises Russia, Kazakhstan, Kyrgyzstan, Tajikistan, and Uzbekistan. China is leveraging the SCO's founding document to push for increased cooperation in emerging technologies. To achieve this, China will establish an AI cooperation center to foster regional collaboration on real-world AI applications. If established, the center will support AI research and development among SCO nations via open-source models and cross-border information-sharing systems, back the development of sovereign AI models, and power the development of uniform regional standards.
The report mentions an ambitious plan to place sustainability at the heart of the regional AI development effort. Further plans include personnel exchanges and cross-border regional AI training to support AI development in local ecosystems. 'Leveraging multilateral platforms like the SCO, China can help the developing countries to adopt cutting-edge technologies such as AI, contributing to a more multipolar and equitable global technology framework,' said Zhang Hong, a research fellow at the Chinese Academy of Social Sciences.

China has already taken the first step, pitching the plans to SCO member states at the 2025 China-SCO AI Cooperation Forum. Previously, China has made efforts to set minimum global AI standards, with a UN resolution garnering significant support from member states. Among SCO member states, China has the most advanced AI ecosystem, with Chinese LLMs like DeepSeek challenging the dominance of Western counterparts. In terms of regulation, Chinese administrators have rolled out strict guidelines for AI applications in schools and other key sectors of the economy.

SCO member states will build on existing AI initiatives
SCO nations have made considerable strides with AI in their local economies. While limited in its research, Russia has partnered with China to explore safe military applications of AI technologies. After setting up a special committee to oversee local AI development, Kazakhstan has signed an MoU with South Korea for public sector AI use. Kyrgyzstan is supercharging the Middle Corridor digitization efforts with AI and Web3 technologies.
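To give a rough sense of the kind of training pipeline described earlier in this article, here is a heavily simplified sketch: synthetic, binned 'neutron signature' counts stand in for Monte Carlo-simulated warhead components, and a small neural network learns to separate genuine items from decoys. Every detail, from the spectrum shapes to the class labels, is an invented assumption; this is not the CIAE scheme and uses no real nuclear data.

# Toy stand-in for the described workflow: simulate binned "neutron count"
# signatures for genuine items and decoys, then train a classifier.
# Purely illustrative; no relation to real warhead physics or data.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
n_per_class, n_bins = 3000, 32

def simulate_signature(genuine: bool) -> np.ndarray:
    # Genuine items get a slightly shifted, noisier spectrum than decoys.
    base = np.linspace(0, 4, n_bins)
    rate = np.exp(-base) * (180 if genuine else 150)
    rate += (10 if genuine else 0) * np.exp(-(base - 2.0) ** 2)
    return rng.poisson(rate).astype(float)

X = np.array([simulate_signature(True) for _ in range(n_per_class)] +
             [simulate_signature(False) for _ in range(n_per_class)])
y = np.array([1] * n_per_class + [0] * n_per_class)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=300, random_state=0)
clf.fit(X_train, y_train)
print("Held-out accuracy:", round(clf.score(X_test, y_test), 3))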


TechCrunch
03-06-2025
- Entertainment
- TechCrunch
TC Sessions: AI Trivia Countdown — score 2-for-1 tickets
Think you know which AI assistant was the first to use natural language processing for everyday tasks? Or which researcher coined the term 'deep learning,' revolutionizing AI? If so, this is your chance to win two TechCrunch Sessions: AI tickets for the price of one.

Test your AI smarts — win a chance to attend TC Sessions: AI
Crush a few quick AI trivia questions, and if you come out on top, check your inbox — a special deal might be waiting for you. Each day brings a new round of questions, so don't sweat it if today stumps you — there's still time to play. But act fast — the trivia ends tomorrow, June 4, and you won't want to miss this shot at discounted access to the biggest conversations in AI.

How to play
Step 1: Take today's AI Trivia Countdown quiz.
Step 2: Check your inbox to see if you've scored the special code.
Step 3: Use the code to grab 2-for-1 tickets to TechCrunch Sessions: AI.

Don't just keep up with AI — be part of it on June 5 at UC Berkeley's Zellerbach Hall. Play the trivia. Win the deal.


The Star
31-05-2025
- Science
- The Star
China unveils world's first AI nuke inspector
Chinese scientists have developed an artificial intelligence system that can distinguish real nuclear warheads from decoys, marking the world's first AI-driven solution for arms control verification. The technology, disclosed in a peer-reviewed paper published in April by researchers with the China Institute of Atomic Energy (CIAE), could bolster Beijing's stance in stalled international disarmament talks while fuelling debate on the role of AI in managing weapons of mass destruction.

The project, built on a protocol jointly proposed by Chinese and American scientists more than a decade ago, faced three monumental hurdles: training and testing the AI using sensitive nuclear data (including real warhead specifications); convincing Chinese military leaders that the system would not leak technical secrets; and persuading sceptical nations, particularly the United States, to abandon Cold War-era verification methods. So far, only the first step has been cleared.

'Due to the classified nature of nuclear warheads and component designs, specific data cannot be disclosed here,' the CIAE team wrote in their Atomic Energy Science and Technology paper. The admission highlights the delicate balance between scientific transparency and the inevitable opacity around nuclear arms control efforts.

The AI verification protocol, dubbed 'Verification Technical Scheme for Deep Learning Algorithm Based on Interactive Zero Knowledge Protocol', employs a multi-stage process blending cryptography and nuclear physics. Using Monte Carlo simulations, researchers generated millions of virtual nuclear components, some containing weapons-grade uranium, others disguised with lead or low-enriched materials. A multi-layer deep learning network was trained on neutron flux patterns, achieving extremely high accuracy in distinguishing real warheads.

To prevent the AI from gaining direct access to top-secret nuclear weapon designs, a 400-hole polythene wall was erected between the inspection system and the real warhead, scrambling neutron signals and masking warhead geometries while allowing radiation signatures to pass. If inspectors and host nations engage in several rounds of randomised verification, the odds of successful deception can be reduced to nearly zero, according to the study. (A back-of-the-envelope illustration of this point follows at the end of the article.)

The system's linchpin is its ability to verify chain-reaction capability, the essence of a nuclear weapon, without exposing design details. The AI knows nothing about the warhead's engineering, yet it can still determine authenticity from partially obscured radiation signals.

CIAE, a subsidiary of the China National Nuclear Corporation (CNNC), serves as a critical research hub for nuclear weapons technology. Yu Min, a nuclear physicist from the institute, pioneered groundbreaking advances in miniaturising China's nuclear arsenal, devising unique technical solutions that earned him the revered title of 'Father of China's Hydrogen Bomb'.

The disclosure arrives amid frozen US-China nuclear negotiations. While US President Donald Trump has repeatedly sought to restart talks, Beijing has resisted, citing disparities in arsenal sizes (China's estimated 600 warheads vs America's 3,748) and distrust of legacy verification methods.

'In nuclear warhead component verification for arms control, it is critical to ensure that sensitive weapon design information is not acquired by inspectors while maintaining verification effectiveness,' the CIAE team wrote.
'Current solutions primarily rely on information barrier methods developed by national laboratories in Britain, the United States and Russia. These barriers constitute complex automated systems that process highly classified measurement data during inspections, ultimately displaying only binary 'yes/no' results.

'However, such systems suffer from multiple drawbacks: their inherent complexity demands mutual trust between inspecting and inspected parties against hidden back doors, while excessive dependence on electronic systems creates vulnerabilities for potential exploitation of electronic/IT back doors to illicitly access sensitive information,' they added.

To ensure trust and transparency, the CIAE team said the AI could be jointly coded, trained and verified by the inspecting and inspected parties. Before testing the nuclear warheads, the AI deep learning software 'must be sealed', they said.

The technology's unveiling coincides with heightened global anxiety over AI militarisation. While Washington and Beijing have jointly banned AI from nuclear launch decisions, the construction and deployment of large-scale smart defence infrastructure, such as the Golden Dome proposed by the Trump administration, would inevitably employ AI to guide or even control automated weapons to achieve a quick response on a global scale. -- South China Morning Post
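As a back-of-the-envelope illustration of the randomised-verification point made above, suppose each independent inspection round catches a decoy with probability p; the chance that a decoy survives n rounds is then (1 - p)^n. The values below are assumptions chosen only to show how quickly that probability collapses toward zero.

# Illustrative arithmetic: probability that a decoy escapes detection after
# n independent randomised verification rounds, assuming a fixed per-round
# detection probability p (both values are hypothetical).
p = 0.90  # assumed per-round probability of catching a decoy
for n in (1, 3, 5, 10):
    escape = (1 - p) ** n
    print(f"{n:2d} rounds: escape probability = {escape:.2e}")

With an assumed per-round detection probability of 0.9, ten rounds leave a decoy roughly a one-in-ten-billion chance of slipping through, which is the intuition behind the study's 'nearly zero' claim.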