21-05-2025
How Generative AI Is Shaping Public Discourse During Geopolitical Tensions
You're reading Entrepreneur India, an international franchise of Entrepreneur Media.
The scale and sophistication of AI-generated misinformation in India are rising sharply. In 2025 alone, the country is projected to lose INR 70,000 crore to deepfake-related frauds, according to Pi-Labs' "Digital Deception Epidemic" report. Since 2019, deepfake-linked cybercrime cases have surged by over 550 per cent. As geopolitical tensions escalate, generative AI (GenAI) is increasingly being misused to manipulate narratives and stir public panic.
With social media platforms transcending borders and verification frameworks, the spread of fake content, often indistinguishable from reality, has become alarmingly easy. The question now is: how prepared is India to defend itself against these emerging threats?
A new age of information warfare
Experts warn that both state and non-state actors are leveraging GenAI to launch influence campaigns during moments of national vulnerability.
"Hostile actors are leveraging AI-driven technologies, particularly generative models and deep learning algorithms, to manipulate public narratives in India. These actors utilise AI for large-scale content generation, sentiment manipulation, and disinformation amplification across digital platforms," shared Ankush Sabharwal, Founder and CEO, CoRover.
"Specifically, adversaries are deploying Natural Language Generation (NLG) models to produce synthetic news articles, social media posts, and deepfake multimedia content that appear highly credible and are designed to sow discord or confusion among the population."
Sabharwal added that these campaigns often exploit regional languages, cultural sensitivities, and fault lines through micro-targeting, especially during elections or military flare-ups.
A race against time
As the volume of synthetic content grows at an accelerating pace, India's cyber defence systems are racing to keep up.
"India has made commendable progress through initiatives like the Indian Computer Emergency Response Team (CERT-In), which employs AI/ML-based threat intelligence systems to monitor anomalies and respond to cyber incidents proactively," Sabharwal noted.
Yet vulnerabilities remain, particularly in coordination and speed. "Although India has made great progress in strengthening its cyber defences and media integrity, the speed at which AI-driven misinformation is spreading necessitates more flexible and comprehensive countermeasures," explained Ankit Sharma, Senior Director and Head, Solutions Engineering, Cyble.
Sharma added, "India's digital borders need dynamic, real-time monitoring with quick takedown mechanisms, just like its physical borders, which are protected by layered surveillance."
High risk in Tier-2 and Tier-3 regions
Sharma also believes that misinformation and deepfakes pose a greater danger in Tier-2 and Tier-3 regions, where digital literacy remains low but smartphone and social media usage is high. "The risk is extremely high," said Sharma. "When AI-created disinformation, particularly voice recordings or videos in local languages, is distributed into these communities, the information is ingested without fact-checking…One effectively written AI-fabricated instance of misinformation can trigger real-world effects ranging from social tension to election manipulation or cyberattacks on critical infrastructure," he noted.
The legal void
There is growing consensus that India urgently needs a dedicated legal framework to combat AI-powered fake news, especially in times of national crisis.
"India needs, at an urgent level, a legal framework to deal with the weaponisation of AI in the information space, particularly in the event of war or national emergencies," Sharma emphasised. He added that the current IT laws are at times inadequate in dealing with the sophistication of synthetic media, transnational attribution, or algorithmic amplification.
Concluding, Sabharwal said, "A framework should mandate AI watermarking, source traceability, and accountability for hosting platforms while ensuring innovation isn't stifled."