
Local content, stories should be incentivised: Ashwini Vaishnaw at WAVES
Addressing the Global Media Dialogue at the World Audio Visual and Entertainment Summit (WAVES) here, Union minister Ashwini Vaishnaw said tie-ups between government, industry and creators had become indispensable as there was a greater focus on local stories.
Vaishnaw said the Global Media Dialogue was anchored in creativity, culture and collaboration. 'As a government, we must provide a fair chance for everyone to showcase their story to the world. We must incentivise local content promotion and enforce IP framework among other things,' he said, addressing the event, which was attended by external affairs minister S Jaishankar, I&B minister of state L Murugan, and representatives from several countries.
Prime Minister Narendra Modi on Thursday inaugurated WAVES, a first-of-its-kind summit in India, with the tagline 'Connecting Creators, Connecting Countries'. Participants from over 90 countries are expected to attend the event, including more than 10,000 delegates, 1,000 creators, over 300 companies and over 350 startups.
Emphasising the role of governments, Vaishnaw said they must support policies that preserve and promote all cultural forms, as these connect people across borders.
'Focus is shifting towards local stories. We aim to build people-to-people and country-to-country exchanges,' he said.
'Tie-ups with government, industry and creators have, therefore, become indispensable. Practical steps include co-production treaties to ease licenses and talent movement. We need joint funds for new tech, shared standards, and clear rules for ethical AI,' Vaishnaw said.

Related Articles


Mint
Inside India's two-track strategy to become an AI powerhouse
Bengaluru: At Google's annual I/O Connect event in Bengaluru this July, the spotlight was on India's AI ambitions. With over 1,800 developers in attendance, the recurring theme across panel discussions, product announcements and workshops was building AI capability for India's linguistic diversity. With 22 official languages and hundreds of spoken dialects, India faces a monumental challenge in building AI systems that work across this multilingual landscape.

In the demo area, this challenge was front and centre, with startups showcasing how they are tackling it. Among them was Sarvam AI, demonstrating Sarvam-Translate, a multilingual model fine-tuned on Gemma, Google's open-source large language model (LLM). Next to it, CoRover demonstrated BharatGPT, a chatbot for public services such as the one used by the Indian Railway Catering and Tourism Corporation (IRCTC).

At the event, Google announced that the AI startups Sarvam, Soket AI and Gnani are building the next generation of India's AI models by fine-tuning them on Gemma. At first glance, this might seem contradictory: three of these startups are among the four selected to build India's sovereign large language models under the ₹10,300 crore IndiaAI Mission, a government initiative to develop home-grown foundational models from scratch, trained on Indian data, languages and values.

So, why Gemma? Building competitive models from scratch is a resource-heavy task, and India does not have the luxury of building from scratch in isolation. With limited high-quality training datasets, an evolving compute infrastructure and urgent market demand, the more pragmatic path is to start with what is available. These startups are therefore taking a layered approach: fine-tuning open-source models to solve real-world problems today, while simultaneously building the data pipelines, user feedback loops and domain-specific expertise needed to train more indigenous and independent models over time.

Fine-tuning involves taking an existing large language model, already trained on vast amounts of general data, and teaching it to specialize further on focused and often local data, so that it performs better in those contexts.

Build and bootstrap

Project EKA, an open-source, community-driven initiative led by Soket, is a sovereign LLM effort being developed in partnership with IIT Gandhinagar, IIT Roorkee and IISc Bangalore. It is being designed from scratch, with training code, infrastructure and data pipelines all sourced within India. A 7 billion-parameter model is expected in the next four to five months, with a 120 billion-parameter model planned over a 10-month cycle.

"We've mapped four key domains: agriculture, law, education and defence," says Abhishek Upperwal, co-founder of Soket AI. "Each has a clear dataset strategy, whether from government advisory bodies or public-sector use cases."

A key feature of the EKA pipeline is that it is entirely decoupled from foreign infrastructure. Training happens on India's GPU cloud, and the resulting models will be open-sourced for public use. The team, however, has taken a pragmatic approach, using Gemma to run initial deployments. "The idea is not to depend on Gemma forever," Upperwal clarifies. "It's to use what's there today to bootstrap, and switch to sovereign stacks when ready."

CoRover's BharatGPT is another example of this dual strategy in action.
It currently runs on a fine-tuned model, offering conversational agentic AI services in multiple Indian languages to government clients including IRCTC, Bharat Electronics Ltd and Life Insurance Corporation. "For applications in public health, railways and space, we needed a base model that could be fine-tuned quickly," says Ankush Sabharwal, CoRover's founder. "But we have also built our own foundational LLM with Indian datasets."

Like Soket, CoRover treats its current deployments as both service delivery and dataset creation. By pre-training and fine-tuning Gemma to handle domain-specific inputs, it is trying to improve accessibility today while building a bridge to future sovereign deployments. "You begin with an open-source model. Then you fine-tune it, add language understanding, lower latency and expand domain relevance," Sabharwal explains. "Eventually, you'll swap out the core once your own sovereign model is ready."

Amlan Mohanty, a technology policy expert, calls India's approach an experiment in trade-offs: betting on models such as Gemma to enable rapid deployment without giving up the long-term goal of autonomy. "It's an experiment in reducing dependency on adversarial countries, ensuring cultural representation and seeing whether firms from allies like the US will uphold those expectations," he says.

Mint reached out to Sarvam and Gnani with detailed queries regarding their use of Gemma and its relevance to their sovereign AI initiatives, but the companies did not respond.

Why local context is critical

For India, building its own AI capabilities is not just a matter of nationalistic pride or keeping up with global trends. It is about solving problems that no foreign model can adequately address today. Think of a migrant from Bihar working in a cement factory in rural Maharashtra who visits a local clinic with a persistent cough. The doctor, who speaks Marathi, shows him a chest X-ray, while the AI tool assisting the doctor explains the findings in English, in a crisp Cupertino accent, using medical assumptions based on Western body types. The migrant understands only Hindi, and much of the nuance is lost. Far from being just a language problem, it is a mismatch in cultural, physiological and contextual grounding.

A rural frontline health worker in Bihar needs an AI tool that understands local medical terms in Maithili, just as a farmer in Maharashtra needs crop advisories that align with state-specific irrigation schedules. A government portal should be able to process citizen queries in 15 languages with regional variations. These are high-impact, everyday use cases where errors can directly affect livelihoods, the functioning of public services and health outcomes.

Fine-tuning open models gives Indian developers a way to address these urgent, ground-level needs right now, while building the datasets, domain knowledge and infrastructure that can eventually support a truly sovereign AI stack. This dual-track strategy is possibly one of the fastest ways forward, using open tools to bootstrap sovereign capacity from the ground up.

"We don't want to lose the momentum. Fine-tuning models like Gemma lets us solve real-world problems today in applications such as agriculture or education, while we build sovereign models from scratch," says Soket AI's Upperwal. "These are parallel but separate threads. One is about immediate utility, the other about long-term independence. Ultimately these threads will converge."
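As a concrete illustration of the fine-tuning step these startups describe, the sketch below attaches LoRA adapters to an open Gemma checkpoint using the Hugging Face transformers, peft and datasets libraries. It is a minimal example under assumed settings: the checkpoint name, the hypothetical indic_corpus.txt file and the hyperparameters are illustrative, not the actual pipelines used by Sarvam, Soket AI or CoRover.

```python
# Minimal sketch: LoRA fine-tuning of an open Gemma checkpoint on a small
# local-language corpus. Names and hyperparameters are illustrative assumptions.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

BASE = "google/gemma-2-2b"  # assumed open-weight checkpoint; any causal LM works

tokenizer = AutoTokenizer.from_pretrained(BASE)
model = AutoModelForCausalLM.from_pretrained(BASE)

# LoRA freezes the base weights and trains only small low-rank adapter matrices,
# which is what makes fine-tuning far cheaper than pre-training from scratch.
model = get_peft_model(
    model,
    LoraConfig(r=16, lora_alpha=32, target_modules=["q_proj", "v_proj"],
               task_type="CAUSAL_LM"),
)

# Hypothetical corpus: one domain-specific Hindi/Marathi example per line.
dataset = load_dataset("text", data_files="indic_corpus.txt", split="train")
dataset = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True,
    remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    train_dataset=dataset,
    args=TrainingArguments(output_dir="gemma-indic-lora",
                           per_device_train_batch_size=2,
                           num_train_epochs=1),
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("gemma-indic-lora")  # saves only the adapter weights
```

Because only the small adapter matrices are trained and saved, a setup like this can later be retrained on, or swapped out for, a fully indigenous base model, which is the switch-over Upperwal and Sabharwal describe.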
A strategic priority

The IndiaAI Mission is a national response to a growing geopolitical issue. As AI systems become central to education, agriculture, defence and governance, over-reliance on foreign platforms raises the risks of data exposure and loss of control.

This was highlighted last month when Microsoft abruptly cut off cloud services to Nayara Energy after European Union sanctions on its Russian-linked operations. The disruption, reversed only after a court intervention, raised alarms about how foreign tech providers can become geopolitical pressure points. Around the same time, US President Donald Trump doubled tariffs on Indian imports to 50%, showing how trade and tech are increasingly being used as leverage.

Besides reducing dependence, sovereign AI systems are also important for India's critical sectors to accurately represent local values, regulatory frameworks and linguistic diversity. Most global AI models are trained on English-dominant, Western datasets, which leaves them poorly equipped to handle the realities of India's multilingual population or the domain-specific complexity of its systems. This becomes a challenge in applications such as interpreting Indian legal judgments or accounting for local crop cycles and farming practices in agriculture.

Mohanty says that sovereignty in AI is not about isolation, but about who controls the infrastructure and who sets the terms. "Sovereignty is basically about choice and dependencies. The more choice you have, the more sovereignty you have." He adds that full-stack independence, from chips to models, is not feasible for any country, including India. Even global powers such as the US and China balance domestic development with strategic partnerships. "Nobody has complete sovereignty or control or self-sufficiency across the stack, so you either build it yourself or you partner with a trusted ally."

Mohanty also points out that the Indian government has taken a pragmatic approach by staying agnostic about the foundational elements of its AI stack. This stance is shaped less by ideology and more by constraints such as the lack of Indic data, compute capacity and ready-made open-source alternatives built for India.

India's data lacunae

Despite the momentum behind India's sovereign AI push, the lack of high-quality training data, particularly in Indian languages, remains one of its most fundamental roadblocks. While the country is rich in linguistic diversity, that diversity has not translated into digital data that AI systems can learn from.

Manish Gupta, director of engineering at Google DeepMind India, cited internal assessments that found that 72 of India's spoken languages with over 100,000 speakers had virtually no digital presence. "Data is the fuel of AI, and 72 out of those 125 languages had zero digital data," he says.

To address this linguistic challenge for Google's India market, the company launched Project Vaani in collaboration with the Indian Institute of Science (IISc), an initiative that aims to collect voice samples across hundreds of Indian districts. The first phase captured over 14,000 hours of speech data from 80 districts, representing 59 languages, 15 of which previously had no digital datasets. The second phase expanded coverage to 160 districts, and future phases aim to reach all 773 districts in India.
"There's a lot of work that goes into cleaning up the data, because sometimes the quality is not good," Gupta says, referring to the challenges of transcription and audio consistency.

Google is also developing techniques to integrate these local language capabilities into its large models. Gupta says that learnings from widely spoken languages such as English and Hindi are helping improve performance in lower-resource languages such as Gujarati and Tamil, largely due to the cross-lingual transfer capabilities built into multilingual language models. The company's Gemma LLM incorporates Indian language capabilities derived from this body of work. Gemma ties into the LLM efforts of Indian startups through a combination of technical collaboration, infrastructure guidance and the public release of Google's collected datasets.

According to Gupta, the strategy is driven by both commercial and research imperatives. India is seen as a global testbed for multilingual and low-resource AI development. Supporting local language AI, especially through partnerships with startups such as Sarvam and Soket AI, allows Google to build inclusive tools that can scale beyond India to other linguistically complex regions in Southeast Asia and Africa.

For India's sovereign AI builders, the lack of ready-made, high-quality Indic datasets means that model development and dataset creation must happen in parallel.

For the Global South

India's layered strategy, using open models now while concurrently building sovereign models, also offers a roadmap for other countries navigating similar constraints. It is a blueprint for the Global South, where nations are wrestling with the same dilemma: how to build AI systems that reflect local languages, contexts and values without the luxury of vast compute budgets or mature data ecosystems. For these countries, fine-tuned open models offer a bridge to capability, inclusion and control.

"Full-stack sovereignty in AI is a marathon, not a sprint," Upperwal says. "You don't build a 120 billion model in a vacuum. You get there by deploying fast, learning fast and shifting when ready." Singapore, Vietnam and Thailand are already exploring similar methods, using Gemma to kickstart their local LLM efforts.

By 2026, when India's sovereign LLMs, including EKA, are expected to be production-ready, Upperwal says the dual track will likely converge: bootstrapped models will fade as homegrown systems take their place.

But even as these startups build on open tools such as Meta's Llama or Google's Gemma, which are engineered by global tech giants, the question of dependency continues to loom. Even for open-source models, control over architecture, training techniques and infrastructure support still leans heavily on Big Tech. While Google has open-sourced speech datasets, including Project Vaani, and extended partnerships with IndiaAI Mission startups, the terms of such openness are not always symmetrical. India's sovereign plans, therefore, depend not on shunning open models but on eventually outgrowing them.

"If Google is directed by the US government to close down its weights (model parameters), or increase API (application programming interface) prices or change transparency norms, what would the impact be on Sarvam or Soket?" asks Mohanty, adding that while the current India-US tech partnership is strong, future policies could shift and jeopardize India's digital sovereignty.
In the years ahead, India and other nations in the Global South will face a critical question over whether they can convert this borrowed support into a complete, sovereign AI infrastructure, before the terms of access shift or the window to act closes.


Indian Express
Military Digest: Pakistan Army Chief Asim Munir's thirst for military recognition mirrors that of dictators from the past
Pakistan Army Chief Field Marshal Asim Munir has been in the news recently after being awarded Pakistan's second-highest military gallantry honour, the Hilal-e-Jurat. That puts him in the pantheon of dictators from the past, including Field Marshal Ayub Khan, Field Marshal Idi Amin and Saddam Hussein, who were not shy of awarding themselves top medals and ranks.

While such medals are awarded in the name of a country's President, the citations for them originate from military headquarters. For a gallantry medal to be awarded to a soldier who has been in action against the enemy, the citation must be vetted by the chain of command, which usually begins with the commanding officer.

Several military dictators throughout history have awarded themselves gallantry awards or other high honours, often as a means of self-aggrandisement or to bolster their image as heroic leaders. Prominent among them is Idi Amin, the dictator of Uganda, who declared himself a Field Marshal in 1975 and awarded himself numerous self-created military decorations, including the Victorious Cross (a fabricated award mimicking the British Victoria Cross). He also claimed titles like 'Conqueror of the British Empire' to enhance his image, despite lacking any significant military achievements to justify such honours.

Former Pakistani military dictator Mohammad Ayub Khan, who ruled the country from 1958 to 1969, promoted himself to the rank of Field Marshal in 1959, a year after seizing power through a military coup. Like Asim Munir, Ayub Khan also ensured that he was awarded the Hilal-e-Jurat in the late 1950s, after its institution, for his 'bravery' in 1949 against the Indian Army.

Iraqi dictator Saddam Hussein also frequently awarded himself military honours and titles, including the rank of Field Marshal and the highest award of the Order of the Two Rivers, named after the Tigris and Euphrates, to project an image of a valiant leader. These awards were often tied to his propaganda efforts during conflicts like the Iran-Iraq War, despite his role being more strategic than directly combative.

There are several other examples in history of leaders awarding themselves grand titles and honours, including Joseph Stalin of the USSR, Nicolae Ceaușescu of Romania, Kim Jong-il and his son Kim Jong-un of North Korea, and Francisco Franco of Spain. Most of these dictators used awards to legitimise their rule, enhance their public image or create a cult of personality. Gallantry awards, typically reserved for acts of bravery in combat, were often misused by these leaders to symbolise their leadership or fabricated heroism.

In his seminal work On the Psychology of Military Incompetence, British psychologist Norman Dixon paints a picture that offers an insight into personalities like Asim Munir's. '…the man who reaches a position of high command out of a compulsive thirst for personal advancement will tend to lack that creative talent and flexibility of mind so necessary in modern warfare,' Dixon writes.


Times of India
An I-Day walk through Kolkata's past
When it comes to India's independence movement, Kolkata's big names often steal the limelight. But tucked behind bustling streets and in the shade of old trees lie corners that once crackled with revolutionary energy. Some smelled of coffee and conversation, others hid bombs behind slogans, and a few carried grim echoes of the gallows. This Independence Day, let's wander into the city's lesser-known addresses that helped script history, often in whispers, sometimes in shouts.

Indian Coffee House: Caffeine, conspiracies, and lots of courage
Heritage enthusiast Vibha Mitra says, 'By the late 19th century, this café brewed rebellion: freedom fighters like Netaji Subhas Chandra Bose and Sri Aurobindo used its tables as command centres. Later, icons like Satyajit Ray, Amartya Sen and Sunil Gangopadhyay kept bold ideas alive over endless cups.'

Spots for your I-Day trail
Jugantar headquarters, where Khudiram Bose and Prafulla Chaki planned operations
Raja Subodh Mullick Square (Wellington Square)
Federation Hall, APC Road, or Milan Bhavan
B.B.D. Bagh (Dalhousie Square)
- Manjit Singh Hoonjan of Calcutta Photo Tours

'Led by Barindra Kumar Ghosh, the Muraripukur Bagan Bari was used to hide a bomb workshop behind devotional singing sessions'
- Vibha Mitra, heritage enthusiast

Dalhousie Square: Sukrit Sen, heritage enthusiast and city walks host, says that Phansi Lane, located near Governor's House, was the designated site for public executions during colonial times.

Cabins linked with the INM (Indian National Movement)
Dilkhusha Cabin
Favourite Cabin
Paramount
Swadhin Bharat Hindu Hotel