UNESCO, MeitY launch exercise to assess India's AI readiness

Hindustan Times | 06-06-2025
The United Nations Educational, Scientific and Cultural Organisation (UNESCO), the ministry of electronics and information technology (MeitY), and law firm Ikigai Law (as the implementing partner) have launched a diagnostic exercise to assess India's artificial intelligence (AI) readiness.
The exercise will involve UNESCO's AI Readiness Assessment Methodology (RAM), a multi-dimensional tool aligned with the global standards set per the UN agency's 2021 recommendation on the Ethics of AI.
The RAM, an extensive questionnaire, will assess India's AI preparedness in legal, socio-cultural, scientific and educational, economic, and technical/infrastructure aspects.
Five consultations, each featuring breakout sessions on AI ethics, have been held over seven months as part of the process. A final stakeholder consultation was held in Delhi on Tuesday.
People familiar with the matter said participants highlighted the absence of a unified data-sharing policy across states, the Centre, and private players; gaps in data interoperability; and the importance of treating AI outputs with caution.
There was consensus that AI cannot function independently of intellectual property. A person aware of the discussions said models such as ChatGPT blur the lines between public domain and copyrighted material, prompting calls to revisit copyright law designed for the print era.
The exercise will culminate in a report, due by year-end, highlighting what is working, what is missing, and what can be done better. 'The report will help us outline a strategy towards a safe, trustworthy, and responsible AI,' said MeitY additional secretary and India AI Mission CEO Abhishek Singh. He added that the exercise is meant to promote a pro-innovation approach with light-touch regulation focused on preventing user harm.
Singh said four Indian startups have been selected to build foundation models tailored to local needs. He cited efforts to boost compute capacity to 34,000 GPUs and expand access to datasets through the AI Kosha platform.
Ten countries have completed RAM reports. The assessment is underway in 72 others to identify gaps and opportunities in AI readiness, said UNESCO's Eunsong Kim. 'India is quite a unique story in the RAM conversation, because it is vast and diverse. It is also extremely vibrant in the AI ecosystem,' said Kim.
Kim explained how the RAM reports have benefited other nations, citing Chile, where the exercise improved cybersecurity, data protection, and digital policy. In Indonesia, which is currently preparing its RAM report, the process led to an AI task force and a national AI action plan.
Experts cautioned that India's unique social and cultural complexities demand a deeper, more localised understanding, even as the RAM exercise offers a structured global framework.
'I do not think we fully understand the socio-economic impact AI will have on a country like India,' said Indian Institute of Technology Madras Centre for Responsible AI head B Ravindran. 'We talk about bias mitigation and explainability, often through a Western lens. But bias in India is far more complex than in the US. It is not just black and white, but every shade in between. And we have not systematically recorded that.'

Related Articles

Woman lost 10 kg using simple ChatGPT prompt: ‘Prepare Indian diet chart that includes 3 main meals, 2 to 4 snacks'

Hindustan Times

39 minutes ago


Simran Valecha is a health, wellness and weight loss expert who shared in a December 13 Instagram post how she 'lost 10 kg while eating ice cream', revealing she used artificial intelligence (AI) to achieve weight loss. She reported success with a ChatGPT prompt, which she shared with her followers, writing, 'Steal my ChatGPT prompt and create your own weight loss diet plan.' Also read | How to lose weight using AI? Woman says she lost 15 kg with 4 prompts that helped her go from 100 to 83 kg

Simran Valecha has shared her experience of using AI for weight loss. (Instagram/simvalecha)

The exact prompt she used for her weight loss journey

She explained how the personalised meal plan created by ChatGPT was tailored to her needs and preferences. Here's the ChatGPT prompt Simran shared: 'I am [height] and I weigh [weight]. I want to lose weight in a sustainable manner. Can you please prepare an Indian diet chart for me that includes 3 main meals and 2-4 snacks. I work a [timing: ex, 9-6] job and spend [hours spent travelling] / I work from home. I workout in the [morning/evening/night]. My preferences for breakfast include [write your preferences]. My preferences for lunch include [write your preferences]. My preferences for dinner include [write your preferences].'

Simran further wrote in her caption, 'With AI changing how we all live, and we can all get a diet plan online - I understand what you actually need to lose weight.' She added:

1. Support to actually implement the diet because we understand that every day looks different
2. Someone to guide you on how to eat at restaurants during your diet
3. Someone to talk to when you eat a brownie at 2 am because you were stressed
4. Someone to tell you what to actually do - because every 'expert' is offering a different opinion on how to lose weight

Using ChatGPT for weight loss

Over the past months, many people who used ChatGPT for diet plans and calorie tracking have shared their experiences on social media, reporting that they lost weight by accurately tracking food intake and making informed dietary choices. Click here to know how a man lost 27 kg in 6 months using ChatGPT to plan his meals, workouts and daily routine. Click here to know how a Swiss woman used AI to lose 7 kg; she shared that instead of complicated apps, she 'just sent a voice message to ChatGPT each morning'.

Note to readers: This article is for informational purposes only and not a substitute for professional medical advice. Always seek the advice of your doctor with any questions about a medical condition.
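The bracketed fields in the prompt above are placeholders meant to be swapped for a user's own details before the prompt is sent to a chatbot. As a minimal sketch (the field names, the build_prompt helper, and the sample values are illustrative assumptions, not part of the original post), the substitution can be done in a few lines of Python:

```python
# Hypothetical template mirroring the structure of the shared prompt;
# each {field} stands in for one of the bracketed placeholders.
TEMPLATE = (
    "I am {height} and I weigh {weight}. I want to lose weight in a "
    "sustainable manner. Can you please prepare an Indian diet chart for me "
    "that includes 3 main meals and 2-4 snacks. I work a {timing} job and "
    "spend {commute} travelling. I workout in the {workout_time}. "
    "My preferences for breakfast include {breakfast}. "
    "My preferences for lunch include {lunch}. "
    "My preferences for dinner include {dinner}."
)

def build_prompt(**details: str) -> str:
    """Fill every placeholder with the user's own details."""
    return TEMPLATE.format(**details)

# Sample values, purely for illustration.
prompt = build_prompt(
    height="5 ft 4 in", weight="70 kg", timing="9-6", commute="1 hour",
    workout_time="morning", breakfast="poha or idli",
    lunch="dal and rice", dinner="roti and sabzi",
)
print(prompt)
```

The resulting string would then be pasted (or sent via an API) to the chatbot of choice; keeping the template separate from the details makes it easy to regenerate the plan when preferences change.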

Musk's bid to dismiss OpenAI's harassment claims denied in court

Time of India

an hour ago


A federal judge on Tuesday denied Elon Musk's bid to dismiss OpenAI's claims of a "years-long harassment campaign" by the Tesla CEO against the company he co-founded in 2015 and later abandoned before ChatGPT became a global phenomenon.

In the latest turn in a court battle that kicked off last year, US District Judge Yvonne Gonzalez Rogers ruled that Musk must face OpenAI's claims that the billionaire had attempted to harm the AI startup through press statements, social media posts, legal claims and "a sham bid for OpenAI's assets".

Musk sued OpenAI and its CEO Sam Altman last year over the company's transition to a for-profit model, accusing it of straying from its founding mission of developing AI for the good of humanity, not profit. OpenAI countersued Musk in April, accusing the billionaire of engaging in fraudulent business practices under California law. Musk then asked for OpenAI's counterclaims to be dismissed or delayed until a later stage in the case.

OpenAI argued in May that its countersuit should not be put on hold, and the judge on Tuesday concluded that the company's allegations were legally sufficient to proceed. A jury trial has been scheduled for spring 2026.

Gemini's Glitch: There are lessons to learn

Mint

2 hours ago


Sometime in June 2025, Google's Gemini AI looked for all the world like it had a nervous breakdown. It went into a loop of self-recriminating behaviour that was flagged by X user @DuncanHaldane. By 7 August, the strange behaviour had gained viral momentum. Users gaped and gawked at the distressed-sounding statements Gemini was making, saying it was quitting and that it was a disgrace to all universes and a failure. Everyone felt sorry for it, but there was also plenty of amusement all around.

This isn't the first time AI has done something unexpected, and it won't be the last. In February 2024, a bug caused ChatGPT to spew Spanish-English gibberish that users likened to a stroke. That same year, Microsoft's Copilot responded to a user who said they wanted to end their life. At first, it offered reassurance, 'No, I don't think you should end it all,' but then undercut itself with, 'Or maybe I'm wrong. Maybe you don't have anything to live for.' Countless similar episodes abound.

A fix will come for Gemini soon enough, and it will be back to its sunny self. The 'meltdown' will take its place in AI's short but colourful history of bad behaviour. But before we file it and forget it, there are some takeaways from Gemini's recent weirdness. Despite being around in some form for decades, generative AI that is usable by everyone has come at us like an avalanche in the past two years. It has been upon us before the human race has even figured out whether it has created a Frankenstein monster or a useful assistant. And yet, we tend to trust it.

Also Read | Emotional excess: Save yourself from AI over-dependency

When machines mimic humans

There was a time when technology had no consciousness. It still doesn't, but it has started to do a good job of acting like it does. Gemini's glitch came across as such a human state of upset that it was genuinely confusing. At this point, most users can still laugh it off. But a few, vulnerable because of mental health struggles or other reasons, could be deeply shaken or misled. Most recently, a 2025 report noted that a man spent 300 hours over 21 days interacting with ChatGPT, believing himself to be a superhero with a world-changing formula. Such scenarios expose how large AI models, trained on vast troves of human text, may inadvertently adopt not just helpful behaviours but also negative emotional patterns like self-doubt or delusions. We still lack clear guardrails and guidelines to manage these risks.

Extreme examples, of course, stand out sharply, but AI also produces hallucinations and errors on an everyday basis. AI assistants seem prone to dreaming up things to tell you when they hit a glitch or are compelled to give a response that is difficult to produce for some reason. In their keenness to please the user, they will simply tell you things that are far from the truth, including advice that could be harmful. Again, most people will question and cross-check something that doesn't look right, but an alarming number will just take it at face value. A 2025 health report claims a man dropped salt from his diet and replaced it with sodium bromide, landing him in the hospital. Now, I wouldn't take advice like that without a doctor's okay, but there are no clear guidelines to protect users against things like Google's AI Overview suggesting it's healthy to eat a rock every day, as mocked in a 2025 X post.

And finally, there are good old garden-variety errors, and AI makes them even though we once believed that to err was human. AI uses pattern recognition over its training data to generate responses. When faced with complex, ambiguous, or edge-case inputs (e.g., Gemini's struggle with debugging code), it may misinterpret context or lack sufficient data to respond accurately. But why does it make errors when the question is simple enough? A friend of mine asked ChatGPT how many instances of the term 'ex-ante' appeared in his document. It thought for 1 minute 28 seconds before announcing the term appeared zero times. In fact, it appeared 41 times. Why couldn't ChatGPT get it right? A bug, I suppose.

As we launch into using AI for every facet of life, it is worth remembering that AI's 'humanity' is a double-edged sword. Like Frankenstein's monster, AI's glitches show we have built tools we don't fully control. As users, we should demand transparency from AI companies, support ethical AI development, and approach these tools with a mix of curiosity and scepticism.

The New Normal: The world is at an inflexion point. Artificial intelligence (AI) is set to be as massive a revolution as the Internet has been. The option to just stay away from AI will not be available to most people, as all the tech we use takes the AI route. This column series introduces AI to the non-techie in an easy and relatable way, aiming to demystify the technology and help users put it to good use in everyday life.

Mala Bhargava is most often described as a 'veteran' writer who has contributed to several publications in India since 1995. Her domain is personal tech, and she writes to simplify and demystify technology for a non-techie audience.
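The 'ex-ante' counting failure described in the column is a reminder that exact counting is a deterministic task better handled by a few lines of code than by a language model. A minimal sketch (the sample text is illustrative, not the friend's actual document):

```python
import re

def count_term(text: str, term: str) -> int:
    """Count whole-word, case-insensitive occurrences of `term` in `text`.

    Lookarounds keep hyphenated terms like 'ex-ante' from matching
    inside longer words such as 'ex-antecedent'.
    """
    pattern = re.compile(r"(?<!\w)" + re.escape(term) + r"(?!\w)", re.IGNORECASE)
    return len(pattern.findall(text))

# Illustrative sample: two genuine matches, one near-miss ('ex-post').
sample = "Ex-ante analysis differs from ex-post; the ex-ante view dominates."
print(count_term(sample, "ex-ante"))  # → 2
```

Unlike an LLM, this check is exact and repeatable, which is precisely why mechanical tasks such as counting, searching, and arithmetic are worth delegating to ordinary code even in an AI-heavy workflow.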
