
AI needs to be open and inclusive like India Stack
Back in October 2024, I wrote on these pages about a group of 12-year-olds who had figured out an ingenious shortcut to finish their homework: use 40% ChatGPT, 40% Google, and 20% of their own words. At first, it looked like cheating. But with the perspective of distance, I think it was something else entirely.
These children had understood a basic truth: in today's world, what matters most is the result. Not the process. Not the effort. Ask the right question and let the machine find the answer. Don't worry too much about what happens in between. This way of thinking isn't limited to schoolwork anymore; it's showing up in the way digital systems are being built the world over, India included.
Over the last few years, India has quietly built one of the most impressive pieces of public digital infrastructure anywhere. Aadhaar, UPI, DigiLocker, CoWIN, Bhashini, and ONDC, collectively called India Stack, are now used by millions of people to prove their identity, send money, download documents, get vaccinated, translate languages, and access other public services.
But here's what makes India's system different from those in most other countries: it doesn't keep your data.
In countries like the United States or across Europe, tech companies track what people do online. Every search, every click, every purchase is saved and studied. That information is then used to target ads, recommend content, and train artificial intelligence systems. That is why The New York Times is suing OpenAI, the maker of ChatGPT, alleging that its news articles were used to train the system without permission.
This is why regulators in Europe are telling Meta (which owns Facebook, Instagram and WhatsApp) not to use user data to train AI unless people clearly agree to it.
In India, both the rules and the values are different. The digital systems here were built with public money and designed to serve everyone. They were not designed to spy on people; they were created to work quickly, fairly, and without remembering too much.
Take Aadhaar. All it is built to do is confirm that a person is who they claim to be; the authentication service answers yes or no, nothing more. It cannot track where you go.
Or DigiLocker. It doesn't keep copies of your CBSE marksheets, PAN cards, or insurance papers. It simply fetches these documents from the source when you ask. That's all. It's a messenger, not a filing cabinet.
UPI moves money between people. But it doesn't remember what you spent it on.
Long story short, these systems were built to function like light switches: they work when needed and switch off when the job is done. The builders insisted that they not hold on to personal information for longer than necessary.
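To make the light-switch idea concrete, here is a minimal sketch, in Python, of what stateless, answer-and-forget services look like in principle. Every name and field below is hypothetical; this is not the actual Aadhaar or DigiLocker API, only an illustration of the design.

```python
# A toy illustration of "light switch" design: answer the request, keep nothing.
# All names and fields are hypothetical, not the real Aadhaar/DigiLocker APIs.

def verify_identity(claimed_id: str, credential_hash: str, registry: dict) -> bool:
    """Answer only yes or no: does the claim match the registry?
    No record is kept of who asked, from where, or why."""
    return registry.get(claimed_id) == credential_hash

def fetch_document(doc_type: str, holder_id: str, issuer_fetch) -> bytes:
    """Relay a document from its issuer on demand.
    A messenger, not a filing cabinet: nothing is cached after the response."""
    return issuer_fetch(doc_type, holder_id)

# The service holds no state between calls.
registry = {"resident-123": "a1b2c3"}
print(verify_identity("resident-123", "a1b2c3", registry))  # True
```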
That's why India's digital model is being noticed around the world. It's open, fair, and inclusive. But now, with the rise of artificial intelligence, a new kind of problem is emerging.
AI tools need a lot of data to learn how to speak, listen, write, or make decisions. In India, companies are beginning to train AI using these public systems. Language tools learn from Bhashini. Health startups are using patterns from CoWIN to build diagnostic tools. Fintech firms are using transaction frameworks to refine how they lend. None of this is illegal; it was even expected, because these public systems were built to encourage innovation.
But here's the problem: the public helped create these systems, and now private companies are using them to build powerful new tools—and may be making money from them. Yet the public might not see any of that value coming back.
This is where the story of the 12-year-olds we started with becomes relevant again.
Just like those students who used machines to do most of the work, there's now a larger system that is also skipping the middle part. People provide the inputs—documents, payments, identities. The machines learn from them. And then private players build services or products that benefit from all that learning. The people who made it possible? They are left out of the conversation.
In other countries, the debate is about privacy. In India, the debate must now shift to fairness.
It's not about stopping AI. It's not about banning companies from using public tools. It's about asking for transparency and accountability.
If a company is using data or tools from public systems to train its AI, it should say so clearly. If it benefits from public data, it should give something back—like sharing improved datasets, or allowing its models to be audited. If it's building a commercial product on public infrastructure, it should explain how it used that infrastructure in the first place.
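What could that look like in practice? One possibility, sketched below purely as an illustration, is a machine-readable disclosure published alongside the product. The field names are invented for this example; no such standard exists yet.

```python
import json

# A hypothetical disclosure manifest; field names are invented for illustration.
disclosure = {
    "product": "ExampleLendingModel",  # hypothetical commercial product
    "public_systems_used": ["UPI transaction frameworks", "Bhashini language data"],
    "purpose": "credit-risk scoring",
    "gives_back": {
        "improved_datasets_shared": True,
        "independent_audits_allowed": True,
    },
}

print(json.dumps(disclosure, indent=2))  # publishable alongside the product
```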
This is not regulation for the sake of it. It's basic respect for the public that made the system possible in the first place.
India's digital platforms were built to serve everyone. They were not designed to store people's information, and that's what makes them special. But that openness can be misused if those who build on top of it forget where their foundations came from.
It's easy to be dazzled by AI. But intelligence—human or machine—shouldn't come without responsibility.
So here's the question worth asking: If the public built the digital tools, used them, trusted them, and helped them grow—why aren't they part of the rewards that artificial intelligence is now creating?
