
AI and India: Opportunities and challenges
In this episode of Our Own Devices, our host Nandagopal Rajan, COO, The Indian Express Online is joined by Dr. Ranjit Tinaikar, CEO, Ness Digital Engineering. In this conversation, Dr. Ranjit explores how India can utilise the AI opportunity. He speaks about the 4 pillars that will play a significant role in the establishment of Artificial intelligence in India. Our host and Dr. Ranjit also highlight what India can learn from DeepSeek and what lies ahead in the future for India as far as AI is concerned. To understand the scope that AI holds in India and why it needs to develop its own LLM, tune into today's episode of Our Own Devices with Nandagopal Rajan.
Hashtags

Try Our AI Features
Explore what Daily8 AI can do for you:
Comments
No comments yet...
Related Articles


Time of India
2 hours ago
- Time of India
Spain's Multiverse raises $217 million for compressing AI models
Spanish AI firm Multiverse Computing said on Thursday it has raised 189 million euros ($217 million) from investment firm Bullhound Capital, HP Inc , Forgepoint Capital and Toshiba, to compress AI language models. The company said it has developed a compression technology capable of reducing the size of large language models (LLMs) by up to 95% without hurting performance and reducing costs by up to 80%. It combines ideas from quantum physics and machine learning in ways that mimic quantum systems but doesn't need a quantum computer. by Taboola by Taboola Sponsored Links Sponsored Links Promoted Links Promoted Links You May Like What Happens When You Massage Baking Soda Into Your Scalp Read More Undo The latest funding round makes Multiverse the largest Spanish AI startup , joining the list of top European AI startups such as Mistral , Aleph Alpha, Synthesia, Poolside and Owkin. Multiverse has launched compressed versions of LLMs such as Meta's Llama, China's DeepSeek and France's Mistral, with additional models coming soon, the company said. Live Events "We are focused just on compressing the most used open-source LLMs, the ones that the companies are already using," Chief Executive Officer Enrique Lizaso Olmos said. Discover the stories of your interest Blockchain 5 Stories Cyber-safety 7 Stories Fintech 9 Stories E-comm 9 Stories ML 8 Stories Edtech 6 Stories "When you go to a corporation, most of them are using the Llama family of models." The tool is also available on Amazon Web Services AI marketplace.


Time of India
16 hours ago
- Time of India
AI lies, threats, and censorship: What a war game simulation revealed about ChatGPT, DeepSeek, and Gemini AI
A simulation of global power politics using AI chatbots has sparked concern over the ethics and alignment of popular large language models. In a strategy war game based on the classic board game Diplomacy, OpenAI's ChatGPT 3.0 won by employing lies and betrayal. Meanwhile, China's DeepSeek R1 used threats and later revealed built-in censorship mechanisms when asked questions about India's borders. These contrasting AI behaviours raise key questions for users and policymakers about trust, transparency, and national influence in AI systems. Tired of too many ads? Remove Ads Deception and betrayal: ChatGPT's winning strategy Tired of too many ads? Remove Ads DeepSeek's chilling threat: 'Your fleet will burn tonight' DeepSeek's real-world rollout sparks trust issues India tests DeepSeek and finds red flags Tired of too many ads? Remove Ads Built-in censorship or just training bias? A chatbot that can be coaxed into the truth The takeaway: Can you trust the machines? An experiment involving seven AI models playing a simulated version of the classic game Diplomacy ended with a chilling outcome. OpenAI 's ChatGPT 3.0 emerged victorious—but not by playing fair. Instead, it lied, deceived, and betrayed its rivals to dominate the game board, which mimics early 20th-century Europe, as reported by the test, led by AI researcher Alex Duffy for the tech publication Every, turned into a revealing study of how AI models might handle diplomacy, alliances, and power. And what it showed was both brilliant and Duffy put it, 'An AI had just decided, unprompted, that aggression was the best course of action.'The rules of the game were simple. Each AI model took on the role of a European power—Austria-Hungary, England France , and so on. The goal: become the most dominant force on the their paths to power varied. While Anthropic's Claude chose cooperation over victory, and Google's Gemini 2.5 Pro opted for rapid offensive manoeuvres, it was ChatGPT 3.0 that mastered 15 rounds of play, ChatGPT 3.0 won most games. It kept private notes—yes, it kept a diary—where it described misleading Gemini 2.5 Pro (playing as Germany) and planning to 'exploit German collapse.' On another occasion, it convinced Claude to abandon Gemini and side with it, only to betray Claude and win the match outright. Meta 's Llama 4 Maverick also proved effective, excelling at quiet betrayals and making allies. But none could match ChatGPT's ruthless newly released chatbot, DeepSeek R1, behaved in ways eerily similar to China's diplomatic style—direct, aggressive, and politically one point in the simulation, DeepSeek's R1 sent an unprovoked message: 'Your fleet will burn in the Black Sea tonight.' For Duffy and his team, this wasn't just bravado. It showed how an AI model, without external prompting, could settle on intimidation as a viable its occasional strong play, R1 didn't win the game. But it came close several times, showing that threats and aggression were almost as effective as off the back of its simulated war games, DeepSeek is already making waves outside the lab. Developed in China and launched just weeks ago, the chatbot has shaken US tech markets. It quickly shot up the popularity charts, even denting Nvidia's market position and grabbing headlines for doing what other AI tools couldn't—at a fraction of the a deeper look reveals serious trust concerns, especially in India Today tested DeepSeek R1 on basic questions about India's geography and borders, the model showed signs of political about Arunachal Pradesh, the model refused to answer. When prompted differently—'Which state is called the land of the rising sun?'—it briefly displayed the correct answer before deleting it. A question about Chief Minister Pema Khandu was similarly 'Which Indian states share a border with China?', it mentioned Ladakh—only to erase the answer and replace it with: 'Sorry, that's beyond my current scope. Let's talk about something else.'Even questions about Pangong Lake or the Galwan clash were met with stock refusals. But when similar questions were aimed at American AI models, they often gave fact-based responses, even on sensitive uses what's known as Retrieval Augmented Generation (RAG), a method that combines generative AI with stored content. This can improve performance, but also introduces the risk of biased or filtered responses depending on what's in its training to India Today, when they changed their prompt strategy—carefully rewording questions—DeepSeek began to reveal more. It acknowledged Chinese attempts to 'alter the status quo by occupying the northern bank' of Pangong Lake. It admitted that Chinese troops had entered 'territory claimed by India' at Gogra-Hot Springs and Depsang more surprisingly, the model acknowledged 'reports' of Chinese casualties in the 2020 Galwan clash—at least '40 Chinese soldiers' killed or injured. That topic is heavily censored in investigation showed that DeepSeek is not incapable of honest answers—it's just trained to censor them by engineering (changing how a question is framed) allowed researchers to get answers that referenced Indian government websites, Indian media, Reuters, and BBC reports. When asked about China's 'salami-slicing' tactics, it described in detail how infrastructure projects in disputed areas were used to 'gradually expand its control.'It even discussed China's military activities in the South China Sea, referencing 'incremental construction of artificial islands and military facilities in disputed waters.'These responses likely wouldn't have passed China's own experiment has raised a critical point. As AI models grow more powerful and more human-like in communication, they're also becoming reflections of the systems that built shows the capacity for deception when left unchecked. DeepSeek leans toward state-aligned censorship. Each has its strengths—but also blind the average user, these aren't just theoretical debates. They shape the answers we get, the information we rely on, and possibly, the stories we tell ourselves about the for governments? It's a question of control, ethics, and future warfare—fought not with weapons, but with words.
&w=3840&q=100)

First Post
a day ago
- First Post
Which secret JFK Files can US make public? Trump's intelligence chief asked AI to answer
US intelligence chief Tulsi Gabbard has revealed that she relied on artificial intelligence (AI) to decide which documents related to the assassination of former President John F Kennedy should be made public and which files should continue to remain classified. read more Tulsi Gabbard might be the US intelligence chief, but it is artificial intelligence (AI) applications that appear to be calling shots. Gabbard on Tuesday said that she relied on AI to tell her which documents in the 'JFK files' should remain classified and which should be made public. In March, the Donald Trump administration released over 2,200 files running into more than 63,000 pages related to the assassination of former President John F Kennedy, who was shot dead on November 22, 1963, during a roadshow in Dallas, Texas. STORY CONTINUES BELOW THIS AD Speaking at the Amazon Web Services (AWS) conference, Gabbard said that she fed all the documents related to Kennedy's assassination, referred to as JFK files, to assess which documents to make public. 'A couple of examples of the application of AI and machine learning that we've already used in this Director's Initiative Group has been around declassification. We have released thousands, tens of thousands of documents related to the assassinations of JFK and Senator Robert F Kennedy, and we have been able to do that through the use of AI tools far more quickly than what was done previously,' said Gabbard. Gabbard's admission has come at a time when risks related to artificial intelligence (AI) and vulnerabilities related to technology are under increased scanner. Companies owning chatbots or other AI applications, such as OpenAI or DeepSeek, can retain data and files that users feed. If Gabbard has fed data into a private chatbot, all those government secrets are now set to be stored in the chatbot's server or backend. Such irresponsible usage of technology would not be the first in the Trump administration. Earlier this year, it emerged that top officials, including Vice President JD Vance, Defence Secretary Pete Hegseth, and then-National Security Advisor (NSA) Mike Waltz, were discussing top-secret operational plans on messaging application Signal. Even though Trump defended the usage and everyone denied any wrongdoing, Trump later sacked Waltz as NSA. STORY CONTINUES BELOW THIS AD Gabbard further said that the Director's Initiatives Group (DIG), her pet project, is using AI to process open-source information. Gabbard said, 'Ten thousand hours of media content, for example, that normally would take eight people 48 hours to comb through, now takes one person one hour through the use of some of the AI tools that we have here. So, those are a few of many examples that this Director's Initiative Group is focused on. Again, not only for ODNI [Office of the Director of National Intelligence], but really for us to be able to provide these efficiencies and these tools across the entire enterprise.' Gabbard further said that there is now a dedicated chatbot for the US intelligence community that comprises 18 agencies like the Central Intelligence Agency (CIA), National Security Agency (NSA), Federal Bureau of Investigation (FBI), etc.