India AI: As DeepSeek and ChatGPT surge, is Delhi falling behind?

BBC News · 18-02-2025

Two years after ChatGPT took the world by storm, China's DeepSeek has sent ripples through the tech industry by collapsing the cost of developing generative artificial intelligence applications.

But as the global race for AI supremacy heats up, India appears to have fallen behind, especially in creating its own foundational language model, the kind used to power things like chatbots.

The government claims a homegrown equivalent to DeepSeek isn't far away. It is supplying startups, universities and researchers with the thousands of high-end chips needed to develop one in under 10 months.

A flurry of global AI leaders have also been talking up India's capabilities recently.
After being initially dismissive, OpenAI CEO Sam Altman this month said India should be playing a leading role in the AI revolution. The country is now OpenAI's second largest market by users. Others like Microsoft have put serious money on the table – committing $3bn (£2.4bn) for cloud and AI infrastructure. Nvidia's Jensen Huang also spoke of India's "unmatched" technical talent as a key to unlocking its future potential.

With 200 startups working on generative AI, there's enough entrepreneurial activity under way too. But despite having key ingredients for success in place, India risks lagging behind without basic structural fixes to education, research and state policy, experts say.

China and the US already have a "four to five year head-start", having invested heavily in research and academia and developed AI for military applications, law enforcement and now large language models, technology analyst Prasanto Roy told the BBC.

Though in the top five globally on Stanford's AI Vibrancy Index – which ranks countries on metrics such as patents, funding, policy and research – India is still far behind the two superpowers in many key areas.

China and the US were granted 60% and 20% of the world's total AI patents between 2010 and 2022 respectively. India got less than half a percent. India's AI startups also received a fraction of the private investment that US and Chinese companies got in 2023.

India's state-funded AI mission, meanwhile, is worth a trifling $1bn compared with the staggering $500bn the US has earmarked for Stargate - a plan to build massive AI infrastructure in the US - or China's reported $137bn initiative to become an AI hub by 2030.
While DeepSeek's success has demonstrated that AI models can be built on older, less expensive chips - something India can take solace from - the lack of "patient", or long-term, capital from either industry or government is a major problem, says Jaspreet Bindra, founder of a consultancy that builds AI literacy in organisations.

"Despite what has been heard about DeepSeek developing a model with $5.6m, there was much more capital behind it."

The lack of high-quality India-specific datasets required for training AI models in regional languages such as Hindi, Marathi or Tamil is another problem, especially given India's language diversity.

But for all its issues, India punches far above its weight on talent – with 15% of the world's AI workers coming from the country. The issue though, as Stanford's AI talent migration research shows, is that more and more of them are choosing to leave.

This is partly because "foundational AI innovations typically come from deep R&D in universities and corporate research labs", Mr Bindra says. And India lacks a supporting research environment, with few deep-tech breakthroughs emerging from its academic and corporate sectors.

The enormous success of India's payments revolution was due to strong government-industry-academia collaboration - a similar model, he says, needs to be replicated for the AI push. The Unified Payments Interface (UPI), a digital payment system developed by a government organisation, has revolutionised digital payments in India, allowing millions to transact at the click of a button or by scanning a QR code.
Bengaluru's $200bn outsourcing industry, home to millions of coders, should ideally have been at the forefront of India's AI ambitions. But the IT companies have never really shifted their focus from cheap service-based work to developing foundational consumer AI technologies.

"It's a huge gap which they left to the startups to fill," says Mr Roy.

He's unsure though whether startups and government missions can do this heavy lifting quickly enough, adding that the 10-month timeline set by the minister was a knee-jerk reaction to DeepSeek's sudden emergence.

"I don't think India will be able to produce anything like DeepSeek at least for the next few years," he adds. It is a view many others share.

India can, however, continue to build and tweak applications on top of existing open-source platforms like DeepSeek "to leapfrog our own AI progress", Bhavish Aggarwal, founder of Krutrim, one of India's earliest AI startups, recently wrote on X.

In the longer run though, developing a foundational model will be critical to having strategic autonomy in the sector and reducing import dependencies and the threat of sanctions, say experts.

India will also need to increase its computational power, or hardware infrastructure, to run such models, which means manufacturing semiconductors - something that has not taken off yet.

Much of this will need to fall into place before the gap with the US and China is narrowed meaningfully.


Related Articles

British AI start-up embroiled in sex assault claims after going bust

Telegraph · an hour ago

The acrimonious collapse of one of Britain's most feted AI start-ups came after it faced allegations that a senior executive sexually assaulted a former member of staff, The Telegraph can reveal.

The alleged assault took place at a hotel in India in 2022 and was reported to the police at the time. A board member alerted management to the claims, which were also the subject of a public petition to remove the executive.

The Telegraph is aware of the identities of the alleged perpetrator and victim but cannot name them for legal reasons. The alleged assault was said to have taken place when the executive was on holiday and not on company business. The individual has since left the company.

However, The Telegraph understands the incident remains the subject of ongoing legal proceedings in India against the police for alleged failure to investigate the complaint properly.

A police filing reported in Indian media at the time, which did not name Builder AI, alleged the victim was forced to take a pill at 3am after meeting the executive in a hotel bar, and was later seriously assaulted in a hotel room. The alleged victim told her sister and friends, who urged her to take the incident to the police. It is understood the executive denied any wrongdoing.

Builder AI commissioned an internal investigation into the matter, led by the company's general counsel. Sachin Dev Duggal, Builder AI's ousted chief and founder, is said to have recused himself from the investigation and received the final report. There is no suggestion of wrongdoing by Mr Duggal.

Despite the investigation, Manpreet Ratia, Builder AI's new chief executive, this weekend insisted the full board and wider executive team at Builder AI had not been informed of the matter.

Builder AI said: 'The matter was first raised with the company by an individual board member, acting in their personal capacity.

'Based on internal emails reviewed to date, it appears that Sachin Duggal recused himself from involvement in the matter.

'The company has been informed that a report was submitted by the general counsel to Sachin Duggal and the individual board member, concluding that no further action was warranted. This report was not shared with the broader board or the executive leadership.

'For avoidance of doubt, the full board of directors and the executive team were not made aware of this matter at the time it arose.'

The statement added the wider board was first made aware of the claims after being contacted by The Telegraph, and that the investigation had not been overseen or directed by the board.

The sexual assault claims at Builder AI, which have not previously been reported, come days after the business filed for bankruptcy in the US amid reports that inflated sales precipitated its dramatic collapse.

Builder AI's lenders pulled support from the business earlier in May after forecast sales failed to materialise and came in far below expectations. Predicted sales of $220m (£163m) came in at closer to $50m.

The company has been served with subpoenas by US prosecutors in New York, who have asked for information on its financial reporting and accounting, The Telegraph understands.

Mr Duggal, Builder AI's founder and 'chief wizard', launched the business in 2016 with the aim of making building an app as 'easy as ordering a pizza'. The company offered what it called 'human-assisted AI', using a chatbot called Natasha and human contractors to quickly and cheaply build apps.
The company raised $250m from investors, including Microsoft and venture capital investors Jungle Ventures and Insight Partners, who bought into Mr Duggal's vision.

However, Mr Duggal was ousted in February after the board found that the company's revenues for 2024 were far lower than had been forecast. The events ultimately led lenders to pull $40m from the company's accounts in May, according to a letter to investors, forcing the UK-headquartered start-up to file for bankruptcy in the US.

Last week, the Financial Times reported Builder AI was suspected by former employees to have engaged in multiple methods to boost revenues, including alleged circular transactions and deals with resellers that took years to pay up. A source familiar with the company's finances, however, denied this, insisted its revenues were always reported properly and transparently, and said that business with partners was legitimate.

Builder AI collapsed with liabilities of up to $100m, bankruptcy filings show. According to the note to investors, the company owed more than $88m to cloud providers such as Amazon.

The start-up had been widely hailed as a UK AI champion. Mr Duggal was named an EY World Entrepreneur of the Year in 2024. Its failure, and the sexual assault allegations, threaten to cast a shadow over Britain's AI ambitions.

Rotten Apple: are we finally watching the death of the iPhone?

Evening Standard · 4 hours ago

While panic ripples through Apple, Sir Jony Ive — inventor of its flagship products from the iMac to the iPod, iPhone and Apple Watch — has emerged elsewhere with an alternative answer. Ive left Apple in 2019 to start his own design firm, LoveFrom, with the help of Laurene Powell Jobs, widow of Steve Jobs, who was an early investor. On May 21 came the announcement that OpenAI, the developer of ChatGPT, had acquired Ive's AI design start-up, io — in which Powell Jobs has also invested — in a deal worth $6.4 billion. News then broke that Ive and Sam Altman, the CEO of OpenAI, are working together to develop a new AI device, called a 'companion'.

Justice will come under threat from AI's 'hallucinations'

The National · 5 hours ago

Did you know that large language models like ChatGPT are in the habit of embedding random but superficially plausible falsehoods into the answers they generate? These are your hallucinations.

Facts are made up. Counterfeit sources are invented. Real people are conflated with one another. Real-world sources are garbled. Quotations are falsified and attributed to authors who either don't exist, or didn't express any of the sentiments attributed to them. And troublingly, none of these errors are likely to be obvious to people relying on the pseudo-information produced, because it all looks so plausible and machine-generated.

We aren't helped in this by uncritical representations of AI as the sovereign remedy to all ills – from YouTube advertisers hawking easy solutions to struggling workers and firms, to governments trying to position themselves as modern and technologically nimble.

Back in January, Keir Starmer announced that 'artificial intelligence will deliver a decade of national renewal', promising a plan that would 'mainline AI into the veins of this enterprising nation'. An interesting choice of metaphor, you might think, for a government which generally takes a dim view of the intravenous consumption of stupefying substances.

Describing these failures as 'hallucinations' is not uncontested. Some folk think the language of hallucinations is too anthropomorphic, attributing features of human cognition and human consciousness to a predictive language process which we all need reminding doesn't actually reason or feel. The problem here isn't seeing fairies at the bottom of the garden, but, faced with an unknown answer, making up facts to fill the void. One of the definitions of these system failures I like best is 'a tendency to invent facts in moments of uncertainty'.

This is why some argue 'bullshitting' much better captures what generative AI is actually doing. A liar knowingly tells you something that isn't true. A bullshitter, by contrast, preserves the illusion of themselves as a knowing and wise person by peddling whatever factoids they feel they need to get them through a potentially awkward encounter – reckless or indifferent to whether or not what they've said is true.

Generative AI is a bullshitter. The knowledge it generates is meretricious. When using it, the mantra should not be 'trust but verify' – but 'mistrust and verify'. And given this healthy mistrust and the time-consuming need for verification, you might wonder how much of a time-saver this unreliable chatbot can really be.

Higher education is still reeling from the impact. Up and down the country this month, lecturers have been grading papers, working their way through exam scripts and sitting in assessment boards, tracking our students' many achievements, but also contending with the impact of this wave of bullshit, as lazy, lost or desperate students decide to resort to generative AI to try to stumble through their assessments.

If you think the function of education is achieving extrinsic goals – getting the essay submitted, securing a grade, winning the degree – then I guess AI-assisted progress to that end won't strike you as problematic.

One of the profound pleasures of work in higher education is watching the evolution of your students. When many 18-year-olds arrive in law school for the first time, they almost always take a while to find their feet. The standards are different. The grading curve is sharper. We unaccountably teach young people almost nothing about law in Scottish schools, and new students' first encounter with the reality of legal reading, legal argument and legal sources often causes a bit of a shock to the system. But over four years, the development you see is often remarkable, with final-year students producing work which they could never have imagined was in them just a few teaching terms earlier.

And that, for me, is the fundamental point. The work is in the students. Yes, it requires a critical synthesis with the world, engagement with other people's ideas, a breadth of reading and references – but strong students pull the project out of their own guts.

They can look at the final text and think, with significant and well-earned satisfaction – I made that. Now I know I'm capable of digesting a debate, marshalling an argument, presenting a mess of facts in a coherent and well-structured way – by myself, for myself. Education has changed me. It has allowed me to do things I couldn't imagine doing before.

Folk turning in AI-generated dissertations or essays, undetected, can only enjoy the satisfactions of time saved and getting away with it, and the anxious future of knowing that, given the opportunity to honestly test themselves and show what they had in them, they decided instead to cheat.

At university, being rumbled for reliance on AI normally results in a zero mark and a resit assessment, but the real-world impacts of these hallucinations are now accumulating in ways that should focus the mind, particularly in the legal sector.

In London last week, the Court of Appeal handed down a stinging contempt of court judgment involving two cases of lawyers rumbled after citing bogus case law in separate court actions. The lawyers in question join hundreds of others from jurisdictions across the world who've found their professional reputations shredded after being caught by the court relying on hallucinated legal sources.

We aren't talking about nickel-and-dime litigation here, either. One of the two cases was an £89 million damages claim against the Qatar National Bank. The court found that the claimants cited 45 cases, 18 of which turned out to be invented, while quotations which had been relied on in their briefs were also phoney. The second case involved a very junior barrister who presented a judicial review petition relying on a series of legal authorities which had the misfortune not to exist.

As Dame Victoria Sharp points out, there are 'serious implications for the administration of justice and public confidence in the justice system if artificial intelligence is misused' in this way, precisely because of its ability to produce 'apparently coherent and plausible responses' which prove 'entirely incorrect', make 'confident assertions that are simply untrue', 'cite sources that do not exist' and 'purport to quote passages from a genuine source that do not appear in that source'.

The Court of Appeal concluded that 'freely available generative artificial intelligence tools, trained on a large language model such as ChatGPT, are not capable of conducting reliable legal research'. I agree.
For legal professionals to be presenting cases in this way is indefensible, with serious implications for the integrity of professional standards, for courts relying on the legal argument put before them, and for clients who suffer the consequences of their case being presented using duff statements of the law or duff sources.

I worry too about the potentially bigger impact these hallucinations will have on people forced to represent themselves in legal actions. Legal aid remains in crisis in this country. Many people who want the benefit of legal advice and representation find they cannot access it, particularly in civil matters.

The saying goes that 'a man who represents himself in court has a fool for a client'. In modern Britain, a person who represents themselves in court normally has the only lawyer they can afford, as foolish and unfair as this might be.

Acting as a party litigant is no easy task. Legal procedures are often arcane and unfamiliar. Legal institutions can be intimidating. If the other side has the benefit of a solicitor or advocate, there's a real inequality of arms. But even before you step near a Sheriff Court, you need to have some understanding of the legal principles applying to your case to state it clearly. Misunderstand and misrepresent the law, and you can easily lose a winnable case.

In Scotland, in particular, significant parts of our law aren't publicly accessible or codified. This means ordinary people often can't find reliable and accessible online sources on what the law is – but it also means that LLMs like ChatGPT haven't been able to crawl over these sources to inform the automated answers they spit out.

This means that these large language models are much more likely to give questioning Scots answers based on English or sometimes even American law than the actual rules and principles a litigant in person needs to know to persuade the Sheriff that they have a good case. Hallucination rates are high. Justice will suffer.
