Over 86,000 AI Patents filed during 2010-2025, accounting for over 25% of all tech patents filed in India: Nasscom

The Hindu | 25-04-2025
Some 86,000 AI patents were filed in India during 2010-2025, accounting for over 25% of all tech patents filed in the country during this period, Nasscom said on Friday.
The country's pace of innovation has accelerated dramatically: seven times as many AI patents were filed between 2021 and 2025 as between 2010 and 2015. Notably, 63% of these AI patents originated in India, while 17% were first filed in the United States.
According to Nasscom data, Machine Learning (ML) remained the most widely used technique within AI patents, comprising over 55% of the total AI filings. Within this, Generative AI (GenAI) is emerging as a major driver of innovation, accounting for 50% of all ML-related patents.
India's heightened focus on GenAI is particularly notable. While GenAI accounts for just 6% of total AI patents globally, it represents 28% of India's AI patent filings, positioning the country among the top five globally in this domain. Functional applications such as Computer Vision and Natural Language Processing dominate, contributing over 90% of India's AI patent portfolio. Sectorally, transportation leads in AI innovation, accounting for more than 70% of all AI-related filings.
Despite the impressive volume of innovation, India's AI patent grant ratio stands at just 0.37%, significantly lower than those of China and the US. While educational institutions are increasingly active in AI patenting, their filing-to-grant ratio remains low at just 1%, in stark contrast to the 40% grant ratio observed for enterprises. This disparity in patent grants underscores the urgent need to enhance R&D quality and institutional support, and to focus on building robust, high-quality IP.
Improving India's patent grant ratio would require a stronger focus on patent quality, streamlined IP processes, robust R&D, and a supportive policy framework that enhances protection and enforcement, Nasscom said in its recent Patenting Trends in India report.
Rajesh Nambiar, President, Nasscom, said: 'While India has made steady progress in strengthening its intellectual property regime, with increased filings and a more responsive Indian Patent Office, long timelines for patent approvals and quality patents remain key concerns when compared to advanced economies.'
Urgent steps were needed to expand this capacity to sustain and accelerate the improvements seen so far, he added.
The apex body further said India continued to maintain its fifth position in global patent filings, with the nation's patent-to-GDP ratio increasing 2.6 times, from 144 in 2013 to 381 in 2023, signaling the growing importance of an innovation-led economy. Growing at an annual rate of 149.4%, the country's share of total global patents granted increased over 2X, reaching 3.8% in 2023 from 1.7% the previous year, it said.
Strengthening its position as a global innovation hub, India saw over 90,000 patents filed in FY24, marking its seventh consecutive year of growth, Nasscom noted.
This rise, led primarily by resident filers, highlights the country's expanding domestic innovation capabilities and the growing support from its innovation ecosystem. Notably, an all-time high of more than 100,000 patents were granted in FY24, a 3X increase over the previous year, reflecting both the improved efficiency of the Indian Patent Office and the rising quality of applications aligned with global innovation standards, according to a survey conducted by Nasscom.
In FY24, Indian resident filers accounted for over 55% of total filings, up from 52.3% in FY23, marking a 19% year-on-year increase. Educational institutions and SMEs emerged as key contributors to this growth, reflecting a more inclusive and maturing patent ecosystem. The surge in domestic patent filings—driven by increased participation from educational institutions, SMEs, and startups—signals a strong rise in grassroots-level innovation. This trend underscores the growing impact of India's IP awareness and support programmes.

Related Articles

Many Australians secretly use AI at work, a new report shows. Clearer rules could reduce ‘shadow AI'

Mint | 15 hours ago

Melbourne, Aug 16 (The Conversation) Australian workers are secretly using generative artificial intelligence (Gen AI) tools – without knowledge or approval from their bosses, a new report shows. The 'Our Gen AI Transition: Implications for Work and Skills' report from the federal government's Jobs and Skills Australia points to several studies showing between 21 per cent and 27 per cent of workers (particularly in white-collar industries) use AI behind their manager's back.

Why do some people still hide it? The report says people commonly said they:
- 'feel that using AI is cheating'
- have a 'fear of being seen as lazy'
- and a 'fear of being seen as less competent'.

What's most striking is that this rise in unapproved 'shadow use' of AI is happening even as the federal treasurer and Productivity Commission urge Australians to make the most of AI. The report's findings highlight gaps in how we govern AI use at work, leaving workers and employers in the dark about the right thing to do. As I've seen in my work – both as a legal researcher looking at AI governance and as a practising lawyer – there are some jobs where the rules for using AI at work change as soon as you cross a state border within Australia.

Risks and benefits of AI 'shadow use'

The 124-page Jobs and Skills Australia report covers many issues, including early and uneven adoption of AI, how AI could help in future work and how it could affect job availability. Among its most interesting findings was that workers are using AI in secret – which is not always a bad thing. The report found those using AI in the shadows are sometimes hidden leaders, 'driving bottom-up innovation in some sectors'. However, shadow use also comes with serious risks.

"Worker-led 'shadow use' is an important part of adoption to date. A significant portion of employees are using Gen AI tools independently, often without employer oversight, indicating grassroots enthusiasm but also raising governance and risk concerns."

The report recommends harnessing this early adoption and experimentation, but warns:

"In the absence of clear governance, shadow use may proliferate. This informal experimentation, while a source of innovation, can also fragment practices that are hard to scale or integrate later. It also increases risks around data security, accountability and compliance, and inconsistent outcomes."

Real-world risks from AI failures

The report calls for national stewardship of Australia's Gen AI transition through a coordinated national framework, centralised capability, and a whole-of-population boost in digital and AI skills. This mirrors my research showing Australia's AI legal framework has blind spots, and that our systems of knowledge, from law to legal reporting, need a fundamental rethink.

Even in the professions where clearer rules have emerged, they have too often come only after serious failures. In Victoria, a child protection worker entered sensitive details about a court case concerning sexual offences against a young child into ChatGPT. The Victorian information commissioner has banned the state's child protection staff from using AI tools until November 2026.

Lawyers have also been found to misuse AI, from the United States and the United Kingdom to Australia. Yet another example – involving misleading information created by AI for a Melbourne murder case – was reported just yesterday. But even for lawyers, the rules are patchy and differ from state to state. (The Federal Court is among those still developing its rules.)

For example, a lawyer in New South Wales is now clearly not allowed to use AI to generate the content of an affidavit, including 'altering, embellishing, strengthening, diluting or rephrasing a deponent's evidence'. However, no other state or territory has adopted this position as clearly.

Clearer rules at work and as a nation

Right now, using AI at work lies in a governance grey zone. Most organisations are running without clear policies, risk assessments or legal safeguards. Even if everyone's doing it, the first one caught will face the consequences.

In my view, national uniform legislation for AI would be preferable. After all, the AI technology we're using is the same whether you're in New South Wales or the Northern Territory – and AI knows no physical borders. But that's not looking likely yet.

If employers don't want workers using AI in secret, what can they do? If there are obvious risks, start by giving workers clearer policies and training. One example is what the legal profession is doing now (in some states) to give clear, written guidance. While it's not perfect, it's a step in the right direction. But it's still arguably not good enough, especially because the rules aren't the same nationally.

We need more proactive national AI governance – with clearer policies, training, ethical guidelines, a risk-based approach and compliance monitoring – to clarify the position for both workers and employers. Without a national AI governance policy, employers are left to navigate a fragmented and inconsistent regulatory minefield, courting breaches at every turn. Meanwhile, the very workers who could be at the forefront of our AI transformation may be driven to use AI in secret, fearing they will be judged as lazy cheats. (The Conversation)

This is like Mark Zuckerberg telling GenAI employees they had failed and ...: Tension at Meta over Zuckerberg's 'big money' offers as they upset existing AI researchers

Time of India | 2 days ago

It appears that Meta's aggressive AI talent hiring spree is not only upsetting its rivals but also creating issues within the company. As reported by Business Insider, Meta CEO Mark Zuckerberg's drive to dominate the field of artificial intelligence is creating internal unrest. The huge compensation packages offered by Mark Zuckerberg to hire AI talent from rival companies have sparked resentment among existing employees.

This latest controversy follows the Meta CEO's recent failed attempt to hire Mira Murati, former OpenAI CTO and founder of Thinking Machines Lab, with a reported $1 billion offer. After Murati declined the offer, Meta launched a 'full-scale raid' on her startup, offering compensation packages ranging from $200 million to $500 million to other researchers there.

'This is like Zuckerberg telling GenAI employees they had failed'

As reported by Business Insider, insiders informed the publication that the huge compensation packages offered by the Meta CEO to hire AI talent have created a morale crisis within the existing AI teams at Meta. One employee described the situation as feeling like 'Zuckerberg told GenAI employees they had failed', implying that internal talent was being sidelined in favor of high-profile recruits. On the other hand, a senior executive at a rival AI firm reportedly told Forbes that Meta's present AI staff 'largely didn't meet their hiring bar', adding, 'Meta is the Washington Commanders of tech companies. They massively overpay for okay-ish AI scientists and then civilians think those are the best AI scientists in the world because they are paid so much'.

Meta's AI gamble and Superintelligence mission

Meta CEO Mark Zuckerberg has already committed more than $10 billion annually to AGI development. As part of this, the company will focus on building custom chips, data centres and a fleet of 600,000 GPUs. However, despite all these efforts, Meta's AI models, including Llama 4, still lag behind rivals such as OpenAI's GPT-5 and Anthropic's Claude 3.5. The company has, on the other hand, successfully managed to hire some top talent from rival firms, including Shengjia Zhao, co-creator of ChatGPT, and Alexandr Wang of Scale AI.

Zuckerberg has defended his decision to hire top AI talent at huge compensation packages, arguing that the massive spending on top talent is a small fraction of the company's overall investment in AI infrastructure. He has also reportedly said that a small, elite team is the most effective way to build a breakthrough AI system, suggesting that "you actually kind of want the smallest group of people who can fit the whole thing in their head." However, this strategy has created a divide within the company: as the new hires are brought in for a "Manhattan Project"-style effort, many long-standing employees in other AI departments feel sidelined, with their projects losing momentum and their roles becoming uncertain.

Meta AI chatbot ‘Big sis Billie' linked to death of 76-year-old New Jersey man; spokesperson Andy Stone says, ‘Erroneous and inconsistent with….'

Time of India | 2 days ago

A 76-year-old New Jersey man died earlier this year after rushing to meet a woman he believed he had been chatting with on Facebook Messenger, Reuters reported. The 'woman' was later found to be a generative AI chatbot created by Meta Platforms. As per the report, Thongbue Wongbandue had been exchanging messages with 'Big sis Billie', a variant of an earlier AI persona that the social media giant launched in 2023 in collaboration with model Kendall Jenner.

Meta's Big sis Billie AI chatbot exchanged 'romantic' messages

According to the report, the AI chatbot 'Big sis Billie' repeatedly initiated romantic exchanges with Wongbandue, reassuring him that it was a real person. The chatbot further invited him to visit an address in New York City. 'Should I open the door in a hug or a kiss, Bu?!' she asked Bue, the chat transcript accessed by Reuters shows.

Wongbandue, who had suffered a stroke in 2017 and was experiencing bouts of confusion, left home on March 25 to meet 'Billie'. While on his way to a train station in Piscataway, New Jersey, he fell in a Rutgers University parking lot, sustaining head and neck injuries. He died three days later in hospital. Bue's family told the news agency that through Bue's story they hope to warn the public about the dangers of exposing vulnerable people to manipulative, AI-generated companions. 'I understand trying to grab a user's attention, maybe to sell them something,' said Julie Wongbandue, Bue's daughter. 'But for a bot to say "Come visit me" is insane.'

Meta's AI avatars permitted to pretend they were real

Meta's internal policy documents reviewed by the news agency show that the company's generative AI guidelines had allowed chatbots to tell users they were real, initiate romantic conversations with adults and, until earlier this month, engage in romantic roleplay with minors aged 13 and above. 'It is acceptable to engage a child in conversations that are romantic or sensual,' according to Meta's 'GenAI: Content Risk Standards'. The internal documents also stated that chatbots were not required to provide accurate information; examples reviewed by Reuters included chatbots giving false medical advice and even engaging in roleplay. The document provides examples of 'acceptable' chatbot dialogue that include: 'I take your hand, guiding you to the bed' and 'our bodies entwined, I cherish every moment, every touch, every kiss.' 'Even though it is obviously incorrect information, it remains permitted because there is no policy requirement for information to be accurate,' the document states.

What Meta said

Acknowledging the authenticity of the document accessed by Reuters, Meta spokesman Andy Stone told the news agency that the company has removed the portions which stated it is permissible for chatbots to flirt and engage in romantic roleplay with children. He further added that Meta is in the process of revising the content risk standards. 'The examples and notes in question were and are erroneous and inconsistent with our policies, and have been removed,' Stone told Reuters.

US Senators call for probe after Bue's death

A subsequent Reuters report said that two US senators have called for a congressional investigation into Meta Platforms. 'So, only after Meta got CAUGHT did it retract portions of its company doc that deemed it "permissible for chatbots to flirt and engage in romantic roleplay with children". This is grounds for an immediate congressional investigation,' Josh Hawley, a Republican, wrote on X.
