Shastra VC and MGA Ventures Lead USD 1 Mn Investment in Sports Tech Startup KhiladiPro

Entrepreneur · a day ago

The capital infusion will be used to scale KPro's proprietary AI technology, expand its domestic footprint, and strengthen support systems for young athletes across India.
KhiladiPro (KPro), a Bengaluru-based Visual AI sports tech startup, has raised USD 1 million in a funding round led by Shastra VC and MGA Ventures.
The round also saw participation from notable investors including M Pallonji, Jeena & Co., Ayaz Billawala, Nimesh Kampani, and Jaimin Bhat, former CFO of Kotak Bank.
Founded in August 2023 by Utkarsh Yadav, KPro is on a mission to democratise athletic talent discovery and youth fitness development using cutting-edge Visual AI. "This funding validates our mission to make world-class sports science and coaching accessible to every child in India on their smartphones, regardless of geography or background," said Yadav. "We're empowering current and future generations of khiladis to chase their sporting dreams."
KPro's offerings include AI-driven sports ability tests for cricket and badminton, the Khiladi Ability Index (KAI), billed as India's first AI benchmark for youth fitness, the KPro Olympiad for schools, and the Khiladi Klub for high-potential youth. These tools enable mobile-based, standardised assessments that generate expert-level insights and personalised video feedback. Built on global fitness frameworks such as Fundamental Motor Skills (FMS) and Long-Term Athlete Development (LTAD), KPro supports early talent identification and structured athletic growth.
With over 56 proprietary AI models developed in-house and collaborations with major sports associations such as the Karnataka Badminton Association and the Handball Association of India, KPro aims to conduct 200,000 assessments by 2025. Its inclusive 6-pincode marketing strategy targets outreach in India's underserved Tier III and IV towns, aligning closely with national initiatives like Khelo India and the 2036 Olympic vision.
Investor Jay Desai of MGA Ventures said, "KhiladiPro represents the rare confluence of deep-tech innovation and social impact. It's one of the most exciting early-stage ventures in India's sports-tech space."
Now poised for international expansion to Australia and the UAE, KPro is not just redefining youth fitness—it's laying the foundation for India's Olympic future through technology, inclusion, and purpose-driven innovation.


Related Articles

AI Talent Pipeline: How Nations Compete In The Global AI Race

Forbes · 34 minutes ago

The artificial intelligence revolution is often framed as a clash of algorithms and infrastructure, but its true battleground lies in the minds shaping it: the AI talent. While nations vie for technological supremacy, the real contest revolves around human capital, around who educates, retains, and ethically deploys the brightest minds. This deeper struggle, unfolding across universities, immigration offices, and corporate labs, beneath the surface of flashy model releases and geopolitical posturing, will determine whether AI becomes a force for equitable progress or a catalyst for deeper global divides.

China's educational machinery is producing graduates at an unprecedented scale. In 2024, approximately 11.79 million students graduated from Chinese universities, an increase of 210,000 over the previous year, and around 1.6 million specialized in engineering and technology in 2022. By comparison, the United States produces far fewer graduates (around 4.1 million overall in 2022), with only 112,000 graduating with computer and information science degrees.

This numerical advantage is reshaping the global talent landscape. According to MacroPolo's Global AI Talent Tracker, China has expanded its domestic AI talent pool significantly, with the percentage of the world's top AI researchers originating from China (based on undergraduate degrees) rising from 29% in 2019 to 47% in 2022. However, the United States remains "the top destination for top-tier AI talent to work," hosting approximately 60% of top AI institutions.

Despite this, America's approach to talent faces structural challenges. The H-1B visa cap, which limits skilled foreign workers, is "set at 65,000 per fiscal year, with an additional 20,000 visas for those with advanced U.S. degrees." This self-imposed constraint limits the tech industry's ability to recruit globally and pushes companies like Google and Microsoft to establish research centers in Toronto and London, a form of technological offshoring driven not by cost but by talent access.

Europe's predicament completes this global triangle of talent misallocation. The continent invests heavily in AI education (ETH Zurich and Oxford produce world-class specialists), only to watch a significant number of graduates board one-way flights to California, lured by higher salaries and cutting-edge projects. This exodus risks creating a vicious cycle: fewer AI experts means fewer competitive startups, which means fewer reasons for graduates to stay, a continental brain drain that undermines Europe's digital sovereignty.

As AI permeates daily life, a quieter conflict emerges: balancing innovation with ethical guardrails. Europe's GDPR has levied huge fines for data misuse since 2018, such as the $1.3 billion penalty imposed on Meta for transferring EU citizens' data to the US without adequate safeguards, while China's surveillance networks track citizens through 626 million CCTV cameras. U.S. giant Apple was recently fined $95 million amid claims it had been spying on users over a decade-long period. These divergent approaches reflect fundamentally different visions for AI's role in society. "AI is a tool, and its values are human values. That means we need to be responsible developers as well as governors of this technology, which requires a framework," argues Fei-Fei Li, the Stanford AI pioneer. Her call for ethical ecosystems contrasts sharply with China's state-driven model, where AI development aligns with national objectives like social stability and economic planning.
The growing capabilities of AI raise significant questions about the future of work. Andrew Ng, co-founder of Google Brain, observes: "AI software will be in direct competition with a lot of people for a lot of jobs." McKinsey estimates that 30% of U.S. jobs could be automated by 2030, but the impact will vary widely. According to the WEF Future of Jobs Report 2025, "on average, workers can expect that two-fifths (39%) of their existing skill sets will be transformed or become outdated over the 2025-2030 period." The report goes on to state that "the fastest declining roles include various clerical roles, such as cashiers and ticket clerks, alongside administrative assistants and executive secretaries, printing workers, and accountants and auditors." These changes are accelerated by AI and information processing technologies, as well as by growing digital access for businesses.

The panic such figures could provoke is palpable, but some argue that the real challenge isn't job loss; it's ensuring that displaced workers can transition to new roles. Countries investing in vocational AI training, like Germany's Industry 4.0 initiative, could achieve smoother workforce transitions.

By 2030, China could reverse its brain drain through initiatives like the Thousand Talents Plan, which has already repatriated over 8,000 scientists since 2008. If successful, this repatriation could supercharge domestic innovation while depriving Western labs of critical expertise.

Europe's stringent AI Act may inadvertently cede ground to less regulated regions. Businesses could start self-censoring AI projects to avoid EU compliance costs: according to McKinsey, complexity, risk governance, data governance, and talent are the four major challenges facing EU organisations in light of the Act, and only 4% of its survey respondents thought the Act's requirements were even clear. This could create an innovation vacuum, pushing experimental AI development to jurisdictions with less stringent oversight.

Breakthroughs in quantum computing could also reshape talent flows. IBM's Condor and China's Jiuzhang 3.0 are vying to crack quantum supremacy, with the winner likely to attract a new wave of specialists. "We need to make the world quantum-ready today by focusing on education and workforce development," says Alessandro Curioni, vice-president at IBM Research Europe. A recent WEF report on quantum technologies warns that "demand for experts is outpacing available talent and companies are struggling to recruit people in this increasingly competitive and strategic industry."

[Image: An IBM quantum data center. Credit: dpa/picture alliance via Getty Images]

The focus on human talent suggests a more nuanced understanding of AI development, one that values creative problem-solving and ethical considerations alongside technical progress. "I think the future of global competition is, unambiguously, about creative talent," explains Vivienne Ming, executive chair and co-founder of Socos Labs. "Everyone will have access to amazing AI. Your vendor on that will not be a huge differentiator. Your creative talent, though: that will be who you are." Similarly, Silvio Savarese, executive vice president and chief scientist at Salesforce AI Research, believes: "AI is placing tools of unprecedented power, flexibility and even personalisation into everyone's hands, requiring little more than natural language to operate. They'll assist us in many parts of our lives, taking on the role of superpowered collaborators."

The employment landscape for AI talent faces complex challenges. In China, the record graduating class of 11.8 million students in 2024 confronts a difficult job market, with youth unemployment in urban areas at 18.8% in August, the highest figure for the year. These economic realities are forcing both countries to reconsider how they develop, attract, and retain AI talent. The competition isn't just about who can produce or attract the most talent; it's about who can create environments where that talent can thrive and innovate responsibly.

The AI race transcends technical capabilities and infrastructure. While computing power, algorithms, and data remain important, human creativity, ethics, and talent will ultimately determine how AI shapes our future. As nations compete for AI dominance, they must recognize that sustainable success requires nurturing not just technical expertise but also creative problem-solving and ethical judgment, skills that remain distinctly human even as AI capabilities expand. In this regard, success isn't about which nation develops the smartest algorithms but about which creates environments where human AI talent can flourish alongside increasingly powerful AI systems. This human element may well be the deciding factor in who leads the next phase of the AI revolution.

Meta plans to automate many of its product risk assessments

Yahoo · 35 minutes ago

An AI-powered system could soon take responsibility for evaluating the potential harms and privacy risks of up to 90% of updates made to Meta apps like Instagram and WhatsApp, according to internal documents reportedly viewed by NPR. NPR says a 2012 agreement between Facebook (now Meta) and the Federal Trade Commission requires the company to conduct privacy reviews of its products, evaluating the risks of any potential updates. Until now, those reviews have been largely conducted by human evaluators.

Under the new system, Meta reportedly said product teams will be asked to fill out a questionnaire about their work, then will usually receive an "instant decision" with AI-identified risks, along with requirements that an update or feature must meet before it launches. This AI-centric approach would allow Meta to update its products more quickly, but one former executive told NPR it also creates "higher risks," as "negative externalities of product changes are less likely to be prevented before they start causing problems in the world."

In a statement, Meta seemed to confirm that it is changing its review system, but it insisted that only "low-risk decisions" will be automated, while "human expertise" will still be used to examine "novel and complex issues."

This article originally appeared on TechCrunch.
