
NimbleEdge Open-Sources the Future of AI, Launches World's First On-Device Agentic AI Platform
With the launch of DeliteAI, NimbleEdge is putting a full-stack, production-ready on-device AI platform into the hands of developers and ML engineers. The open source stack enables fast, cross-platform deployment of transformer models, LLMs, and multimodal AI, all without relying on cloud infrastructure or high-end GPUs.
This comprehensive launch includes three major components:
- A production-ready SDK with an optimized inference stack and the industry's first on-device Python runtime for orchestrating agentic workflows
- A dedicated Agent Marketplace where developers can discover and integrate pre-built AI agents into their mobile applications
- The NimbleEdge Assistant, the world's first fully on-device conversational AI assistant with built-in productivity capabilities
'We believe AI shouldn't sit miles away from us in data centres,' said Varun Khare, Co-Founder and CEO of NimbleEdge. 'With this launch, any mobile application can scale AI to billions of users while improving data safety and user privacy. They own the entire stack, from the models they use to the way intelligence shows up in their products, bringing a unique AI-enabled experience to their users.'
Designed to address the limitations of cloud-based AI, which requires constant connectivity, increases latency, creates privacy risks, and incurs unsustainable operational costs, NimbleEdge also fills critical gaps in the existing on-device AI ecosystem. Until now, developers lacked unified tooling, standardized runtimes, and a robust marketplace of ready-to-integrate agents to build sophisticated AI-native experiences on smartphones.
NimbleEdge's platform empowers any company or developer to bring their own models, including Llama, Gemma, or Qwen, and run all inference directly on the user's device, abstracting away the complexities of diverse mobile hardware and managing runtimes like ONNX, LiteRT, or ExecuTorch. This architecture ensures that no personal data ever leaves local hardware, allowing organizations to fine-tune and deploy large language models and agents entirely offline. Unlike proprietary assistants tied to a single ecosystem, NimbleEdge makes it possible to create completely customized workflows and branded assistants, all powered by an open-source, on-device ecosystem without external dependencies.
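To make that concrete, here is a minimal sketch of the pattern the release describes: a locally stored model wrapped behind one small interface so application code never has to care which runtime executes it. The class name, file path, and tensor names below are illustrative assumptions, not the DeliteAI API; only the ONNX Runtime calls are real.

    # Minimal sketch (assumed names, not the DeliteAI SDK): hide the runtime
    # behind one interface so app code stays device- and runtime-agnostic.
    import numpy as np
    import onnxruntime as ort  # one of the runtimes named above

    class OnDeviceModel:
        """Runs inference on a model file that never leaves the device."""

        def __init__(self, model_path: str):
            # ONNX Runtime picks an available execution provider
            # (CPU by default; NNAPI/CoreML builds exist for mobile hardware).
            self.session = ort.InferenceSession(model_path)

        def run(self, inputs: dict) -> list:
            # None -> return every declared model output
            return self.session.run(None, inputs)

    # Hypothetical usage; the path and input name depend on the exported model.
    # model = OnDeviceModel("/data/local/tmp/llama-3b-int4.onnx")
    # logits = model.run({"input_ids": np.array([[1, 2, 3]], dtype=np.int64)})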
Neeraj Poddar, Co-Founder and CTO at NimbleEdge, added, 'For the first time, developers can bring state-of-the-art AI models to consumer devices, orchestrate them with Python, and deploy truly private AI agents at scale. This is the missing infrastructure layer and developer tooling we wished existed when we were building distributed systems at global scale, and now it's open for everyone.'
NimbleEdge has already demonstrated this capability at scale, powering AI & ML experiences across more than 30 million devices in production deployments for gaming and e-commerce apps, with AI infrastructure partners such as PyTorch and ONNX. The platform's on-device Python runtime allows developers to build dynamic, real-time applications with familiar tools while maintaining full control over data flows, and the Agent Marketplace provides a growing library of plug-and-play agents for tasks like summarization, recommendations, and speech processing.
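As a sense of what 'plug-and-play' could look like inside that Python runtime, the sketch below uses a small registry: each agent registers itself under a task name, and the host app dispatches to it by name. Every identifier here is a hypothetical example for illustration, not a name from the Agent Marketplace or the DeliteAI SDK.

    # Illustrative agent registry (hypothetical names): pre-built agents
    # register under a task name; the app looks them up and calls them.
    from typing import Callable, Dict

    AGENTS: Dict[str, Callable[[str], str]] = {}

    def register_agent(name: str):
        """Decorator that adds an agent callable to the local registry."""
        def wrap(fn: Callable[[str], str]) -> Callable[[str], str]:
            AGENTS[name] = fn
            return fn
        return wrap

    @register_agent("summarize")
    def summarize(text: str) -> str:
        # Placeholder logic; a real agent would call the on-device LLM here.
        first_sentence = text.split(". ")[0]
        return first_sentence if len(first_sentence) < 160 else text[:160]

    if __name__ == "__main__":
        note = "On-device agents keep data local. They also avoid network round trips."
        print(AGENTS["summarize"](note))  # -> "On-device agents keep data local"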
Aakrit Vaish, early investor in NimbleEdge and a member of the India AI Mission, commented, 'NimbleEdge reflects the innovative work happening in India's AI ecosystem. By bringing agentic AI capabilities on-device, they're addressing important challenges around privacy and latency and creating open source infra that can reach users at scale in India and globally.'
India's AI Mission aims to build sovereign, privacy-preserving AI infrastructure that can scale to 1.4 billion people. NimbleEdge's open, on-device platform aligns with this vision by enabling AI to run natively on India's vast base of smartphones, reducing reliance on scarce data center compute and ensuring personal data stays on users' devices. By complementing public initiatives like UPI, ONDC, and the DPDP Act, NimbleEdge can help accelerate India's leadership in trusted, accessible AI.
The NimbleEdge Platform, Agent Marketplace, and Assistant are available today. Developers can explore the source code on GitHub and join the DeliteAI Discord community to connect and collaborate. Enterprises can engage NimbleEdge for deployment support and advanced features. NimbleEdge is committed to redefining the future of AI with an open, privacy-first infrastructure that democratizes access to advanced intelligence on billions of devices worldwide.
Website: https://www.nimbleedge.com/
(ADVERTORIAL DISCLAIMER: The above press release has been provided by VMPL. ANI will not be responsible in any way for the content of the same)
Related Articles


Indian Express
an hour ago
Are you in a mid-career to senior job? Don't fear AI – you could have this important advantage
Have you ever sat in a meeting where someone half your age casually mentions 'prompting ChatGPT' or 'running this through AI', and felt a familiar knot in your stomach? You're not alone.

There's a growing narrative that artificial intelligence (AI) is inherently ageist, that older workers will be disproportionately hit by job displacement and are more reluctant to adopt AI tools. But such assumptions – especially that youth is a built-in advantage when it comes to AI – might not actually hold. While ageism in hiring is a real concern, if you have decades of work experience, your skills, knowledge and judgement could be exactly what's needed to harness AI's power – without falling into its traps.

The research on who benefits most from AI at work is surprisingly murky, partly because it's still early days for systematic studies on AI and work. Some research suggests lower-skilled workers might have more to gain than high-skilled workers on certain straightforward tasks. The picture becomes much less clear under real-world conditions, especially for complex work that relies heavily on judgement and experience.

Through our Skills Horizon research project, where we've been talking to Australian and global senior leaders across different industries, we're hearing a more nuanced story. Many older workers do experience AI as deeply unsettling. As one US-based CEO of a large multinational corporation told us: 'AI can be a form of existential challenge, not only to what you're doing, but how you view yourself.'

But leaders are also observing an important and unexpected distinction: experienced workers are often much better at judging the quality of AI outputs. This might become one of the most important skills, given that AI occasionally hallucinates or gets things wrong. The CEO of a South American creative agency put it bluntly: 'Senior colleagues are using multiple AIs. If they don't have the right solution, they re-prompt, iterate, but the juniors are satisfied with the first answer, they copy, paste and think they're finished. They don't yet know what they are looking for, and the danger is that they will not learn what to look for if they keep working that way.'

Experienced workers have a crucial advantage when it comes to prompting AI: they understand context and usually know how to express it clearly. While a junior advertising creative might ask an AI to 'Write copy for a sustainability campaign', a seasoned account director knows to specify 'Write conversational social media copy for a sustainable fashion brand targeting eco-conscious millennials, emphasising our client's zero-waste manufacturing process and keeping the tone authentic but not preachy'. This skill mirrors what experienced professionals do when briefing junior colleagues or freelancers: providing detailed instructions, accounting for audience, objectives, and constraints. It's a competency developed through years of managing teams and projects.

Younger workers, despite their comfort with technology, may actually be at a disadvantage here. There's a crucial difference between using technology frequently and using it well. Many young people may become too accustomed to AI assistance. A survey of US teens this year found 72 per cent had used an AI companion app. Some children and teens are turning to chatbots for everyday decisions.
Without the professional experience to recognise when something doesn't quite fit, younger workers risk accepting AI responses that feel right – effectively 'vibing' their work – rather than developing the analytical skills to evaluate AI usefulness.

So what can you do? First, everyone benefits from learning more about AI. In our time educating everyone from students to senior leaders and CEOs, we find that misunderstandings about how AI works have little to do with age. A good place to start is reading up on what AI is and what it can do for you: What is AI? Where does AI come from? How does AI learn? What can AI do? What makes a good AI prompt? If you're not even sure which AI platform to try, we would recommend testing the most prominent ones: OpenAI's ChatGPT, Anthropic's Claude, and Google's Gemini.

If you're an experienced worker feeling threatened by AI, lean into your strengths. Your decades of experience with delegation, context-setting, and critical evaluation are exactly what AI tools need. Start small. Pick one regular work task and experiment with AI assistance, using your judgement to evaluate and refine outputs. Practise prompting like you're briefing a junior colleague: be specific about context, constraints, and desired outcomes, and repeat the process as needed.

Most importantly, don't feel threatened. In a workplace increasingly filled with AI-generated content, your ability to spot what doesn't quite fit, and to know what questions to ask, has never been more valuable.


Economic Times
an hour ago
OpenAI announces million-dollar bonuses to nearly 1,000 employees to retain AI talent
ChatGPT maker OpenAI has announced massive bonus payouts for about 1,000 employees, approximately one-third of its full-time workforce. On the eve of GPT-5's launch, OpenAI CEO Sam Altman sent a surprise message to employees via the communication platform Slack.

A quarterly bonus for two years was awarded to researchers and software engineers in the firm's applied engineering, scaling, and safety domains, according to The Verge. The payouts vary by role and seniority: top researchers will receive bonuses in the mid-single-digit millions of dollars, while engineers will get hundreds of thousands. Bonuses will be distributed quarterly for two years and can be received in stock, cash, or a combination of both.

Altman said the rise in compensation was a result of market dynamics, likely driven by the demand for AI talent. "As we mentioned a few weeks ago, we have been looking at comp for our technical teams given the movement in the market," The Verge cited Altman's message to employees as saying. "We very much intend to keep increasing comp as we keep doing better and better as a company," he wrote. "But we wanted to be transparent about this one since it's a new thing for us," he added.

Tech giants and well-funded startups in Silicon Valley are intensifying competition for AI expertise, announcing bonuses to attract talent. Altman has recently lost several key researchers to Meta, while Elon Musk's xAI is also seeking to attract talent.

India is OpenAI's second-largest market in the world after the US, and it may well become its biggest market in the near future, according to Altman. GPT-5 is available to all users, with Plus subscribers getting more usage and Pro subscribers getting access to GPT-5 pro, a version with extended reasoning for even more comprehensive and accurate answers. "GPT-5 is a unified system with a smart, efficient model that answers most questions, a deeper reasoning model (GPT-5 thinking) for harder problems, and a real-time router that quickly decides which to use based on conversation type, complexity, tool needs, and your explicit intent," the company noted.


Mint
an hour ago
OpenAI's $500 billion ambition puts it in elite club—and in the crosshairs
Just a week after OpenAI secured fresh funding at a $300-billion valuation, reports emerged of potential share sales at $500 billion. If it goes through, that kind of valuation would place OpenAI among only around 20 companies valued at over half a trillion dollars globally.

OpenAI's latest funding round, worth $8.3 billion, was oversubscribed five times. The investor appetite reflected confidence in the AI startup's ability to dominate a market that the UN Trade and Development projects will explode by 25 times in size in a decade.

OpenAI's momentum is undeniable. The company has continuously upgraded its flagship ChatGPT product, recently launching GPT-5, which it claims can provide PhD-level expertise. Financially, its revenues have doubled in seven months, reaching $1 billion a month, with projections to hit $20 billion in annualised revenue by the end of the year.

The capital influx will primarily help OpenAI scale its compute infrastructure, particularly Stargate, a joint venture with Japanese investment firm SoftBank and technology company Oracle to build the world's largest AI supercomputing infrastructure. OpenAI is also setting up its first data centre in Europe next year, which will house 100,000 Nvidia processors.

This infrastructure investment is critical as companies race to control the data centres and AI chips essential for training and operating advanced artificial intelligence models. The numbers reflect this reality: global data centre capacity surged from 20GW in 2016 to 57GW in 2024, with Goldman Sachs projecting 122GW by 2030. While OpenAI's valuation reflects investor confidence, the fundraising itself underscores the infrastructure investments needed to maintain leadership in the AI market.

Challenger pack

OpenAI faces growing competition from well-funded AI startups. Anthropic, founded by former OpenAI employees, is nearing a $5 billion funding round that would value it at $170 billion, up from $61.5 billion in March. Elon Musk's xAI has raised $10 billion at an $80 billion valuation and is seeking additional funding at a potential $200 billion valuation. Venture capital funding to AI companies has exceeded $40 billion in each of the past three quarters, according to Crunchbase.

This financial backing is translating into competitive model performance. On the GPQA Diamond benchmark, which tests PhD-level science questions, xAI's Grok 4 Heavy scored 88.9% and Anthropic's Claude Opus 4.1 scored 80.9%. The landscape shifted when Chinese startup DeepSeek released powerful open-weight models available for free. OpenAI released its own open-weight models in response. The competition now spans both proprietary and open-source approaches.

Incumbent advantage

OpenAI also faces pressure from the Big Tech firms. Meta, Google, Amazon, and Microsoft have collectively spent $291 billion over the past year, largely for AI infrastructure. Last month, in a $2.4 billion deal, Google hired key executives from Windsurf, an AI coding company that OpenAI wanted to acquire. Google has also integrated 'AI Overviews' with its search engine, turning it into an 'answer engine' that directly competes with the core function of chatbots like ChatGPT. This strategy leverages Google's 2 billion monthly users and its market dominance. Meta, meanwhile, is restructuring its AI division into Meta Superintelligence Labs. It has also acquired top-tier AI researchers from OpenAI, with multi-million-dollar compensation packages.
Partner paradox

OpenAI's relationship with Microsoft, however, has turned complicated. Microsoft, OpenAI's primary backer with a $13.75 billion investment, is also a direct competitor seeking to lead the AI revolution. Copilot, Microsoft's AI platform, boasts over 100 million monthly users. Microsoft's server products and cloud services revenue jumped 27% year-over-year in the three months ended 30 June, driven by growth in Azure, its cloud computing platform.

Microsoft holds crucial leverage as OpenAI attempts to convert into a for-profit company—a prerequisite for unlocking SoftBank funding and IPO plans. However, Microsoft has been withholding approval as both companies negotiate revising their contract, set to expire in 2030. A major sticking point is a clause that could terminate Microsoft's access to future OpenAI technology if the startup's board declares that artificial general intelligence—AI's capacity to learn and understand like humans and apply that knowledge to execute tasks—has been achieved.

This friction has real consequences: OpenAI's attempt to acquire AI coding startup Windsurf failed because Microsoft's IP rights would have extended to the new technology, which Windsurf rejected. OpenAI needs capital to overcome these structural challenges and funding obstacles.