
OpenAI Academy & NxtWave (NIAT) launch India's largest GenAI innovation challenge for students – The OpenAI Academy X NxtWave Buildathon
OpenAI Academy and NxtWave (NIAT) have come together to launch the OpenAI Academy X NxtWave Buildathon, the largest GenAI innovation challenge aimed at empowering students from Tier 1, 2, and 3 STEM colleges across India. This initiative invites the country's brightest student innovators to develop AI-powered solutions addressing pressing issues across key sectors, including healthcare, education, BFSI, retail, sustainability, agriculture, and more, under the themes 'AI for Everyday India', 'AI for Bharat's Businesses', and 'AI for Societal Good'.
A hybrid challenge driving real-world AI innovation
The Buildathon will be conducted in a hybrid format, combining online workshops and activities with regional offline finals, culminating in a grand finale where the best teams pitch live to expert judges from OpenAI India.
The participants will first complete a 6-hour online workshop focused on GenAI fundamentals, an introduction to building agents, OpenAI API usage training, and responsible AI development best practices. This foundational sprint ensures all participants are well prepared to develop innovative and impactful AI solutions using OpenAI's cutting-edge technologies.
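As a flavour of what the API usage module covers, the sketch below shows a minimal request to the OpenAI chat completions endpoint using the official Python SDK; the model name, prompts, and scenario are illustrative assumptions rather than prescribed Buildathon material.

```python
# Minimal sketch of an OpenAI API call of the kind covered in the workshop.
# Assumes the `openai` Python package is installed and OPENAI_API_KEY is set in the environment.
from openai import OpenAI

client = OpenAI()  # reads the API key from the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model choice
    messages=[
        {"role": "system", "content": "You are a concise assistant for Indian college students."},
        {"role": "user", "content": "Explain retrieval-augmented generation in two sentences."},
    ],
)

print(response.choices[0].message.content)
```

The same request pattern extends to the other models listed below by changing the `model` parameter.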
The Buildathon unfolds over three competitive stages:
Stage 1: Screening Round — Post-workshop, teams submit problem statements, project ideas, and execution plans online. A panel of mentors reviews submissions to shortlist the most promising entries.
Stage 2: Regional Finals — Shortlisted teams participate in an intensive 48-hour offline Buildathon held across 25–30 STEM colleges, with hands-on mentor support. Regional winners are announced following this stage.
Stage 3: Grand Finale — The top 10–15 teams from regional finals compete in the Grand Finale, pitching their solutions live to expert judges.
Build with the best tools in AI
Participants will have access to the latest in AI innovation, including the GPT-4.1, GPT-4o, GPT-4o Audio, and GPT-4o Realtime models, which support multimodal inputs such as text, image, and audio. They will also work with tools like LangChain, vector databases (Pinecone, Weaviate), MCPs, and the OpenAI Agents SDK.
These tools will empower students to build high-impact, multimodal, action-oriented GenAI applications. Hands-on mentorship and structured support will guide participants throughout the process.
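To make the multimodal claim concrete, here is a minimal sketch of a text-plus-image request to a GPT-4o-class model via the OpenAI Python SDK; the image URL, prompt, and model name are hypothetical placeholders for whatever a team actually builds.

```python
# Sketch of a multimodal request: a text question and an image URL sent in one message.
# Assumes the `openai` package is installed and OPENAI_API_KEY is set; all values are illustrative.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative multimodal model
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Identify the crop disease visible in this leaf photo and suggest a low-cost remedy."},
                {"type": "image_url",
                 "image_url": {"url": "https://example.com/leaf.jpg"}},  # hypothetical image URL
            ],
        }
    ],
)

print(response.choices[0].message.content)
```

Agent-style behaviour (tool calls, multi-step plans) would typically sit on top of calls like this, for example via the OpenAI Agents SDK or LangChain mentioned above.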
Widespread reach, diverse participation
The Buildathon aims to empower 25,000+ students across seven states: Telangana, Karnataka, Maharashtra, Andhra Pradesh, Tamil Nadu, Rajasthan, and Delhi NCR. The Grand Finale will be hosted in Hyderabad or Delhi. With coverage across all major zones of India, the event ensures nationwide representation and diversity.
Evaluation criteria across all stages
The participants will be evaluated in three stages. In the Screening Round, mentors will assess submissions based on problem relevance, idea feasibility, and the proposed use of OpenAI APIs. During the Regional Finals, on-ground judges will evaluate the prototypes for innovation, depth of OpenAI API integration, societal impact, and business viability. Finally, in the Grand Finale, an expert panel will judge the top teams using the same criteria, with greater weightage given to execution quality and the effectiveness of live pitching.
Exciting rewards & career-boosting opportunities
Participants in the Buildathon will gain access to a wide range of exclusive benefits designed to boost their skills, visibility, and career prospects. All selected teams will receive hands-on training along with mentorship from leading AI experts across the country. Top-performing teams will earn certificates, GPT+ credits for prototyping, and national-level recognition. They'll also gain a rare opportunity to pitch directly to the OpenAI Academy's India team during the Grand Finale. Winners will receive prize money worth Rs 10,00,000 in total, along with career opportunities in the OpenAI ecosystem.
A nation-wide movement for GenAI talent
Driven by NxtWave (NIAT), the Buildathon aligns with India's mission to skill its youth in future technologies. With OpenAI Academy bringing in expert guidance, branding, and cutting-edge tools, this initiative is poised to become a defining moment in India's AI journey, while offering students across the country a real chance to build and shine on a national stage.
This landmark initiative aims to position OpenAI Academy at the forefront of India's AI talent development, activating over 25,000 students across 500+ campuses and generating more than 2,000 AI projects tackling real-world challenges. Through collaborative efforts, OpenAI Academy and NxtWave seek to foster a vibrant community of AI builders ready to drive innovation and impact across India.
By enabling thousands of OpenAI-powered projects, the OpenAI Academy X NxtWave Buildathon sets the stage for a new wave of AI builders ready to innovate for India and beyond.
Disclaimer - The above content is non-editorial, and TIL hereby disclaims any and all warranties, expressed or implied, relating to it, and does not guarantee, vouch for or necessarily endorse any of the content.
India is at the early stages of a cybersecurity boom, with strong potential to produce global leaders, says Accel partner Prayank Swaroop. Despite over 1,400 startups, only a few are funded or listed. With AI reshaping security, Accel urged Indian founders to seize this $377 billion global opportunity. Tired of too many ads? Remove Ads Tired of too many ads? Remove Ads India is in the early innings of a cybersecurity breakout, and this moment could define the next generation of global security companies , according to Prayank Swaroop , partner at Accel Speaking to PTI at Accel's Cybersecurity Summit in Bengaluru, Swaroop issued a clear call to action for Indian founders."India has over 1,400 cybersecurity startups , but only 235 have been funded, and just six have gone public. We're barely scratching the surface," he this is a $377 billion opportunity over the next three years, he said adding: "Indian founders have a real shot at building for that".Accel, one of the earliest investors in CrowdStrike, now valued at over $116 billion, sees familiar patterns emerging."We backed CrowdStrike when it had under $5 million in revenue. We led three rounds before its IPO. Great cybersecurity companies take time, but when they land, they reshape the industry," Swaroop traditional segments like network and identity security continue to grow at 12-24 per cent CAGR (compounded annual growth rate), Swaroop believes GenAI will define the next wave."AI is rewriting the playbook. It's blurring identity, scaling social engineering, and overwhelming SecOps. This is not an incremental shift. It's foundational," he pointed to fast-emerging opportunities in deepfake detection , GenAI copilots for SOC ( security operations centre ) teams, and new frameworks for digital identity."These aren't edge cases. They're becoming core workflows. Founders who build with speed and depth of insight will have an edge," he only 17 cybersecurity acquisitions in India to date, Swaroop's message is clear: "This is India's moment to lead in global security. The ambition is here. The timing is right."Among the largest industry gatherings in India dedicated solely to cybersecurity, Accel's summit served as a platform for knowledge exchange, ecosystem building, and cross-border underscored India's growing relevance in the global cybersecurity value chain and reinforced Accel's commitment to supporting bold, globally ambitious founders shaping the next generation of security innovation.