
India's Gen Z Embraces the GenAI Future: Great Learning Mobilises 16,000 Youth in a 3-Day GenAI Upskilling Sprint
With participation tracked on a leaderboard, winning institutions were crowned in two categories: GenAI Super Squad (highest number of unique participants) and GenAI Masterminds (most course completions). In both categories, Sri Eshwar College of Engineering, Coimbatore; New Horizon College of Engineering; and PES University, Bangalore secured the top three positions.
Commenting on the challenge, Aparna Mahesh, Chief Marketing Officer, Great Learning, said, 'What we witnessed with The Great Learning AI Challenge was truly heartening. Gen Z is not just adapting to the AI revolution; they are leading it with passion and purpose. While we did set out to offer free courses, it sparked something bigger, a sense of possibility, of confidence, of being future-ready. The energy and enthusiasm we witnessed from students across the country reflected a generation eager to shape India's future in technology and innovation. What began as a learning initiative quickly turned into a national movement and reminded us why we do what we do. At Great Learning, we are committed to keeping this momentum alive by empowering youth with meaningful opportunities to build successful, future-ready careers in an AI-driven world.'
Enrollment trends revealed a strong focus on core GenAI concepts, with Prompt Engineering for ChatGPT, Generative AI for Beginners, and ChatGPT for Beginners emerging as the most popular courses during the challenge, followed by ChatGPT for Marketing, ChatGPT for Excel, and Build a Website using ChatGPT. Other courses that drew student interest included Data Analytics Using ChatGPT with Excel and Python, ChatGPT for Business Communication, and Interview Preparation Using Gemini.
Harshit Gajendran from New Horizon College of Engineering distinguished himself as the top performer of The Great Learning AI Challenge by completing 45 GenAI courses within just three days, a truly exceptional accomplishment that demonstrated both determination and a deep commitment to upskilling. Reflecting on his journey, he shared: 'The Great Learning AI Challenge was unlike anything I have done before. It really pushed me to step out of my comfort zone and dive deep into the world of Generative AI. Finishing all the courses in just three days wasn't easy, but it helped me build a strong understanding of everything from prompt engineering to problem-solving with AI. What made it truly special was being part of something bigger, a nationwide challenge where thousands of us were learning, competing, and growing together. Being named the top performer among thousands of participants is incredibly motivating. It feels like all the effort has truly paid off.'
The initiative, anchored by the campaign tagline 'Gen Z Goes GenAI — Because careers don't start with jobs, they start with skills', underscored the growing urgency for young professionals to build expertise in emerging technologies. Designed to accelerate GenAI adoption across India, the challenge served as an accessible and inclusive platform for learners to gain in-demand skills, earn recognised credentials, and engage with AI meaningfully. By offering high-quality learning at scale and no cost, the campaign helped bridge the gap between traditional education and industry expectations, while laying a strong foundation for India's youth to thrive in an increasingly AI-powered world.

Related Articles

Mint
The spectre of AI is staring Indian IT in the face
The advance of artificial intelligence (AI) may shrink some businesses and lead to a reshuffling of employees, but it will also create new revenue streams, Hexaware Technologies Ltd CEO Srikrishna Ramakarthikeyan said. His comments came after Hexaware slipped two places in the Indian IT pecking order to No. 10, ending the June quarter with $382 million in revenue, up 2.8% sequentially. "On the negative side, I think there will be some compression in IT operations when new deals are done or when deals are renewed. There's some compression in software development as a consequence of AI," Ramakarthikeyan said on Friday.

New opportunities

Still, he said AI is going to create new revenue, even as the negative impacts are not "significant enough." "On the other hand, I think AI is going to unleash a number of new revenue opportunities for our business. The first and foremost is data," he added.

Hexaware is not the first to note the impact of the disruption. GenAI's impact will be the highest in business process outsourcing, HCL Technologies Ltd MD and CEO C. Vijayakumar said at an event two years ago. "Second will be application development, as the role of Gen AI was minimal here. Application and infrastructure operations, incremental benefit would be marginal," Vijayakumar had said. While Gen AI was not depressing prices for existing services, it could start in 2024, Vijayakumar had said then. In February this year, he said HCL Tech has been trying to "deliver twice the revenue with half the people."

Reshuffles

While AI isn't currently changing staffing, future AI-powered coding may shift Hexaware's needs from junior staff to those with specialized knowledge, leading to minor workforce adjustments, all within the context of a global talent shortage, Ramakarthikeyan said. "So, there could be marginal reshuffling, but that's what it takes to deliver service."

"We also believe that AI/Gen AI will lead to compression of revenue for the industry in the next 24-36 months as companies self-cannibalize to hold on to their existing clients," Girish Pai, head of equity research for Bank of Baroda Capital Markets, wrote in a note dated 26 July. Companies will look to achieve more with fewer people, an analyst said. "It is very possible that a 1,000-person IT services startup using AI could achieve $1 billion in the next five years," said R. Wang, founder of Constellation Research. He said IT outsourcers may adjust headcount going forward.

Recalibrating

"Expect more reductions over time as these tech majors have to recalibrate their workforces and also adjust to changing client expectations. We are in the midst of a massive transition that will transform white-collar work as we know it," Wang added. The Big Five of Indian IT reported revenue growth of 15-25% in FY22. Three years later, growth slowed to 3.8-4.3% for the full year, and two of them saw a revenue decline. At least one analyst has called out this trend in the past. "In particular, we believe that renewals will be challenging since customers will seek, and likely get, lower renewal prices than historical norms as the power and capabilities of generative AI increase," said Keith Bachman, an analyst at BMO Capital Markets, in a note dated 5 December 2024.

Headcount

This is prompting IT companies to adopt different measures. On Sunday, TCS said it would cut 2% of jobs, partly attributing its decision to AI. Earlier, HCL Technologies said it would reduce headcount outside India as automation takes up lower-end skills.
Wipro has asked many employees to complete a mandatory English assessment. Mint had reported in February that pay hikes for several senior executives at LTIMindtree would depend on a coding and general math-based assessment. "Our view is that AI-related productivity benefits could be meaningful in the 20-30% range over time. Hence, all services providers will need to 1) gain share and/or 2) enable and capture new addressable market opportunities to sustain growth. We remain concerned on impact to long-term growth from AI efficiency," said BMO Capital Markets analysts Keith Bachman, Adam J. Holets, and Bradley Clark, in a note dated 23 July.


Time of India
Researchers train AI model to respond to online political posts, find quality of discourse improved
Researchers who trained a large language model to respond to online political posts of people in the US and UK found that the quality of discourse improved. Powered by artificial intelligence (AI), a large language model (LLM) is trained on vast amounts of text data and can therefore respond to human requests in natural language.

Polite, evidence-based counterarguments by the AI system -- trained prior to performing the experiments -- were found to nearly double the chances of a high-quality online conversation and "substantially increase (one's) openness to alternative viewpoints", according to findings published in the journal Science Advances. Being open to alternative perspectives did not, however, translate into a change in one's political ideology, the researchers found.

Large language models could provide "light-touch suggestions", such as alerting a social media user to the disrespectful tone of their post, author Gregory Eady, an associate professor of political science and data science at the University of Copenhagen, Denmark, told PTI. "To promote this concretely, it is easy to imagine large language models operating in the background to alert us to when we slip into bad practices in online discussions, or to use these AI systems as part of school curricula to teach young people best practices when discussing contentious topics," Eady said.

Hansika Kapoor, a researcher at the department of psychology, Monk Prayogshala in Mumbai, an independent not-for-profit academic research institute, told PTI, "(The study) provides a proof-of-concept for using LLMs in this manner, with well-specified prompts, that can generate mutually exclusive stimuli in an experiment that compares two or more groups."

Nearly 3,000 participants -- who identified as Republicans or Democrats in the US and Conservative or Labour supporters in the UK -- were asked to write a text describing and justifying their stance on a political issue important to them, as they would for a social media post. This was countered by ChatGPT -- a "fictitious social media user" for the participants -- which tailored its argument "on the fly" according to the text's position and reasoning. The participants then responded as if replying to a social media comment.

"An evidence-based counterargument (relative to an emotion-based response) increases the probability of eliciting a high-quality response by six percentage points, indicating willingness to compromise by five percentage points, and being respectful by nine percentage points," the authors wrote in the study. Eady said, "Essentially, what you give in a political discussion is what you get: that if you show your willingness to compromise, others will do the same; that when you engage in reason-based arguments, others will do the same; etc."

AI-powered models have been critiqued and scrutinised for varied reasons, including an inherent bias -- political, and even racial at times -- and for being a 'black box', whereby the internal processes used to arrive at a result cannot be traced. Kapoor, who is not involved with the study, said that while the approach appears promising, complete reliance on AI systems for regulating online discourse may not be advisable yet. The study itself also involved humans rating the responses, she said. Additionally, context, culture, and timing would need to be considered for such regulation, she added. Eady too is apprehensive about "using LLMs to regulate online political discussions in more heavy-handed ways."
Further, the study authors acknowledged that because the US and UK are effectively two-party systems, addressing the 'partisan' nature of texts and responses was straightforward. Eady added, "The ability for LLMs to moderate discussion might also vary substantially across cultures and languages, such as in India." "Personally, therefore, I am in favour of providing tools and information that enable people to engage in better conversations, but nevertheless, for all its (LLMs') flaws, allowing nearly as open a political forum as possible," the author added.

Kapoor said, "In the Indian context, this strategy may require some trial-and-error, particularly because of the numerous political affiliations in the nation. Therefore, there may be multiple variables and different issues (including food politics) that will need to be contextualised for study here."

Another study, recently published in the 'Humanities and Social Sciences Communications' journal, found that dark personality traits -- such as psychopathy and narcissism -- a fear of missing out (FoMO), and cognitive ability can shape online political engagement. Findings of researchers from Singapore's Nanyang Technological University suggest that "those with both high psychopathy (manipulative, self-serving behaviour) and low cognitive ability are the most actively involved in online political engagement." Data from the US and seven Asian countries, including China, Indonesia and Malaysia, were analysed.

Describing the study as "interesting", Kapoor pointed out that a lot more work needs to be done in India to understand the factors that drive online political participation, ranging from personality to attitudes, beliefs and aspects such as voting behaviour. Her team, which has developed a scale to measure one's political ideology in India (published in a pre-print paper), found that dark personality traits were associated with a disregard for norms and hierarchies.


Time of India
The unnerving future of AI-fueled video games
It sounds like a thought experiment conjured by René Descartes for the 21st century: citizens of a simulated city inside a video game based on "The Matrix" franchise were being awakened to a grim reality. Everything was fake, a player told them through a microphone, and they were simply lines of code meant to embellish a virtual world. Empowered by generative artificial intelligence like ChatGPT, the characters responded in panicked disbelief. "What does that mean," said one woman in a grey sweater. "Am I real or not?"

The unnerving demo, released two years ago by an Australian tech company named Replica Studios, showed both the potential power and the consequences of enhancing gameplay with artificial intelligence. The risk goes far beyond unsettling scenes inside a virtual world. As video game studios become more comfortable with outsourcing the jobs of voice actors, writers and others to artificial intelligence, what will become of the industry?

At the pace the technology is improving, large tech companies such as Google, Microsoft and Amazon are counting on their AI programs to revolutionise how games are made within the next few years. "Everybody is trying to race toward AGI," said tech founder Kylan Gibbs, using an acronym for artificial general intelligence, which describes the turning point at which computers have the same cognitive abilities as humans. "There's this belief that once you do, you'll basically monopolise all other industries."

In the earliest months after the rollout of ChatGPT in 2022, the conversation about artificial intelligence's role in gaming was largely about how it could help studios quickly generate concept art or write basic dialogue. The applications have accelerated quickly. This spring at the Game Developers Conference in San Francisco, thousands of eager professionals looking for employment opportunities were greeted with an eerie glimpse into the future of video games. Researchers from Google DeepMind, an artificial intelligence laboratory, lectured on a new program that might eventually replace human play testers with "autonomous agents" that can run through early builds of a game and discover bugs. Developers hosted a demonstration of adaptive gameplay with an example of how artificial intelligence could study a short video and immediately generate level design and animations that would otherwise have taken hundreds of hours to complete. Executives behind the online gaming platform Roblox introduced Cube 3D, a generative AI model that could produce functional objects and environments from text descriptions in a matter of minutes.

These were not the solutions that developers were hoping to see after several years of extensive layoffs; another round of cuts in Microsoft's gaming division this month was a signal to some analysts that the company was shifting resources to artificial intelligence. Studios have suffered as expectations for hyperrealistic graphics turned even their bestselling games into financial losses. And some observers are worried that investing in AI programs with hopes of cutting overhead costs might actually be an expensive distraction from the industry's deeper problems.

Efficiency experts acknowledge that a takeover by artificial intelligence is coming for the video game industry within the next five years, and executives have already started preparing to restructure their companies in anticipation. After all, it was one of the first sectors to deploy AI programming, in the 1980s, with the four ghosts who chase Pac-Man, each responding differently to the player's real-time movements.
Sony did not respond to questions about the AI technology it is using for game development. Lee, a spokesperson for Microsoft, said, "Game creators will always be the centre of our overall AI efforts, and we empower our teams to decide on the use of generative AI that best supports their unique goals and vision."

A spokesperson for Nintendo said the company did not have further comment beyond what one of its leaders, Shigeru Miyamoto, told The New York Times last year: "There is a lot of talk about AI, for example. When that happens, everyone starts to go in the same direction, but that is where Nintendo would rather go in a different direction."

Over the past year, generative AI has shifted from a concept into a common tool within the industry, according to a survey released by organisers of the Game Developers Conference. A majority of respondents said their companies were using artificial intelligence, while an increasing number of developers expressed concern that it was contributing to job instability and layoffs. Not all responses were negative. Some developers praised the ability to use AI programs to complete repetitive tasks like placing barrels throughout a virtual world.

Despite the impressive tech demos at the conference in late March, many developers admitted that their programs were still several years away from widespread use. "There is a very big gap between prototypes and production," said Gibbs, who runs Inworld AI, a tech company that builds artificial intelligence programs for consumer applications in sectors such as gaming, health and learning. He appeared on a conference panel for Microsoft, where the company showed off its adaptive gameplay demo. Gibbs said large studios could face costs in the millions of dollars to upgrade their technology. Google, Microsoft and Amazon each hope to become the new backbone of the gaming sector by offering AI tools that would require studios to join their servers under expensive contracts.

Artificial intelligence technology has developed so fast that it has surpassed Replica Studios, the team behind the tech demo based on the "Matrix" franchise. Replica went out of business this year because of the pace of competition from larger companies. Its chief technology officer, Eoin McCarthy, said that at the height of the demo's popularity, users were generating more than 100,000 lines of dialogue from nonplayer characters, or NPCs, which cost the startup about $1,000 per day to run. That cost has fallen in recent years as the AI programs have improved, but he said that most developers were unaccustomed to these unbounded costs. There were also fears about how expensive it would be if NPCs started talking to one another.

When Replica announced it was ending the demo, McCarthy said, some players grew concerned about the fate of the NPCs. "'Were they going to continue to live or would they die?'" McCarthy recalled players asking. He would reply: "It is a technology demo. These people aren't real."