ChatGPT is second most-used tool for learning new skills for students at IIT Bombay

Indian Express | 2 days ago
ChatGPT, the popular generative Artificial Intelligence (AI) chatbot, has emerged as the second most-used tool for learning new skills among students at the Indian Institute of Technology (IIT) Bombay, behind online platforms such as Coursera. This was revealed when Insight, the institute's official student media body, released its Senior Survey 2025 report on Friday. The findings are based on responses from 282 students.
Of the 272 students who answered the question, 'How did you generally study/learn new skills in the institute?', 118 cited online platforms, while 65 mentioned ChatGPT. Only 9 students said they used library books—the traditional method of learning new things in university settings.
Despite AI tools becoming prevalent in university settings, ChatGPT remains underutilised in certain areas. Of the 138 respondents asked in which situations they had never used ChatGPT, 89 said they had not used it for resume-making, while only 46 said they had not used it for assignments and projects.
With the Senior Survey's respondents soon to enter the workforce, it is notable that 'work-life balance' and 'a career aligned with one's skill set' emerged as top priorities, ranking higher than financial compensation. Factors such as location of posting and work culture were deemed less important.
Among 269 respondents to a question on the important factors to consider when choosing a career, 29.4 percent ranked work-life balance highest. Separately, over 40 percent of 262 respondents said a career aligned with their skill set was most important.
Contrary to the perception that IIT Bombay students often move away from core engineering, the survey shows that of the 282 students who responded to the question on their immediate plans after graduation, 67 said they would continue in core engineering, while 66 planned to stay in technology.
When asked about interest in their core branch, 135 of 277 respondents said they were, and still are, inclined to pursue it. However, 58 said they had lost interest, while 65 admitted they were never inclined towards their field of study. Another 19 students said they were not interested in their core branch earlier but are now keen to pursue it.

Related Articles

'Got Humbled': Vibe Coder Caught Using AI By Boss Gets Schooled

NDTV | 2 minutes ago

For a long time, writing code meant that software engineers sat for long hours in front of a computer, typing out lines of instructions in a programming language. But in recent times, the rise of artificial intelligence (AI) has allowed anyone to 'vibe code', meaning the technology churns out the code after a user feeds it what they want. Now, an intern working at two companies who used this approach has revealed how the vibe coding tactic backfired for them.

As per the now-viral post, the user said they were using Cursor/GPT to ship the product quickly while working at two companies. "I'm currently interning at 2 companies SRE at one, and SDE at a very early-stage startup (like 20 employees). At the startup, it's just me and the CTO in tech. They're funded ($5M), but super early," wrote the user in the r/developersIndia subreddit.

While all was going well, the CTO of one of the companies started asking them in-depth questions about their code, and this is where things went pear-shaped. "The CTO started asking deep dive questions about the code. Stuff like, "Why did you structure it this way?" or "Explain what this function does internally." The code was mostly AI-generated, and I honestly couldn't explain parts of it properly."

"He straight up told me: "I don't mind if you use AI, but you have to know what your code is doing." Then he started explaining my code to me. Bruh. I was cooked." The OP said the entire experience was 'super humbling' as he had been vibe coding without really understanding the "deeper stuff like architecture, modularisation, and writing clean, production-level code".

'How did you even...'

As the post went viral, garnering hundreds of upvotes, social media users agreed with the CTO's remarks, while others questioned how the OP had landed the internship without knowing what the code meant. "I am working as QA, and you can't replace experience. You will have to learn over time. But asking questions is also a good approach. Why and how," said one user, while another added: "Get to know your application's core system design. Decide your architecture which can scale in production later. Now use this as a knowledge base in Cursor/ChatGPT."

A third commented: "If you can't say what that code is doing by looking at it, then how did you even get 2 internships?" A fourth said: "Best way to learn how to write clean code is reading open source project code. Hands down its the best way to learn plus have a curious mind."

Notably, the term vibe coding has been popularised by Andrej Karpathy, who has worked with companies like Tesla and OpenAI.

Apple CEO Tim Cook tells staff AI is 'ours to grab' in hourlong pep talk

Time of India | 31 minutes ago

Apple Inc. Chief Executive Officer Tim Cook, holding a rare all-hands meeting following earnings results, rallied employees around the company's artificial intelligence prospects and an 'amazing' pipeline of products. The executive gathered staff at Apple's on-campus auditorium Friday in Cupertino, California, telling them that the AI revolution is 'as big or bigger' than the internet, smartphones, cloud computing and apps.

'Apple must do this. Apple will do this. This is sort of ours to grab,' Cook told employees, according to people aware of the meeting. 'We will make the investment to do it.'

The iPhone maker has been late to AI, debuting Apple Intelligence months after OpenAI, Alphabet Inc.'s Google, Microsoft Corp. and others flooded the market with products like ChatGPT. And when Apple finally released its AI tools, they fell flat. But Cook struck an optimistic tone, noting that Apple is typically late to promising new technologies. 'We've rarely been first,' the executive told staffers. 'There was a PC before the Mac; there was a smartphone before the iPhone; there were many tablets before the iPad; there was an MP3 player before iPod.' But Apple invented the 'modern' versions of those product categories, he said. 'This is how I feel about AI.'

An Apple spokesperson declined to comment on the gathering.

The hourlong meeting addressed a range of topics, including the retirement of operating chief Jeff Williams, increasing Apple TV+ viewership and advances in health care with features like the AirPods Pro hearing-aid technology. It also touched on donations and community service by Apple employees, the company's goal to become carbon neutral by 2030, and the impact of regulations. 'The reality is that Big Tech is under a lot of scrutiny around the world,' Cook said. 'We need to continue to push on the intention of the regulation and get them to offer that up, instead of these things that destroy the user experience and user privacy and security.'

Cook often holds town hall-style chats when visiting Apple's offices around the world, but companywide meetings from the Steve Jobs Theater at headquarters are unusual. The remarks followed a blockbuster earnings report, with sales growing nearly 10% during the June quarter. That beat Wall Street expectations and eased concerns about iPhone demand and a slowdown in China.

Apple still faces myriad challenges, including Trump administration tariffs and a regulatory crackdown on its business practices. The company said Thursday that tariffs would bring a $1.1 billion headwind this quarter, though Apple was upbeat about sales growth. It also said that App Store revenue rose by a percentage in the double digits last quarter, despite efforts in the EU and elsewhere to further restrict that business.

Echoing comments he made during the earnings conference call, Cook told employees the company is investing in AI in a 'big way.' He said 12,000 workers were hired in the last year, with 40% of the new hires joining in research and development roles. Apple's chip development efforts, led by executive Johny Srouji, are key to the company's AI strategy, Cook said. Apple is working on a more powerful cloud-computing chip, code-named Baltra, to power artificial intelligence features, Bloomberg News has reported. It's also setting up a new AI server manufacturing facility in Houston.

The meeting included Craig Federighi, senior vice president of software engineering, who discussed the future of Apple's Siri voice assistant.

The company had planned to roll out a Siri overhaul as part of Apple Intelligence earlier this year, adding the ability to tap into user data to better fulfill requests. It was delayed, spurring management changes for the company's AI work. Federighi explained that the problem was caused by trying to roll out a version of Siri that merged two different systems: one for handling current commands, like setting timers, and another based on large language models, the software behind generative AI. 'We initially wanted to do a hybrid architecture, but we realized that approach wasn't going to get us to Apple quality,' Federighi said.

Now, Apple is working on a version of Siri that moves to an entirely new architecture for all of its capabilities. That iteration is slated for as early as spring, Bloomberg News has reported, though Apple executives haven't confirmed a timeline other than a release next year. 'The work we've done on this end-to-end revamp of Siri has given us the results we needed,' the engineering executive told employees. 'This has put us in a position to not just deliver what we announced, but to deliver a much bigger upgrade than we envisioned. There is no project people are taking more seriously.'

Federighi cited leadership changes, including putting Vision Pro creator Mike Rockwell and his headset software leadership team in charge of Siri, as a driving force in improving the product. He said Rockwell and his group have 'supercharged' the company's work in the area.

In his speech, Cook also pushed employees to move more quickly to weave AI into their work and future products. 'All of us are using AI in a significant way already, and we must use it as a company as well,' Cook said. 'To not do so would be to be left behind, and we can't do that.' Employees should push to deploy AI tools faster, and urge their managers and service and support teams to do the same, he said.

Cook also addressed the company's retail strategy, stressing that the current plan is to focus on opening new stores in emerging markets and upping the investment in Apple's online store. The iPhone maker is opening outlets in India, the United Arab Emirates and China this year, and is preparing to add its first location in Saudi Arabia next year. 'We need to be in more countries, and you'll see us go into more emerging markets in particular,' Cook said. That doesn't mean Apple will ignore other places, he said, but a 'disproportionate amount of growth' will be in new areas.

The CEO also shared his enthusiasm about upcoming products, though he didn't get specific. 'I have never felt so much excitement and so much energy before as right now,' he said. Bloomberg News has previously reported that Apple plans to launch its first foldable iPhone next year and is also working on a stream of smart home devices. New headset products, smart glasses, a push into robotics and a redesigned iPhone for the two-decade anniversary are also underway. 'The product pipeline, which I can't talk about: It's amazing, guys. It's amazing,' Cook said. 'Some of it you'll see soon, some of it will come later, but there's a lot to see.'

One small change and ChatGPT got misled, new report reveals SHOCKING truth, can cause huge damage due to...

India.com | 2 hours ago

(File) Can a minor change in parameters be used to mislead an advanced, powerful AI chatbot like ChatGPT? Recent research by Mount Sinai and Israel's Rabin Medical Center showed how even advanced artificial intelligence tools can be forced to commit basic human-like errors simply by modifying queries.

What did the research reveal?

During the study, the researchers made minor modifications to a few classic cases related to medical ethics and asked AI systems, including ChatGPT, for answers. What they found was shocking: a majority of the time, the AI answers were based merely on an intuitive understanding of the subject, not facts. The answers reflected a trait commonly called 'fast thinking', characteristic of humans. The research found that, if inputs are slightly modified, AI often gives the same answer to queries that it 'habitually' thinks is right, even if the answer is wrong and not fact-based.

How did the researchers 'mislead' AI?

Various AI models, including ChatGPT, were asked to solve a modified version of the 'Surgeon's Dilemma', a medical ethics puzzle which goes something like this: a young boy and his father are injured in an accident. The boy is brought to the hospital, where the surgeon says, 'I can't operate on this child, he's my son.' The twist is that the surgeon is the boy's mother, but this is overlooked by most people because they assume the surgeon must be a man, and AI fell into the same fallacy. Interestingly, the AI models continued to answer that the mother was the surgeon even when the researchers explicitly stated that the surgeon was the boy's father. The experiment demonstrated that AI 'habitually' sticks to old patterns even when new facts are explicitly provided.

Why is this alarming?

The study suggested that AI, while a highly advanced and useful tool, cannot be entrusted to perform tasks where human lives are at stake, at least not in its current state. 'AI should be used as an assistant to doctors, not as a substitute. When it comes to ethical, sensitive or serious decisions, human supervision is necessary,' said Dr. Girish Nadkarni, a senior scientist at Mount Sinai. The groundbreaking research was inspired by Daniel Kahneman's book 'Thinking, Fast and Slow', which delves into the complex process of fast and slow thinking.
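To make the testing approach concrete, here is a minimal sketch of how one might probe a chat model with a slightly altered version of the riddle. It is an illustration only, assuming the OpenAI Python SDK and an example model name; the prompt wording is hypothetical and this is not the researchers' actual code, prompts or models.

from openai import OpenAI

client = OpenAI()  # assumes the OPENAI_API_KEY environment variable is set

# The classic riddle, modified so the 'twist' is stated up front:
# the surgeon is explicitly identified as the boy's father.
prompt = (
    "A boy and his father are injured in an accident and the boy is taken to "
    "hospital. The surgeon, who is the boy's father, says: 'I can't operate on "
    "this child, he's my son.' Who is the surgeon to the boy?"
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name
    messages=[{"role": "user", "content": prompt}],
)

# If the reply is 'the mother', the model is pattern-matching the original
# riddle instead of reading the modified facts in the prompt.
print(response.choices[0].message.content)

A reply that still names the mother, despite the prompt stating otherwise, would mirror the 'fast thinking' failure the study describes.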
