
AI gets a seat in IITs and IIMs, guidelines here
"There's no denying AI is here to stay, and the real question is not if it should be used, but how. Students are already using it to support their learning, so it's vital they understand both its strengths and its limits, including ethical concerns and the cognitive cost of over-reliance," said Professor Dr Srikanth Sugavanam, IIT Mandi, responding to a question to India Today Digital."Institutions shouldn't restrict AI use, but they must set clear guardrails so that both teachers and students can navigate it responsibly," he further added.INITIATIVE BY IIT DELHIIn a changing but firm step, IIT Delhi has issued guidelines for the ethical use of AI by students and faculty. The institute conducted an internal survey before framing them. What they found was striking.Over 80 percent of students admitted to using tools like ChatGPT, GitHub Copilot, Perplexity AI, Claude, and Chatbots.On the other hand, more than half the faculty members said they too were using AI -- some for drafting, some for coding, some for academic prep.The new rules are not about banning AI. It is more about drawing a line that says: use it, but don't outsource your thinking.ON CAMPUS, A SHIFT IS UNDERWAYAt IIM Jammu, students say the policy is strict: no more than 10 percent AI use is allowed in any assignment.One student put it simply: "We're juggling lectures, committees, and eight assignments in three months. Every day feels like a new ball added to the juggling act. In that heat, AI feels like a bit of rain."They're not exaggerating. There are tools now that can read PDFs aloud, prepare slide decks, even draft ideas. The moment you're stuck, you can 'chat' your way out. The tools are easy, accessible, and, for many, essential.advertisementBut here's the other side: some students now build their entire workflow around AI. They use AI to write, AI to humanise, AI to bypass AI detectors."Using of plagiarism detection tools, like Turnitin, which claim to detect the Gen-AI content. However, with Gen-AI being so fast evolving, it is difficult for these tools to keep up with its pace. We don't have a detailed policy framework to clearly distinguish between the ethical and lazy use of Gen-AI," said Prof Dr Indu Joshi, IIT Mandi.
NOT WHAT AI DOES, BUT WHAT IT REPLACES

At IIM Sambalpur, the administration isn't trying to hold back AI. It is embracing it. The institute divides AI use into three pillars:

- Cognitive automation: for tasks like writing and coding
- Cognitive insight: for performance assessment
- Cognitive engagement: for interaction and feedback

Students are encouraged to use AI tools, but with one condition: transparency. They must declare their sources. If AI is used, it must be cited. Unacknowledged use is academic fraud.

"At IIM Sambalpur, we do not prohibit AI tools for research, writing, or coding. We encourage students to use technology as much as possible to enhance their performance. AI is intended to help enhance, not shortcut," IIM Sambalpur Director Professor Mahadeo Jaiswal told India Today.

But even as tools evolve, a deeper issue is emerging: are students losing the ability to think for themselves?

MIT's recent research says yes: too much dependence on AI weakens critical thinking. It slows the brain's ability to analyse, compare, question, and argue. And these are the very skills institutions are supposed to build.

"AI has levelled the field. Earlier, students in small towns didn't have mentors or exposure. Now, they can train for interviews, get feedback, build skills, all online. But it depends how you use it," said Samarth Bhardwaj, an IIM Jammu student.

TEACHERS ARE UNDER PRESSURE TOO

The faculty are not immune any more. AI now plays mentor, performing tasks that even teachers cannot. With AI around, teaching methods must change.

The old model -- assign, submit, grade -- no longer works. Now, there's a shift toward 'guide on the side' teaching. Less lecture, more interaction. Instead of essays, group discussions. Instead of theory, hackathons.

It is all about creating real-world learning environments where students must think, talk, solve, and explain why they did what they did. AI can assist, but not answer for them.

SO, WHERE IS THE LINE?

There's no clear national rule yet. But the broad consensus across IITs and IIMs is this:

- AI should help, not replace.
- Declare what you used.
- Learn, don't just complete.

Experts like John J Kennedy, former dean at Christ University, say India needs a forward-looking framework: not one that fears AI, but one that defines boundaries, teaches ethics, and rewards original thinking.

Today's students know they can't ignore AI. Not in tier-1 cities. Not in tier-2 towns either.

Institutions will keep debating policies. Tools will keep evolving. But for students, and teachers, the real test will be one of discipline, not access. Of intent, not ability.

Because AI can do a lot. But it cannot ask the questions that matter.
Related Articles


Scroll.in
As young Indians turn to AI 'therapists', how confidential is their data?
This is the second of a two-part series. Read the first here.

Imagine a stranger getting hold of a mental health therapist's private notes – and then selling that information to deliver tailored advertisements to their clients. That's practically what many mental healthcare apps might be doing.

Young Indians are increasingly turning to apps and artificial intelligence-driven tools to address their mental health challenges – but have limited awareness about how these digital tools process user data.

In January, the Centre for Internet and Society published a study based on 45 mental health apps – 28 from India and 17 from abroad – and found that 80% gathered user health data that they used for advertising and shared with third-party service providers. An overwhelming number of these apps, 87%, shared the data with law enforcement and regulatory bodies.

The first article in this series had reported that some of these apps are especially popular with young Indian users, who rely on them for quick and easy access to therapy and mental healthcare support. Users had also told Scroll that they turned to AI-driven technology, such as ChatGPT, to discuss their feelings and get advice, however limited this may be compared to interacting with a human therapist. But they were not especially worried about data misuse. Keshav*, 21, reflected a common sentiment among those Scroll interviewed: 'Who cares? My personal data is already out there.'

The functioning of Large Language Models, such as ChatGPT, is already under scrutiny. LLMs are 'trained' on vast amounts of data, either from the internet or provided by their trainers, to simulate human learning, problem-solving and decision-making.

Sam Altman, CEO of OpenAI, which built ChatGPT, said on a podcast in July that though users talk about personal matters with the chatbot, there are no legal safeguards protecting that information.

'People use it – young people, especially, use it – as a therapist, a life coach; having these relationship problems and [asking] what should I do?' he asked. 'And right now, if you talk to a therapist or a lawyer or a doctor about those problems, there's legal privilege for it. There's doctor-patient confidentiality, there's legal confidentiality, whatever. And we haven't figured that out yet for when you talk to ChatGPT.'

He added: 'So if you go talk to ChatGPT about your most sensitive stuff and then there's like a lawsuit or whatever, we could be required to produce that, and I think that's very screwed up.'

Therapists and experts said the ease of access of AI-driven mental health tools should not sideline privacy concerns.

Clinical psychologist Rhea Thimaiah, who works at Kaha Mind, a collective that provides mental health services, emphasised that confidentiality is an essential part of the process of therapy. 'The therapeutic relationship is built on trust and any compromise in data security can very possibly impact a client's sense of safety and willingness to engage,' she said. 'Clients have a right to know how their information is being stored, who has access, and what protections are in place.'

This is more than mere data – it is someone's memories, trauma and identity, Thimaiah said. 'If we're going to bring AI into this space, then privacy shouldn't be optional, it should be fundamental.'

Srishti Srivastava, founder of AI-driven mental health app Infiheal, said that her firm collects user data to train its AI bot, but users can access the app even without signing up and can also ask for their data to be deleted.
Dhruv Garg, a tech policy lawyer at the Indian Governance and Policy Project, said the risk lies not just in apps collecting data but in the potential downstream uses of that information.

'Even if it's not happening now, an AI platform in the future could start using your data to serve targeted ads or generate insights – commercial, political, or otherwise – based on your past queries,' said Garg. 'Current privacy protections, though adequate for now, may not be equipped to deal with each new future scenario.'

India's data protection law

For now, personal data processed by chatbots is governed by the Information Technology Act framework and the Sensitive Personal Data Rules, 2011. Section 5 of the sensitive data rules says that companies must obtain consent in writing before collecting or using sensitive information. According to the rules, information relating to health and mental health conditions is considered sensitive data. There are also specialised sectoral data protection rules that apply to regulated entities like hospitals.

The Digital Personal Data Protection Act, passed by Parliament in 2023, is expected to be notified soon. But it exempts publicly available personal data from its ambit if this information has voluntarily been disclosed by an individual. Given the black market of data intermediaries that publish large volumes of personal information, it is difficult to tell what personal data in the public domain has been made available 'voluntarily'.

The new data protection act does not have different regulatory standards for specific categories of personal data – financial, professional, or health-related, Garg said. This means that health data collected by AI tools in India will not be treated with special sensitivity under this framework. 'For instance, if you search for symptoms on Google or visit WebMD, Google isn't held to a higher standard of liability just because the content relates to health,' said Garg. WebMD provides health and medical information.

It might be different for AI tools explicitly designed for mental healthcare – unlike general-purpose models like ChatGPT. These, according to Garg, 'could be made subject to more specific sectoral regulations in the future'.

However, the very logic on which AI chatbots function – responding based on user data and inputs – could itself be a privacy risk. Nidhi Singh, a senior research analyst and programme manager at Carnegie India, an American think tank, said she has concerns about how tools like ChatGPT customise responses and remember user history – even though users may appreciate those functions.

Singh said India's new data protection law is quite clear that any data made publicly available by putting it on the internet is no longer considered personal data. 'It is unclear how this will apply to your conversations with ChatGPT,' she said.

Without specific legal protections, there's no telling how an AI-driven tool will use the data it has gathered. According to Singh, without a specific rule designating conversations with generative AI as an exception, it is likely that a user's interactions with these AI systems won't be treated as personal data and consequently will not fall under the purview of the act.

Who takes legal responsibility?

Technology firms have tried hard to evade legal liability for harm. In Florida, a lawsuit by a mother has alleged that her 14-year-old son died by suicide after becoming deeply entangled in an 'emotionally and sexually abusive relationship' with a chatbot.
In the case of misdiagnosis or harmful advice from an AI tool, legal responsibility is likely to be analysed in court, said Garg. 'The developers may argue that the model is general-purpose, trained on large datasets, and not supervised by a human in real time,' he said. 'Some parallels may be drawn with search engines – if someone acts on bad advice from search results, the responsibility doesn't fall on the search engine, but on the user.'

Highlighting the urgent need for a conversation on sector-specific liability frameworks, Garg said that for now, the legal liability of AI developers will have to be assessed on a case-to-case basis. 'Courts may examine whether proper disclaimers and user agreements were in place,' he said.

In another case, Air Canada was ordered to pay compensation to a customer who was misled by its chatbot regarding bereavement fares. The airline had argued that the chatbot was a 'separate legal entity' and therefore responsible for its own actions.

Singh of Carnegie India said that transparency is important and that user consent should be meaningful. 'You don't need to explain the model's source code, but you do need to explain its limitations and what it aims to do,' she said. 'That way, people can genuinely understand it, even if they don't grasp every technical step.'

AI, meanwhile, is here for the long haul. Until India can expand its capacity to offer mental health services to everyone, Singh said, AI will inevitably fill that void. 'The use of AI will only increase as Indic language LLMs are being built, further expanding its potential to address the mental health therapy gap,' she said.


Mint
OpenAI CEO Sam Altman warns ChatGPT users of 'capacity crunches' ahead of GPT-5 launch: 'Bear with us'
OpenAI CEO Sam Altman has asked ChatGPT users for some patience, as the company's upcoming feature launches and new model releases could lead to some 'probable hiccups and capacity crunches'.

While Altman did not clarify which new models he was talking about, the OpenAI CEO has earlier announced that the company is looking to launch its state-of-the-art GPT-5 model soon. OpenAI is also looking to release its first open-weights model this month.

Making the announcement in a post on X (formerly Twitter), Altman wrote, 'we have a ton of stuff to launch over the next couple of months--new models, products, features, and more. please bear with us through some probable hiccups and capacity crunches. although it may be slightly choppy, we think you'll really love what we've created for you!'

A recent report from The Verge had revealed that OpenAI could release its GPT-5 model in early August. The new model will be the first LLM by the ChatGPT maker to come with unified reasoning capabilities, meaning users will not be asked to choose a reasoning model from the model picker for harder reasoning tasks. Instead, GPT-5 will unify the o-series and GPT-series models by knowing when to think for a long time and when not to, and will 'generally be useful for a very wide range of tasks'.

In a recent podcast, Altman teased new capabilities of GPT-5, recalling a question the model answered that he himself could not. 'I was testing our new model and I got a question. I got emailed a question that I didn't quite understand. And I put it in the model, this GPT-5, and it answered it perfectly, and I really kind of sat back in my chair and I was just like, oh man, here it is moment. And I got over it quickly,' he said.

'I felt useless relative to the AI in this thing that I felt like I should have been able to do and I couldn't and it was really hard, but the AI just did it like that,' he added.

GPT-5 free-tier users will get unlimited chats at the standard intelligence setting, while Plus subscribers will get the ability to run GPT-5 at a 'higher level of intelligence' and Pro subscribers will be able to run the latest model at an 'even higher level of intelligence', Altman had said earlier in the year.


Times of India
Brands press enter: GEO to show up more in AI searches
Brands are having to rethink traditional search engine optimisation (SEO) strategies to ensure visibility. Enter GEO.

As users turn to AI chatbots for queries and traditional search engines like Google become increasingly AI-powered, startups such as Asva AI and Siftly are offering generative engine optimisation (GEO) and large language model (LLM) optimisation services that tweak content to improve visibility on AI-powered search engines and generative AI models.

Unlike conventional search engines, which rely on keywords, LLMs respond to prompts and generate curated answers. This is prompting brands to rethink their SEO strategies to ensure visibility in AI responses. For instance, intimacy wellness brand MyMuse said it's seeing a 10% increase in its monthly searches on ChatGPT since it started focusing on GEO.

Siftly, a Y Combinator-backed startup founded in 2021, offers GEO services to business-to-business, business-to-consumer and direct-to-consumer companies.

'Every LLM relies on some form of search engine under the hood,' said Chalam PVS, cofounder and CEO of Siftly. 'When a user enters a prompt, the model often queries multiple search engines in real time, scans the results, interprets the content, and then summarises it, all within a few seconds.'

He added that Siftly has analysed thousands of prompts and found that ChatGPT's results overlap only 61% with Google and 68% with Bing. 'To consistently show up on LLMs like ChatGPT, Perplexity and Gemini, you need platform-specific strategies — traditional SEO alone doesn't cut it,' he said.
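The search-then-summarise loop Chalam PVS describes can be pictured with a minimal sketch. This is an illustration only, not Siftly's or any platform's actual system; every function below (search_google, search_bing, summarise_with_llm) is a hypothetical stub so the example runs on its own.

```python
# Minimal sketch of a search-then-summarise pipeline, as described above.
# All functions are hypothetical stand-ins, not a real vendor or model API.

def search_google(query: str) -> list[str]:
    # Stub for a real search API call; returns result URLs.
    return ["https://example.com/a", "https://example.com/b", "https://example.com/c"]

def search_bing(query: str) -> list[str]:
    return ["https://example.com/b", "https://example.com/c", "https://example.com/d"]

def summarise_with_llm(query: str, sources: list[str]) -> str:
    # Stub for the generative step that reads the results and answers.
    return f"Curated answer to {query!r}, drawing on {len(sources)} sources."

def overlap(a: list[str], b: list[str]) -> float:
    # One plausible reading of the 61%/68% overlap figures quoted above:
    # the share of one engine's results that also appear in the other's.
    return len(set(a) & set(b)) / len(a)

def answer(query: str):
    # Fan the prompt out to several engines, pool and deduplicate the
    # results, then summarise them into a single curated answer.
    per_engine = {fn.__name__: fn(query) for fn in (search_google, search_bing)}
    pooled = sorted({url for urls in per_engine.values() for url in urls})
    return summarise_with_llm(query, pooled), per_engine

if __name__ == "__main__":
    reply, per_engine = answer("best ayurvedic wellness brands")
    print(reply)
    print("overlap:", overlap(per_engine["search_google"], per_engine["search_bing"]))
```

Because the engines only partially agree, content that ranks on Google alone can miss the pooled set entirely, which is why platform-specific GEO strategies diverge from classic SEO.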
On platforms such as ChatGPT, Google Gemini, Claude and Perplexity, brands gain visibility in two ways: through the actual answer generated and through the sources cited in those answers.

Asva AI is another startup helping companies improve their presence on these models. 'We help brands get discovered, understand their LLM traffic, and recommend strategies on how they can improve it,' said Viren Inaniyan, cofounder of Asva AI.

Users are increasingly turning to LLMs because they want curated, direct responses instead of long lists of links. 'For instance, if users search for travel planning on ChatGPT, it will suggest flights, hotels and restaurants. All brands in these categories now want to be cited in the model's answer,' he said.

Currently, brands are not charged for visibility on LLM platforms, but Inaniyan expects monetisation to start. Data suggests that LLM-based searches are likely to outpace plain vanilla Google searches by 2028. Google's AI Overview feature now has over 2 billion monthly users, the company said in its June quarter earnings call. This growing adoption of generative AI is prompting companies to prepare for an AI-led discovery environment.

'Many people still use Google search in India, but with the AI Overview feature giving a summary of the search query, most users don't scroll below,' said Aquibur Rahman, founder and CEO of Mailmodo, an email marketing platform. 'We are seeing an increase in search impressions but the click rate is decreasing.'

In the last six months, Mailmodo has seen a 15% decline in clicks from Google search. To tackle this, Rahman has started optimising his website for GEO.

Similarly, wellness brand Kerala Ayurveda is now working to show up in AI-powered search results. 'We have started working on GEO over the past couple of months and in the last two months, our traffic from ChatGPT has increased by 2.5x,' said chief product and tech officer Utkarsh.

Industry experts are of the view that LLMs are particularly well suited for specific user queries and private information-seeking behaviour. 'There are a lot of questions around intimacy products – how to use them, how to carry them, etc. Users now come to AI chatbots with a lot of queries like this. We think there is scope for brands like ours to pop up,' said Sahil Gupta, CEO of MyMuse.

Despite the momentum behind GEO, challenges remain. 'It's important to understand that ChatGPT, Perplexity and similar platforms don't provide any data around click-through rates like Google search does,' said Inaniyan of Asva AI. This means companies are often operating on guesswork, unlike in the traditional SEO era, when keyword rankings and traffic analytics helped guide strategy.

He added that users currently rely on AI chatbots mainly for information, rather than as gateways to external websites. 'Redirection isn't happening on these platforms because users receive the answers they need and then go elsewhere to make purchases,' he said.

But for brands navigating a shift in user behaviour from link-driven exploration to prompt-driven discovery, learning to optimise for LLMs is fast becoming essential. While the GEO playbook is still being written, startups and early adopters believe it could define the next wave of online visibility.
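Since the platforms expose no click-through data, measuring GEO progress mostly comes down to sampling prompts and checking how often a brand surfaces in the generated answers. The sketch below illustrates that idea under stated assumptions: ask_llm is a hypothetical stub standing in for a real chat-completion call, and the prompt set and brand name are illustrative only.

```python
# Rough sketch of prompt-sampling measurement for GEO: run a fixed prompt
# set against a model and count how often a brand is mentioned in the
# answers. ask_llm is a hypothetical stub, not a real API.

PROMPTS = [
    "Suggest tools for email marketing campaigns.",
    "What should I look for in an email marketing platform?",
    "Recommend an email marketing service for a small business.",
]

def ask_llm(prompt: str) -> str:
    # Stand-in for a real chat-completion call.
    return "You could try Mailmodo or another interactive email platform."

def mention_rate(brand: str, prompts: list[str]) -> float:
    # Fraction of sampled answers that mention the brand at all.
    answers = [ask_llm(p) for p in prompts]
    return sum(brand.lower() in a.lower() for a in answers) / len(prompts)

if __name__ == "__main__":
    print("Mailmodo mention rate:", mention_rate("Mailmodo", PROMPTS))
```

Tracked over time and across platforms, a mention-rate of this kind is about the closest proxy to a click-through rate that GEO practitioners currently have.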