AI misuse in Indian colleges and how professors are arresting the situation
Straits Times, 15-06-2025
Even as most Indian universities forbid using generative AI for assignments, students are drawn to the time-saving, modern tool. PHOTO: JAMIE KELTER DAVIS/NYTIMES
When creative writing professor Pranav V.S. in Bengaluru congratulated his student over text about a cultural performance, he received a reply that began with: 'You can express your gratitude with a simple note or message. Here's a suggestion: Thank you so much, Sir, for your kind words...'
His heart sank.
'Can college students these days not even compose a personal thank-you message without getting help from artificial intelligence (AI) tools?' he wondered.
Prof Pranav, who teaches at St Joseph's University in Bengaluru, is not alone in his despair over the use of AI by students.
Even as most Indian universities forbid using generative AI for assignments, students are drawn to the time-saving, modern tool, many professors told The Straits Times.
They said that students write entire essays and e-mails, as well as produce presentations, using generative AI. Instead of taking class notes and reading books and technical modules, they feed the PDFs into their favourite AI tools to generate simple summaries of three or four pages.
Professors worry that students' use of AI disrupts the learning process, fosters impatience with studying and diminishes the importance of writing and reading.
Prof Pranav said it also drives a wedge between teachers and students.
'It's a shortcut for students. But for teachers, it's more laborious to separate the AI content from the students' content. The joy of teaching is gone,' said Dr Adil Hossain, who teaches history and sociology at Azim Premji University in Bengaluru.
Independent thinking is vanishing
Assistant Professor Ananya Mukherjee of Azim Premji University in Bhopal, who has taught biology for nine years, said that even though she picks controversial topics such as genetics and reproductive choice to draw out her students' genuine opinions, many of them use AI tools like ChatGPT and Gemini to come up with talking points for class discussions.
'Independent thinking, which is the whole point of science, is getting lost,' she said.
Assistant Professor Prem Sagar, who teaches computer applications at St Joseph's University in Bengaluru, told ST that he often faces the challenge of balancing his efforts between teaching his technical students how to build and train AI models for the future and discouraging their misuse in the present.
'AI is good at debugging errors and completing code, but when students depend on it entirely, their logical reasoning – which is what programming is all about – takes a hit,' he said.
Still, students defend their use of AI as natural and inevitable.
'What's wrong with using an efficient way to learn?' asked computer engineering graduate Tejas P.V., 22.
'AI saves time. It helps us research by locating references. For lengthy, boring 120-page documents that professors assign us, AI helps to identify the crucial 30 pages for us to focus on,' he added.
But while Mr Tejas said he used AI largely for research and expanding his 'own short points into full sentences', he admitted that he, too, has generated 'entire AI essays in high-credit subjects' that he felt were 'not important'.
Ms Keerthana S., 21, who is pursuing a bachelor's degree in environmental science in a Bengaluru college, said that 'ChatGPT is always a temptation', especially when deadlines are close.
In a group project to calculate the carbon footprint of shops within the college neighbourhood – a hyperlocal assignment her teacher had clearly crafted to force the students to avoid AI tools – Ms Keerthana attempted to use ChatGPT to generate what she called 'a cool introduction'.
'But the language was so technical, jargon-filled and so unlike my writing that I decided to write the introduction myself,' she said, adding that generative AI's high energy consumption also gives her second thoughts.
According to some estimates, interactions with AI tools such as ChatGPT could consume 10 times more electricity than a standard Google search.
Another engineering student said that he often uses AI tools to complete his programming code because, as he explained, 'even after I get a job, my bosses are not going to expect me to waste time on manually doing these basic things.'
According to English professor Greeshma Mohan, students use AI because of insecurities that their own writing and ideas are not good enough, and because AI 'sounds fancier'.
Teaching in an English-medium college in Bhopal in central India, where many students come from Hindi-medium schools, Prof Mohan said she empathised with their anxiety.
However, she is worried that 'if they didn't experiment without the use of AI and get things wrong', they would never learn.
Even after she began accepting fragmented sentences and inconsistent tenses, as long as the work was the students' own, she found that many were 'already too dependent on AI to stop using it'.
'How can I help a student whose mistakes I never see? Then what am I doing here as a teacher?' she asked.
See if you can penalise me for using AI
The fear of repercussions is often the sole deterrent to students against using AI, said many teachers.
Most major Indian universities require every instance of AI misuse to be reported, but as the scope of generative AI is still evolving, professors are also granted the flexibility to determine appropriate disciplinary measures.
Some faculty members are strict and will fail a student. Some ban all gadgets in class and assign only handwritten essays.
Others permit grammar corrections or AI-assisted research, while some require students to rewrite their essay multiple times until it is completely AI-free.
While most Indian universities use plagiarism trackers, and detecting AI use is now a function of software like Turnitin and the Indian-developed tracker DrillBit, they are not foolproof.
In November 2024, law student Kaustubh Shakkarwar sued O.P. Jindal Global University in northern India's Haryana state over being failed for allegedly using AI-generated content in an assignment on law and justice in the globalising world.
Claiming that he had done all the research himself, the student questioned the accuracy of the university's Turnitin plagiarism detection software, also powered by AI, and said it had a history of generating false positives.
The university finally issued Mr Shakkarwar a new academic transcript and revised its decision to fail him.
A practising lawyer today, he is ready to offer pro bono representation to 'any student who wants to sue their college over AI use'.
However, many professors said they have often detected AI use, even in cases where detection software had not. Common indicators included the use of em dashes, sentences beginning with 'that being said' and 'all things considered', and essays with a balance of opinions that seemed, well, artificial.
Some students also use humanising software like BypassGPT, WriteHuman and QuillBot to make AI-generated text read naturally and human-like, but many Indian students told ST that the best services were not affordable.
Most of all, teachers said they could tell if AI was used because they knew their students.
'All of a sudden, a student writes fascinating prose. Who are they kidding?' asked Bengaluru-based AMC Engineering College Professor Pallavi K.V., who now conducts oral quizzes on the students' own written assignments to determine if they have even read their AI-generated work before submission.
Indian professors are now devising assignments and pedagogical innovations to subvert the use of AI.
One anthropology lecturer asks for audio recordings of field interviews; a law professor crafts simulation exercises inspired by landmark cases; many others set live handwritten exams that students hate because they struggle to write longhand.
Dr Swathi Shivanand, who teaches historiography at the Manipal Academy of Higher Education in Bengaluru, said: 'I suppose I have more failures than successes (at weeding out AI use).'
An effective assignment she devised involved asking students to imagine a dialogue between two historical figures.
Professors suggested that the key to escaping AI is to make assignments as personal and imaginative as possible.
In Prof Pranav's writing class, during a workshop session on horror stories, including those written by AI, the standout piece was an original story set in the college with characters named after some of the students.
Ms Keerthana recalled 'a brilliant assignment' – one that few classmates used AI for – in her environmental impact assessment class, where a teacher asked them to map all the processes and components that went into making a sewing needle.
Optimal use of AI
Recognising the use of AI as inevitable, some professors are upskilling themselves to stay a few steps ahead of their students.
For instance, Assistant Professor Arpitha Jain, who teaches English at St Joseph's University in Bengaluru, said she gave her students printed copies of prescribed non-fiction readings. But, turning the tables, she used ChatGPT to generate multiple-choice questions for them to answer.
'They hated me for it, but, later, some of them applied this method (of generating short questions from long texts) to study other subjects closely,' she said.
Prof Sagar now trains other faculty in his university to use AI not just to build presentations and create lesson plans, but also to evaluate students and give more granular feedback, using data analytics tools that can spot patterns in performance and identify the modules a student is weak in.
Prof Pallavi, concerned that her students were unable to tell when AI content was 'wrong, biased, hallucinating or actually harmful', said she now advocates responsible, conscious use of AI.
Citing the example of job recruiters using data-driven AI models for resume screening, she showed how the technology's inherent gender and racial biases resulted in a higher selection rate for men over women for software developer roles. Using the example of the viral Ghibli image trend, she also warned her students that uploading their photos for a moment of fun could put their personal data at risk.
Dr Rahul Dass, a former journalist who now teaches at Mahindra University in Hyderabad, recently asked his class to give five different prompts to ChatGPT, Gemini and Copilot to generate an article about a major fire breaking out in a city.
'The AI outputs all described the fire, but none of the articles began with the number of people dead and injured, as a journalist would have. I want students to understand these kinds of gaps in using generative AI,' he said.
Rohini Mohan is The Straits Times' India Correspondent based in Bengaluru. She covers politics, business and human rights in the South Asian region.