If there's no escaping AI, should students even try?
If you listen to the optimists, you'll hear that artificial intelligence will give everyone a free friend, therapist or even doctor.
But unlike the search engines that came before, large language models (LLMs) such as ChatGPT can also produce highly customised answers to nuanced and specific questions. For all of the possible benefits, the 'expert in your pocket' is therefore also a huge problem for academic integrity.
Almost nine out of ten undergraduates already use AI to help with their assessments, a survey found this year, while a 2024 study revealed that, on average, chatbots received higher grades than real undergraduates. With LLM detectors widely seen as unfit for purpose, many therefore suspect cheating is now rife.
Some educationalists believe pen-and-paper tests, like GCSE exams, are the only credible way to ensure students still learn the content. Others, however, think coursework and at-home exams remain crucial to encourage responsible AI use.
'I am certain that most of my students use LLMs at some point to solve coding problems,' Professor Stephane Wolton says. 'But for me that is fine because in their real job they are going to use it too. Why would I forbid them from doing something they are going to do later?'
Wolton, who teaches political science at the London School of Economics (LSE), is among the first professors in the country to experiment with an innovative assessment design — one that he thinks will likely catch on.
For the final quarter of their grade, his students produce a short essay using an LLM. They are not assessed on the quality of that answer but instead on a justification of their prompts and a 1,000-word critique of the response.
This year Wolton asked postgraduates to see what a chatbot had to say about the claim that 'only a strong autocratic regime' could effectively tackle climate change. The marks, he says, were for explaining how and why the AI answer was unsatisfactory.
'Everybody is saying it's a revolution and I tend to believe that you cannot fight progress. I don't want us to be Luddites but to actually embrace this thing,' Wolton says. 'We don't know exactly how LLMs are going to be used in future, but they are going to be used — I want my students to have a headstart.'
Wolton says that in-person exams remain important to check that students learn the core concepts, but, drawing a comparison with the early internet era, he adds that it is also crucial to teach students to think critically about AI-generated content.
'From a pedagogical perspective this is quite important. My goal is to get them to use this tool intelligently. I was surprised by how many students, probably two thirds, didn't think properly about how to use these tools,' he says. 'It's not changing the philosophy, it's changing what this philosophy is applied to.'
Most students appreciated his novel assessment method, according to Sophia Moore, an undergraduate who took one of his classes. 'It actually engages with the role of AI in academia instead of simply ignoring it,' she says.
'The way we write, think and learn is changing, as are the expectations in the job market — assessments should change too,' agrees Valeria Schell, a postgraduate student. 'The traditional model of cramming facts, writing them out under time pressure, and then forgetting it all a week later feels outdated.'
Schell was especially frustrated at classes that, in her view, had ignored the technological improvements of the past few years. 'Right now, students are often already using AI — just quietly. So instead of rewarding those who break the rules, do it skilfully and hide it well, why not make it part of the assignment?' she says.
Professor Christopher Tucci, who teaches digital strategy and innovation at Imperial College London, also updated his course shortly after the launch of ChatGPT in 2022.
Seeing how a colleague generated a book draft using an LLM, Tucci was convinced that bot-generated content wasn't going anywhere. Students will need to understand the technology's strengths and weaknesses in the real world, he argues, meaning it should be included at university.
An outright ban is 'a ship that sailed a long time ago', he adds. 'The most honest ones are not going to use it but everyone else is. That's going to put incredible pressure on them because it will be difficult to keep up.'
However, others push back against integrating AI into student assessments by pointing to potential drawbacks.
Despite using LLMs himself to help with translation and debugging code, Angelo Pirrone, a psychology lecturer at the University of Liverpool, doesn't think academics outside computer science should spend valuable classroom time teaching students how to use them.
'I don't buy into the idea that we should embrace whatever innovation comes our way. It seems to me that students are getting worse by the year so old school teaching and learning — reading, writing, having conversations — goes a long way,' Pirrone says.
'I feel we could have students engage with much better material. Should we succumb as a consequence of endemic cheating to this type of AI-centred learning? I don't think so.'
Some students on Wolton's course privately agree. One, speaking anonymously, expresses frustration at having spent hours researching LLM techniques rather than political economy.
Even Wolton himself cautions that time would run out for his method if the models become good enough to convincingly critique themselves. But the future is 'frightening and exciting', he believes — so perhaps we should try to keep up.