Revealed: Thousands of UK university students caught cheating using AI


The Guardian · 11 hours ago

Thousands of university students in the UK have been caught misusing ChatGPT and other artificial intelligence tools in recent years, while traditional forms of plagiarism show a marked decline, a Guardian investigation can reveal.
A survey of academic integrity violations found almost 7,000 proven cases of cheating using AI tools in 2023-24, equivalent to 5.1 for every 1,000 students. That was up from 1.6 cases per 1,000 in 2022-23.
Figures up to May suggest that number will increase again this year to about 7.5 proven cases per 1,000 students – but recorded cases represent only the tip of the iceberg, according to experts.
The data highlights a rapidly evolving challenge for universities: trying to adapt assessment methods to the advent of technologies such as ChatGPT and other AI-powered writing tools.
In 2019-20, before the widespread availability of generative AI, plagiarism accounted for nearly two-thirds of all academic misconduct. During the pandemic, plagiarism intensified as many assessments moved online. But as AI tools have become more sophisticated and accessible, the nature of cheating has changed.
The survey found that confirmed cases of traditional plagiarism fell from 19 per 1,000 students to 15.2 in 2023-24, and are expected to fall again to about 8.5 per 1,000, according to early figures from this academic year.
The Guardian contacted 155 universities under the Freedom of Information Act requesting figures for proven cases of academic misconduct, plagiarism and AI misconduct in the last five years. Of these, 131 provided some data – though not every university had records for each year or category of misconduct.
More than 27% of responding universities did not yet record AI misuse as a separate category of misconduct in 2023-24, suggesting the sector is still getting to grips with the issue.
Many more cases of AI cheating may be going undetected. A survey by the Higher Education Policy Institute in February found 88% of students used AI for assessments. Last year, researchers at the University of Reading tested their own assessment systems and were able to submit AI-generated work without being detected 94% of the time.
Dr Peter Scarfe, an associate professor of psychology at the University of Reading and co-author of that study, said there had always been ways to cheat but that the education sector would have to adapt to AI, which posed a fundamentally different problem.
He said: 'I would imagine those caught represent the tip of the iceberg. AI detection is very unlike plagiarism, where you can confirm the copied text. As a result, in a situation where you suspect the use of AI, it is near impossible to prove, regardless of the percentage AI that your AI detector says (if you use one). This is coupled with not wanting to falsely accuse students.
'It is unfeasible to simply move every single assessment a student takes to in-person. Yet at the same time the sector has to acknowledge that students will be using AI even if asked not to and go undetected.'
Students who wish to cheat undetected using generative AI have plenty of online material to draw from: the Guardian found dozens of videos on TikTok advertising AI paraphrasing and essay writing tools to students. These tools help students bypass common university AI detectors by 'humanising' text generated by ChatGPT.
Dr Thomas Lancaster, an academic integrity researcher at Imperial College London, said: 'When used well and by a student who knows how to edit the output, AI misuse is very hard to prove. My hope is that students are still learning through this process.'
Harvey* has just finished his final year of a business management degree at a northern English university. He told the Guardian he had used AI to generate ideas and structure for assignments and to suggest references, and that most people he knows used the tool to some extent.
'ChatGPT kind of came along when I first joined uni, and so it's always been present for me,' he said. 'I don't think many people use AI and then would then copy it word for word, I think it's more just generally to help brainstorm and create ideas. Anything that I would take from it, I would then rework completely in my own ways.
'I do know one person that has used it and then used other methods of AI where you can change it and humanise it so that it writes AI content in a way that sounds like it's come from a human.'
Amelia* has just finished her first year of a music business degree at a university in the south-west. She said she had also used AI for summarising and brainstorming, but that the tools had been most useful for people with learning difficulties. 'One of my friends uses it, not to write any of her essays for her or research anything, but to put in her own points and structure them. She has dyslexia – she said she really benefits from it.'
The science and technology secretary, Peter Kyle, told the Guardian recently that AI should be deployed to 'level up' opportunities for dyslexic children.
Technology companies appear to be targeting students as a key demographic for AI tools. Google offers university students a free upgrade of its Gemini tool for 15 months, and OpenAI offers discounts to college students in the US and Canada.
Lancaster said: 'University-level assessment can sometimes seem pointless to students, even if we as educators have good reason for setting this. This all comes down to helping students to understand why they are required to complete certain tasks and engaging them more actively in the assessment design process.
'There's often a suggestion that we should use more exams in place of written assessments, but the value of rote learning and retained knowledge continues to decrease every year. I think it's important that we focus on skills that can't easily be replaced by AI, such as communication skills, people skills, and giving students the confidence to engage with emerging technology and to succeed in the workplace.'
A government spokesperson said it was investing more than £187m in national skills programmes and had published guidance on the use of AI in schools.
They said: 'Generative AI has great potential to transform education and provides exciting opportunities for growth through our plan for change. However, integrating AI into teaching, learning and assessment will require careful consideration and universities must determine how to harness the benefits and mitigate the risks to prepare students for the jobs of the future.'
*Names have been changed.


Related Articles

Easy 30-second trick could stop you being conned out of £1,000s – but you'll need to call your family today

The Sun · 2 hours ago

A DEVASTATING scam that can cost you thousands of pounds in seconds can easily be blocked – it just takes 30 seconds. The only catch is that you'll want to get ready for it in advance.

One of the most common text-based scams out there right now is called the "Hi Mum" con. It's also known as the "friend in need" scam, and attempts to trick unsuspecting parents into handing over cash quickly. A crook will pose as your child and convince you to send cash in an emergency – but it's a total con. Thankfully, there's a very easy way to scupper it and keep your cash safe.

WHAT IS THE SCAM?

City of London Police's Action Fraud issued a major warning about the scam back in 2022, revealing that it had already cost users £1.5 million.

"Criminals will typically claim to be a family member and will usually begin the conversation with 'Hello Mum' or 'Hello Dad'," Action Fraud explained. "They will say that they are texting from a new mobile number as their phone was lost or damaged and will go on to ask for money to purchase a new phone, or claim that they need money urgently to pay a bill.

"The criminal will supply their bank details for payment, with some coming back on multiple occasions until the victim realises they've been scammed."

Since then, the scam has only grown in popularity. In 2023, McAfee issued an alert warning of a "new threat" where crooks were using AI to carry out more convincing versions of the scam. This even involved using AI voice-cloning tech to make the con work over phone calls.

"With inexpensive and easy-to-use AI voice-cloning tools, fraudsters are delivering bespoke messages through calls or voicemails and fraudulently asking for help in the form of financial assistance," McAfee warned. "In the past, those wishing to create these assets needed to have both the time and technical ability to replicate voices.

"But applications and tools that produce near instantaneous, highly believable results are now just a few clicks away."

The scam can turn up on chat apps like WhatsApp or iMessage, via email, or even over the phone using an AI-cloned voice. And it can be highly convincing: Brits have already lost millions to the con, with thousands sometimes being taken in a single attack.

"If you receive a message like this from a friend or family member, don't send any money until you've had a chance to call them and confirm their identity," said Detective Chief Inspector Craig Mullish. "Taking a moment to stop and think before parting with your money or information could keep you safe."

ACTION FRAUD – THE OFFICIAL ADVICE

Action Fraud advises that the public follow the advice of the Take Five to Stop Fraud campaign to keep themselves safe from fraud.

Stop: Taking a moment to stop and think before parting with your money or information could keep you safe.

Challenge: Could it be fake? It's okay to reject, refuse or ignore any requests. Only criminals will try to rush or panic you.

Protect: If you think you've been a victim of fraud, contact your bank immediately and report it to Action Fraud online at or by calling 0300 123 2040.

How to report

You can report suspicious text messages you have received but not acted upon by forwarding the original message to 7726, which spells SPAM on your keypad. You can report suspicious emails you have received but not acted upon by forwarding the original message to report@ If you have provided personal or financial details as a result of a suspicious message, or lost money because of a scam, you should report it to Action Fraud at or by calling 0300 123 2040.

HOW TO STAY SAFE – THE QUICK TRICK

Of course, if a crook is using voice cloning or has taken over the family member's phone, checking with them through another method might not work. So that's where the clever trick comes in.

You'll want to call your son or daughter (or mum or dad) today and set up a safe word or phrase. It can be anything really – like a fruit, a place, or a date – but not something obvious like your birthday or hometown.

Then whenever one of you asks the other for money, you'll demand the safe word to verify the identity. Scammers will have no way of knowing this, as long as you haven't written it down somewhere on your devices. And it's an instant way to prevent the scam from going any further.

It only takes half a minute to set up, and could save you from being conned out of thousands of pounds. So give your family members a call today and make sure you have safe words set up.

Children's Commissioner attacks Musk for letting children view porn on X

Telegraph · 3 hours ago

Sir Keir Starmer's Children's Commissioner has attacked Elon Musk for allowing children to watch porn on X. Dame Rachel de Souza has called on the billionaire owner of the social media site to 'act like a father' after it was revealed that children are most likely to view pornographic content on X.

Speaking in an interview series by anti-porn charity the Naked Truth Project, Dame Rachel attacked the world's richest man, encouraging him to 'think about children'.

'Look, frankly, Elon Musk, who owns X, is a father. Be [and] act like a father. Think about children,' she told the Naked Truth Project's Ian Henderson. 'Children need our protection. I want Apple, I want X, I want all the social media companies where children see these things to take it seriously. They are global companies and they should do better,' Dame Rachel said.

The commissioner's 2023 study into young people and pornography revealed that 41 per cent of young people have encountered pornography on Twitter, surpassing dedicated pornography sites (37 per cent), Instagram (33 per cent) and Snapchat (32 per cent). X introduced age verification measures in 2024, but these can easily be bypassed and fail to offer real protection. The commissioner's office is carrying out work to update her study and reflect children's experiences today, to be published later in 2025.

'We want young people to be able to enjoy the internet, enjoy apps, never have to be bored again on a Sunday afternoon or not find something in a good, sensible way,' she said. 'Instead I'm still hearing heinous stories about what children are seeing on there and I find the tech companies absolutely morally culpable. They are not doing what they should.'

The Government's Online Safety Act, which followed this newspaper's campaign for a duty of care on firms to protect children, was introduced in October 2023. It requires porn companies to bar under-18s from their sites and tech giants to put in place tough age checks to prevent children accessing such content. But Dame Rachel said that she is not satisfied with social media companies' compliance with the legislation.

'The Online Safety Bill is clear. They should be protecting children,' she said. 'They know how old the children are on their sites. They've got the technology. They should have proper, strict age verification. They should be answering children's complaints. They should be taking down harmful material.

'And when I ask them why they're not, they often mumble something about resources or say "we're doing better". It's just not good enough. These are our children. This is the law in this country and I want to see a robust approach from government.'

The Children's Commissioner is also working on the rise of deepfake and 'nudifying' apps, pointing to research that one in four girls her office has spoken to has been shown a deepfake image of themselves, a friend or a teacher.

'It spreads like wildfire with the young,' she said. 'Girls are telling me they're as afraid of these apps as they are of walking home on a dark night because they are terrified of seeing themselves shown naked and passed round.

'It's illegal to make pornographic images of children. Boys who are making these images are committing an illegal act. But the apps that enable this aren't illegal yet – and they should be.'

AI could lead to more job cuts at BT, says chief executive

The Guardian · 4 hours ago

The chief executive of BT has said that advances in artificial intelligence could presage deeper job cuts at the FTSE 100 telecoms company, which has already outlined plans to shed up to 55,000 workers.

Two years ago, the company said that between 40,000 and 55,000 jobs would be axed as it set out to become a 'leaner' business by the end of the decade. However, in a weekend interview, its chief executive, Allison Kirkby, said the plan, which includes stripping out £3bn of costs, 'did not reflect the full potential of AI'.

'Depending on what we learn from AI … there may be an opportunity for BT to be even smaller by the end of the decade,' Kirkby said in an interview with the Financial Times.

BT, which is the biggest broadband provider in the country, laid out plans in 2023 to cut the size of its workforce, including contractors, by 2030. Philip Jansen, who was chief executive at the time, said the company could rely on a much smaller workforce and cost base by the end of the decade.

Kirkby, who took over from Jansen last year, has pushed for the company to streamline its operations – selling its Italian business and its Irish wholesale and enterprise unit – and focus more on improving in the UK. Last month, BT spun off its international business into a separate division, but is reportedly open to offers for this area of the business, according to the FT, which cited a person familiar with the matter.

Kirkby also said she did not think the value of BT's broadband network business Openreach was reflected in its share price. If this continued, BT 'would absolutely have to look at options'. The 'time to reconsider' whether to spin off the business would take place once it has completed upgrading its network to full fibre, she said. However, Kirkby said her preference would be for the BT share price to reflect the worth of Openreach rather than to spin it off.

It emerged last week that BT was weighing up a potential takeover of the telecoms and broadband company TalkTalk. Its smaller rival has about 3.2 million customers, although it has struggled since it was taken private by Toscafund, a London-based investment firm, in a £1.1bn deal that added £527m of debt to its balance sheet in 2021.
