Nearly 7,000 UK University Students Caught Cheating Using AI: Report

NDTV · 8 hours ago

Nearly 7,000 university students in the UK were caught cheating using ChatGPT and other artificial intelligence tools during the 2023-24 academic year, according to data obtained by The Guardian. As part of the investigation, the British newspaper contacted 155 universities under the Freedom of Information Act. Of those, 131 institutions responded.
The latest figures show 5.1 confirmed cases of AI-related cheating for every 1,000 students, up from 1.6 per 1,000 the previous year. Early projections for the current academic year suggest the figure could climb even higher, to 7.5 per 1,000 students.
The growing reliance on AI tools like ChatGPT is proving to be a major challenge for higher education institutions. At the same time, cases of traditional plagiarism have dropped, from 19 per 1,000 students in 2019-20 to 15.2 last year, and are expected to fall further to 8.5 per 1,000.
Experts warn that the recorded cases may be only scratching the surface. "I would imagine those caught represent the tip of the iceberg," said Dr Peter Scarfe, associate professor of psychology at the University of Reading. "AI detection is very unlike plagiarism, where you can confirm the copied text. As a result, in a situation where you suspect the use of AI, it is near impossible to prove, regardless of the percentage AI that your AI detector says (if you use one). This is coupled with not wanting to falsely accuse students."
Evidence suggests AI misuse is far more widespread than reported. A February survey by the Higher Education Policy Institute found that 88 per cent of students admitted to using AI for assessments. Researchers at the University of Reading tested their own systems last year and found AI-generated submissions went undetected 94 per cent of the time.
Online platforms are making such misuse easier. The report found dozens of videos on TikTok promoting AI paraphrasing and essay-writing tools that help students bypass standard university detectors by "humanising" ChatGPT-generated content.
Dr Thomas Lancaster, an academic integrity researcher at Imperial College London, said, "When used well and by a student who knows how to edit the output, AI misuse is very hard to prove. My hope is that students are still learning through this process."
Science and technology secretary Peter Kyle told The Guardian that AI should be used to "level up" opportunities for dyslexic children.
Tech giants are already targeting students as key users. Google offers university students a free 15-month upgrade to its Gemini AI tool, while OpenAI provides discounted access to students in the US and Canada.

Related Articles

Sorry ScaleAI, Mark Zuckerberg investing billions in your company means it's 'Good Bye' from Google, Microsoft and some of the biggest technology companies

Time of India · 3 hours ago

Google, the largest customer of AI data-labeling startup Scale AI, is planning to sever ties with the company following Facebook parent Meta's acquisition of a 49% stake in Scale, valued at $29 billion, according to five sources familiar with the matter. The move, reported by Reuters, has raised concerns among Scale's major clients, including Microsoft and Elon Musk's xAI, who are also reportedly reconsidering their partnerships over fears of proprietary data being exposed to a key rival. The shift underscores growing concerns among AI developers about data security and competitive risks as industry giants like Meta deepen their influence in the AI ecosystem.

What is Meta's investment in ScaleAI?

Meta's $14.3 billion investment in Scale AI, a company previously valued at $14 billion, includes the transition of Scale's CEO, Alexandr Wang, to Meta, where he will lead efforts to develop 'superintelligence.' This has intensified worries among Scale's clients, particularly generative AI companies, that their sensitive research priorities and technical blueprints could be accessed by Meta through Scale's data-labeling operations. Google, which had planned to pay Scale $200 million this year, has already begun discussions with Scale's competitors to shift its workload, sources said. The company had been diversifying its data service providers for over a year, but Meta's investment has accelerated Google's push to exit all key contracts with Scale, a process that could move quickly due to the structure of data-labeling agreements. Microsoft and xAI are also pulling back, while OpenAI, a smaller Scale customer, scaled down its reliance months ago but will continue working with Scale as one of its many vendors, according to OpenAI's CFO.

Why Google and Microsoft leaving is bad news for ScaleAI

Scale AI, which serves self-driving car companies, the U.S. government, and generative AI firms, relies heavily on a few major clients. A Scale spokesperson emphasised that the company remains independent and committed to safeguarding customer data, stating, 'Our business remains strong, spanning major companies and governments.' However, the potential loss of key clients like Google could significantly impact Scale's operations.

Apple Paper questions path to AGI, sparks division in GenAI group

Economic Times · 4 hours ago

New Delhi: A recent research paper from Apple focusing on the limitations of large reasoning models in artificial intelligence has left the generative AI community divided, sparking significant debate over whether the current path taken by AI companies towards artificial general intelligence is the right one.

The paper, titled The Illusion of Thinking and published earlier this week, demonstrates that even the most sophisticated large reasoning models do not genuinely think or reason in a human-like way. Instead, they excel at pattern recognition and mimicry, generating responses that only appear intelligent but lack true comprehension or conceptual understanding. The study used controlled puzzle environments, such as the popular Tower of Hanoi puzzle, to systematically test reasoning abilities across varying complexities in large reasoning models such as OpenAI's o3 Mini, DeepSeek's R1, Anthropic's Claude 3.7 Sonnet and Google Gemini Flash. The findings show that while large reasoning and language models may handle simple or moderately complex tasks, they experience total failure when faced with high-complexity problems, despite having sufficient computational resources.

Gary Marcus, a cognitive scientist and a known sceptic of the claims surrounding large language models, views Apple's work as providing compelling empirical evidence that today's models primarily repeat patterns learned during training from vast datasets, without genuine understanding or true reasoning capabilities. "If you can't use a billion-dollar AI system to solve a problem that Herb Simon (one of the actual godfathers of AI, current hype aside) solved with AI in 1957, and that first semester AI students solve routinely, the chances that models like Claude or o3 are going to reach AGI seem truly remote," Marcus wrote in his blog. Marcus' arguments are also echoed in earlier comments by Meta's chief AI scientist Yann LeCun, who has argued that current AI systems are mainly sophisticated pattern recognition tools rather than true thinkers.

The release of Apple's paper ignited a polarised debate across the broader AI community, with many panning the design of the study rather than its findings. A published critique of the paper by researchers from Anthropic and San Francisco-based Open Philanthropy said the study has issues in its experimental design and overlooks the models' output limits. In an alternate demonstration, the researchers tested the models on the same problems but allowed them to use code, resulting in high accuracy across all the tested models. The critique of the study's failure to account for output limits and for the models' coding abilities has also been raised by other AI commentators and researchers, including Matthew Berman, a popular AI commentator and researcher. "SOTA models failed The Tower of Hanoi puzzle at a complexity threshold of >8 discs when using natural language alone to solve it. However, ask it to write code to solve it, and it flawlessly does up to seemingly unlimited complexity," Berman wrote in a post on X (formerly Twitter).

The study highlights Apple's more cautious approach to AI compared to rivals like Google and Samsung, who have aggressively integrated AI into their products.
Apple's research explains its hesitancy to fully commit to AI, contrasting with the industry's prevailing narrative of rapid progress. Some questioned the timing of the study's release, coinciding with Apple's annual WWDC event where it announces its next software, and comments across online forums said the study was more about managing expectations in light of Apple's own struggles with AI. That said, practitioners and business users argue that the findings do not change the immediate utility of AI tools for everyday applications.
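
For context on the Tower of Hanoi result quoted above: the puzzle has a textbook recursive solution, which is why models asked to write code for it succeed where step-by-step natural-language answers break down. The following minimal Python sketch of that standard algorithm is included purely as an illustration; it is not drawn from the Apple paper or the critique.

def hanoi(n, source, target, spare, moves):
    # Recursively build the list of moves that shifts n discs from source to target.
    if n == 0:
        return
    hanoi(n - 1, source, spare, target, moves)  # clear the n-1 smaller discs out of the way
    moves.append((source, target))              # move the largest remaining disc
    hanoi(n - 1, spare, target, source, moves)  # stack the smaller discs back on top

moves = []
hanoi(8, "A", "C", "B", moves)
print(len(moves))  # 255, i.e. 2**8 - 1 moves for eight discs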

Can AI offer the comfort of a therapist?

Time of India · 5 hours ago

One evening, feeling overwhelmed, 24-year-old Delhi resident Nisha Popli typed, 'You're my psychiatrist now,' into ChatGPT. Since then, she's relied on the AI tool to process her thoughts and seek mental support. 'I started using it in late 2024, especially after I paused therapy due to costs. It's been a steady support for six months now,' says Popli. Similarly, a 30-year-old Mumbai lawyer, who uses ChatGPT for various tasks like checking recipes and drafting emails, turned to it for emotional support. 'The insights and help were surprisingly valuable. I chose ChatGPT because it's already a part of my routine.'

With AI tools and apps available 24/7, many are turning to them for emotional support. 'More people are increasingly turning to AI tools for mental health support, tackling everything from general issues like dating and parenting to more specific concerns, such as sharing symptoms and seeking diagnoses,' says Dr Arti Shroff, a clinical psychologist. But what drives individuals to explore AI-generated solutions for mental health?

WHY USERS ARE USING AI

Therapy is expensive

'As someone who values independence, I found therapy financially difficult to sustain,' shares Popli, adding, 'That's when I turned to ChatGPT. I needed a safe, judgment-free space to talk, vent, and process my thoughts. Surprisingly, this AI offered just that — with warmth, logic, and empathy. It felt like a quiet hand to hold.'

People feel shy about in-person visits

Dr Santosh Bangar, senior consultant psychiatrist, says, 'Many people often feel shy or hesitant about seeking in-person therapy. As a result, they turn to AI tools to express their feelings and sorrows, finding it easier to open up to chatbots. These tools are also useful in situations where accessing traditional therapy is difficult.'

Nobody to talk to

Kolkata-based Hena Ahmed, a user of the mental health app Headspace, says she started using it after experiencing loneliness. 'I've been using Headspace for about a month now. The AI tool in the app helps me with personalised suggestions on which mindfulness practices I should follow and which calming techniques can help me overcome my loneliness. I was feeling quite alone after undergoing surgery recently and extremely stressed while trying to manage everything. It was responsive and, to a certain extent, quite helpful,' she shares.

Users see changes in themselves

The Mumbai-based 30-year-old corporate lawyer says, 'ChatGPT offers quick solutions and acts as a reliable sounding board for my concerns. I appreciate the voice feature for instant responses. It helps create mental health plans, provides scenarios, and suggests approaches for tackling challenges effectively.' 'My panic attacks have become rare, my overthinking has reduced, and emotionally, I feel more grounded. AI didn't fix me, but it walked with me through tough days—and that's healing in itself,' expresses Popli.

CAN AI REPLACE A THERAPIST?

Dr Arti expresses, 'AI cannot replace a therapist. Often, AI can lead to incorrect diagnoses since it lacks the ability to assess you in person. In-person interactions provide valuable non-verbal cues that help therapists understand a person's personality and traits.'
Echoing similar thoughts, Dr Santosh Bangar, senior consultant psychiatrist, says, 'AI can support mental health by offering helpful tools, but it shouldn't replace a therapist. Chatbots can aid healing, but for serious issues like depression, anxiety, or panic attacks, professional guidance remains essential for safe and effective treatment.'

DO CHATBOTS EXPERIENCE STRESS?

Researchers found that AI chatbots like ChatGPT-4 can show signs of stress, or 'state anxiety', when responding to trauma-related prompts. Using a recognised psychological tool, they measured how emotionally charged language affects AI, raising ethical questions about its design, especially for use in mental health settings. In another development, researchers at Dartmouth College are working to legitimise the use of AI in mental health care through Therabot, a chatbot designed to provide safe and reliable therapy. Early trials show positive results, with further studies planned to compare its performance with traditional therapy, highlighting AI's growing potential to support mental wellbeing.

ARE USERS CONCERNED ABOUT DATA PRIVACY?

While some users are reluctant to check whether the data they share during chats is secure, others approach it cautiously. Ahmed says she hasn't considered privacy: 'I haven't looked into the data security part, though. Moving forward, I'd like to check the terms and policies related to it.' In contrast, another user, Nisha, shares: 'I don't share sensitive identity data, and I'm cautious. I'd love to see more transparency in how AI tools safeguard emotional data.' The Mumbai-based lawyer adds, 'Aside from ChatGPT, we share data across other platforms. Our data is already prevalent online, whether through social media or email, so it doesn't concern me significantly.'

Experts say most people aren't fully aware of the security risks; there is a gap between what users assume is private and what these tools actually do. Pratim Mukherjee, senior director of engineering at McAfee, explains, 'Many mental health AI apps collect more than what you type—they track patterns, tone, usage, and emotional responses. This data may not stay private. Depending on the terms, your chat history could help train future versions or be shared externally. These tools may feel personal, but they gather data.'

'Even when users feel anonymous, these tools collect data like IP addresses, device type, and usage patterns. They store messages and uploads, which, when combined, can reveal personal patterns. This data can be used to create profiles for targeted content, ads, or even scams.'
Pratim Mukherjee, senior director of engineering, McAfee

Tips for protecting privacy with AI tools/apps
- Understand the data the app collects and how it's used
- Look for a clear privacy policy, opt-out options, and data deletion features
- Avoid sharing location data or limit it to app usage only
- Read reviews, check the developer, and avoid apps with vague promises

What to watch for in mental health AI apps
- Lack of transparency in data collection, storage, or sharing practices
- Inability to delete your data
- Requests for unnecessary permissions
- Absence of independent security checks
- Lack of clear information on how sensitive mental health data is used
