Latest news with #AIcheating
Yahoo
19 hours ago
- Automotive
- Yahoo
Why Racing Games Are Lying Cheaters And How They Do It
There's a lot of anecdotal evidence that racing games cheat their human players. For example, on Steam's forum for Horizon Chase Turbo, one player writes: "Why the heck NPC cars don't follow the same rules as the player? Yesterday I was trying to win a 1-v-1 race from the season, like, 30 times, but the effing AI car kept bumping me in the back REPEATEDLY and then just speeding ahead. WTH? If the player bumps someone in the back, they get pushed behind big time—but not the AI, oh no." This is hardly a new complaint. Going all the way back to a 2010 post at The Escapist, another player writes: "I don't think I've played a single racing game where I don't get the impression the AI is cheating in some fashion or other. Usually it's stuff like the other cars/karts/ships always following a perfect path and never falling off the course, to blatantly accelerating to catch up and pass you (Super Mario Kart). I was just reading a review of Forza 2, and it seems even a sim like that does it."

Cheating is part of the game for Mario Kart, but Adam Ismail of The Drive (and former Jalopnik writer extraordinaire) points us to a video of Sega GT, the original Xbox competitor to the PlayStation's Gran Turismo before the Forza franchise existed. Thanks to a data overlay of the power multipliers applied to each car, we can see exactly how the game nerfs the front-runners and helps the backmarkers.

The Numbers Don't Lie

In this particular race, there is no human player; all the cars are computer-controlled. To its credit, the computer drives these cars much better than the real thing. At the bottom right of the screen is a display of the power multiplier being applied to each of the eight positions in the race. Other than before the race starts, none of them is ever 1, which shows that every car is faster than it should be. Each car is roughly twice as powerful as it should be for the first lap or so, except when cornering. As the race goes on, the multiplier drops significantly for the cars in front but remains high for the cars in back, giving them a significant advantage. No human player would stand a chance.

Granted, this is a game, not a simulator like iRacing, so it's probably unreasonable to expect much in the way of realism. Handicapping the field on the fly makes for closer racing in a dynamic way that no NASCAR restrictor plate can duplicate. But when it feels like everyone is faster than you, even if you're driving what's supposed to be a faster car, the game gets more frustrating than fun and isn't worth playing anymore. It seems we couldn't trust AI even back in the early days of racing games. Still, it could be worse. Nothing's as bad as the virtually unplayable Intellivision Auto Racing.
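Sega GT's actual code has never been published, but the behaviour the overlay shows (every multiplier starting near 2, shrinking for the leaders as the race goes on and staying high at the back) is the classic "rubber-banding" pattern. The sketch below is a minimal, purely illustrative Python rendering of that idea; the function name, constants and fade curve are assumptions for the example, not values taken from the game.

```python
# Minimal sketch of position-based rubber-banding, the pattern the Sega GT
# overlay suggests: every AI car gets a power multiplier above 1.0, and the
# boost fades for cars near the front while backmarkers keep it.
# All names and numbers here are illustrative, not data from the game.

def power_multiplier(position: int, field_size: int, race_fraction: float) -> float:
    """Return a hypothetical engine-power multiplier for an AI car.

    position      -- current race position, 1 = leader
    field_size    -- number of cars in the race
    race_fraction -- 0.0 at the start, 1.0 at the finish
    """
    base_boost = 2.0   # roughly double power early on, as the overlay shows
    min_boost = 1.1    # leaders still end up slightly faster than stock
    rank = (position - 1) / max(field_size - 1, 1)  # 0.0 = leader, 1.0 = last
    fade = race_fraction * (1.0 - rank)             # only the front of the field fades
    return max(min_boost, base_boost - (base_boost - min_boost) * fade)


if __name__ == "__main__":
    # Halfway through the race: the leader's boost has shrunk, P8 still has ~2x.
    for pos in range(1, 9):
        print(pos, round(power_multiplier(pos, 8, 0.5), 2))
```

With these made-up constants, the whole field starts at 2.0x (matching the "about twice as powerful for the first lap" observation) and only the front-runners lose their advantage as the race progresses.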


The Guardian
23-06-2025
- Science
- The Guardian
There's no simple solution to universities' AI worries
I enjoyed the letter from Dr Craig Reeves (17 June) in which he argues that higher education institutions are consciously choosing not to address widespread cheating using generative AI so as not to sacrifice revenues from international students. He is right that international students are propping up the UK's universities, of which more than two-fifths will be in deficit by the end of this academic year. But it is untrue that universities could simply spot AI cheating if they wanted to. Dr Reeves says that they should use AI detectors, but the studies that he quotes rebut this argument. The last study he cites (Perkins et al, 2024) shows that AI detectors were accurate in fewer than 40% of cases, and that this fell to just 22% in 'adversarial' cases, when the use of AI was deliberately obscured. In other words, AI detectors failed to spot that AI had been used roughly three-quarters of the time. That is why it is wrong to say there is a simple solution to the generative AI problem. Some universities are pursuing academic misconduct cases with verve against students who use AI. But because AI leaves no trace, it is almost impossible to show definitively that a student used AI unless they admit it. In the meantime, some institutions are switching to 'secure' assessments, such as the in-person exams he celebrates. Others are designing assessments on the assumption that students will use AI. No one is saying universities have got everything right. But we shouldn't assume conspiracy when confusion is the simpler explanation.
Josh Freeman
Policy manager, Higher Education Policy Institute; author, Student Generative AI Survey 2025

The use of AI to 'write' things in higher education has prompted significant research and discussion in institutions, and the accurate reporting of that research is obviously important. Craig Reeves mentions three papers in support of the Turnitin AI checker, claiming that universities opted out of this function without testing it because of fears over false-positive flagging of human-written texts as AI-generated. One of those papers says: 'The researchers conclude that the available detection tools are neither accurate nor reliable and have a main bias towards classifying the output as human-written rather than detecting AI-generated text' (Weber-Wulff et al, 2023); and a second found Turnitin to be the second worst of the seven AI detectors tested for flagging AI-generated texts, with 84% going undetected (Perkins et al, 2024). An AI detector can easily avoid false positives by not flagging any texts at all. We need to think carefully about how we are going to assess work when, at a click, almost limitless superficially plausible text can be generated.
Paul Johnson
University of Chester

In an otherwise well-thought-out critique of the apparent (and possibly convenient) blind spot higher education has for the use of AI, Craig Reeves appears to be encouraging a return to traditional examinations as a means of rooting out the issue. While I sympathise (and believe strongly that something should be done), I hope that this return to older practices will not happen in a 'one size fits all' manner. I have marked examinations for well over 30 years. During that period I have regularly been impressed by students' understanding of a topic, but I can remember enjoying reading only one examination essay. The others, no matter how good, read like paranoid streams of consciousness. A central transferable skill that degrees in the humanities offer is the ability to write well and cogently about any given topic after research. Examinations don't – can't – offer that. I would call for a move towards more analytical assessment, where students are faced with new material that must be considered in a brief period. I think that the move away from traditional essays as the sole form of assessment might help to lessen (not, of course, halt) the impact of external input. From experience, this focus also helps students move towards application of new understanding rather than passive digestion of material.
Robert McColl Millar
Chair in linguistics and Scottish language, University of Aberdeen
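Paul Johnson's aside about false positives is the familiar trade-off between precision and recall: a detector that flags nothing has a spotless false-positive record and catches no AI text at all. The toy numbers below are invented for illustration (the 16%-caught scenario simply mirrors the '84% undetected' figure quoted above); only the arithmetic is the point.

```python
# Toy illustration of why a low false-positive rate alone says nothing about
# how useful an AI detector is. All numbers are invented for the example.

def summarise(name: str, flagged_ai: int, total_ai: int,
              flagged_human: int, total_human: int) -> None:
    recall = flagged_ai / total_ai                      # share of AI texts caught
    false_positive_rate = flagged_human / total_human   # human texts wrongly flagged
    print(f"{name}: catches {recall:.0%} of AI texts, "
          f"wrongly flags {false_positive_rate:.0%} of human texts")

# 100 AI-written and 100 human-written submissions in each hypothetical scenario.
summarise("detector that never flags anything", 0, 100, 0, 100)
summarise("cautious detector (~84% of AI text undetected)", 16, 100, 1, 100)
```

The first scenario produces zero false positives and zero detections, which is exactly the point of the letter: judging a detector only by how rarely it wrongly accuses students ignores how much AI-written work it lets through.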


The Guardian
17-06-2025
- Science
- The Guardian
Universities face a reckoning on ChatGPT cheats
I commend your reporting of the AI scandal in UK universities (Revealed: Thousands of UK university students caught cheating using AI, 15 June), but 'tip of the iceberg' is an understatement. While freedom of information requests tell us about the universities that are catching AI cheating, the universities that are not doing so are the real problem. In 2023, a widely used assessment platform, Turnitin, released an AI indicator, reporting high reliability from large-sample tests. However, many universities opted out of this indicator without testing it. Noise about high 'false positives' circulated, but independent research has debunked these concerns (Weber-Wulff et al, 2023; Walters, 2023; Perkins et al, 2024). The real motivation may be that institutions relying on high-fee-paying international cohorts would rather not know; the motto is 'see no cheating, hear no cheating, lose no revenue'. The political economy of higher education is driving a scandal of unreliable degree-awarding and the deskilling of graduates on a mass scale. Institutions that are biting the bullet, like mine, will struggle with the costs of running rigorous assessments, but know the costs of not doing so will be far greater. If our pilots couldn't fly planes themselves or our surgeons didn't know our arses from our elbows, we'd be worried – but we surely want our lawyers, teachers, engineers, nurses, accountants, social workers etc to have real knowledge and skills too. A sector sea change is under way, with some institutions publicly adopting proper exams (maligned as old-fashioned, rote-learning, unrealistic etc) that test what students can actually do themselves. Institutions that resist ripping off the plaster of convenient yet compromised assessments will, I'll wager, have to explain themselves to the public some day.
Craig Reeves
Birkbeck, University of London


The Guardian
15-06-2025
- Science
- The Guardian
Revealed: Thousands of UK university students caught cheating using AI
Thousands of university students in the UK have been caught misusing ChatGPT and other artificial intelligence tools in recent years, while traditional forms of plagiarism show a marked decline, a Guardian investigation can reveal.

A survey of academic integrity violations found almost 7,000 proven cases of cheating using AI tools in 2023-24, equivalent to 5.1 for every 1,000 students. That was up from 1.6 cases per 1,000 in 2022-23. Figures up to May suggest that number will increase again this year to about 7.5 proven cases per 1,000 students – but recorded cases represent only the tip of the iceberg, according to experts.

The data highlights a rapidly evolving challenge for universities: trying to adapt assessment methods to the advent of technologies such as ChatGPT and other AI-powered writing tools. In 2019-20, before the widespread availability of generative AI, plagiarism accounted for nearly two-thirds of all academic misconduct. During the pandemic, plagiarism intensified as many assessments moved online. But as AI tools have become more sophisticated and accessible, the nature of cheating has changed. The survey found that confirmed cases of traditional plagiarism fell from 19 per 1,000 students to 15.2 in 2023-24 and are expected to fall again to about 8.5 per 1,000, according to early figures from this academic year.

The Guardian contacted 155 universities under the Freedom of Information Act requesting figures for proven cases of academic misconduct, plagiarism and AI misconduct in the last five years. Of these, 131 provided some data – though not every university had records for each year or category of misconduct. More than 27% of responding universities did not yet record AI misuse as a separate category of misconduct in 2023-24, suggesting the sector is still getting to grips with the issue.

Many more cases of AI cheating may be going undetected. A survey by the Higher Education Policy Institute in February found 88% of students used AI for assessments. Last year, researchers at the University of Reading tested their own assessment systems and were able to submit AI-generated work without being detected 94% of the time.

Dr Peter Scarfe, an associate professor of psychology at the University of Reading and co-author of that study, said there had always been ways to cheat but that the education sector would have to adapt to AI, which posed a fundamentally different problem. He said: 'I would imagine those caught represent the tip of the iceberg. AI detection is very unlike plagiarism, where you can confirm the copied text. As a result, in a situation where you suspect the use of AI, it is near impossible to prove, regardless of the percentage AI that your AI detector says (if you use one). This is coupled with not wanting to falsely accuse students.

'It is unfeasible to simply move every single assessment a student takes to in-person. Yet at the same time the sector has to acknowledge that students will be using AI even if asked not to and go undetected.'

Students who wish to cheat undetected using generative AI have plenty of online material to draw from: the Guardian found dozens of videos on TikTok advertising AI paraphrasing and essay-writing tools to students. These tools help students bypass common university AI detectors by 'humanising' text generated by ChatGPT. Dr Thomas Lancaster, an academic integrity researcher at Imperial College London, said: 'When used well and by a student who knows how to edit the output, AI misuse is very hard to prove. My hope is that students are still learning through this process.'

Harvey* has just finished his final year of a business management degree at a northern English university. He told the Guardian he had used AI to generate ideas and structure for assignments and to suggest references, and that most people he knows used the tool to some extent. 'ChatGPT kind of came along when I first joined uni, and so it's always been present for me,' he said. 'I don't think many people use AI and would then copy it word for word, I think it's more just generally to help brainstorm and create ideas. Anything that I would take from it, I would then rework completely in my own ways.

'I do know one person that has used it and then used other methods of AI where you can change it and humanise it so that it writes AI content in a way that sounds like it's come from a human.'

Amelia* has just finished her first year of a music business degree at a university in the south-west. She said she had also used AI for summarising and brainstorming, but that the tools had been most useful for people with learning difficulties. 'One of my friends uses it, not to write any of her essays for her or research anything, but to put in her own points and structure them. She has dyslexia – she said she really benefits from it.'

The science and technology secretary, Peter Kyle, told the Guardian recently that AI should be deployed to 'level up' opportunities for dyslexic children. Technology companies appear to be targeting students as a key demographic for AI tools. Google offers university students a free upgrade of its Gemini tool for 15 months, and OpenAI offers discounts to college students in the US and Canada.

Lancaster said: 'University-level assessment can sometimes seem pointless to students, even if we as educators have good reason for setting this. This all comes down to helping students to understand why they are required to complete certain tasks and engaging them more actively in the assessment design process.

'There's often a suggestion that we should use more exams in place of written assessments, but the value of rote learning and retained knowledge continues to decrease every year. I think it's important that we focus on skills that can't easily be replaced by AI, such as communication skills, people skills, and giving students the confidence to engage with emerging technology and to succeed in the workplace.'

A government spokesperson said the government was investing more than £187m in national skills programmes and had published guidance on the use of AI in schools. They said: 'Generative AI has great potential to transform education and provides exciting opportunities for growth through our plan for change. However, integrating AI into teaching, learning and assessment will require careful consideration and universities must determine how to harness the benefits and mitigate the risks to prepare students for the jobs of the future.'

*Names have been changed.