
Only one in six back reducing or removing coursework to avoid AI misuse
A YouGov survey, commissioned by Cambridge University Press & Assessment, found that 89% believed it was 'unacceptable' for pupils to use AI to complete their school coursework in its entirety.
But the poll of 2,221 UK adults found that nearly half (46%) believe it is acceptable for school pupils to use AI to improve the punctuation and grammar in their coursework, while 44% do not.
Only 16% of UK adults believe reducing or removing coursework completed at home is the best way for schools to avoid student AI misuse.
The chief executive of exam board OCR is calling for a co-ordinated national strategy on AI.
It comes after the independent curriculum and assessment review said it would consider reducing the 'overall volume of assessment' at GCSE.
But the interim report, published in March, said the review had heard about the 'risks' to standards and fairness concerning AI in relation to coursework.
The review – chaired by education expert Professor Becky Francis – will publish its final recommendations in the autumn.
The YouGov survey, which was carried out in June, suggests more than three in five UK adults (62%) oppose teachers using AI to mark coursework, while 27% support it.
But a majority (59%) support teachers using AI for administrative tasks, such as lesson planning.
Jill Duffy, chief executive of OCR, said: 'AI is already in our schools and is not going away.
'A co-ordinated national strategy, with funding to ensure no schools are left behind, will build public confidence in its transformational potential.
'The public is clear that coursework is too important to lose, even in the age of AI.
'It enables us to test different skills, and to reduce the intense volume of exams taken at 16.
'These findings should be seen as a challenge to all of us in education: find a way to adapt coursework so it is fit for the AI century.'

Related Articles


Reuters
Microsoft launches AI-based Copilot Mode in Edge browser
July 28 (Reuters) - Microsoft (MSFT.O) on Monday launched a new "Copilot Mode" on its Edge browser that uses artificial intelligence to improve the browsing experience, as the tech giant rushes to compete with rivals launching AI-based browsers and web search features.

Copilot Mode can help carry out tasks, organize browsing into topic-based queries and compare results across all open tabs without requiring users to switch between them, Microsoft said.

The update comes at a time when tech companies have rolled out several AI-powered online search tools. Nvidia-backed startup Perplexity AI launched its Comet browser earlier this month, while Reuters reported on OpenAI's upcoming AI browser. Rival Alphabet (GOOGL.O) earlier this year launched "AI Mode", an AI-only search feature on Google, and last week reported improvements in user engagement on such features.

Microsoft said that, under the new feature, users will see a page with a single input box combining chat, search and web navigation. Copilot will also support voice navigation for browsing in Edge, the company added.

Users will also soon be able to give Copilot permission to access additional browser context, such as history and credentials, so it can take more concrete actions like making bookings or managing errands. Copilot will only be able to access browsing content when users enable it, and will provide visual cues to let users know when it is active in the background, Microsoft said.

The feature will be available for free across Copilot markets on Windows and Mac PCs for a limited time, and users can opt out of it through Edge settings.


The Herald Scotland
Why we must keep humans at the heart of AI in warfare
Since 2016, discussions of the Convention on Certain Conventional Weapons Group of Governmental Experts on LAWS have been ongoing, but International Humanitarian Law (IHL) still lacks any specific, binding regulations relating to AI. As noted by International Committee of the Red Cross (ICRC) President Mirjana Spoljaric, AI in war is 'no longer an issue for tomorrow', but rather 'an urgent humanitarian priority today', requiring the immediate 'negotiation of new legally binding international rules'. Accordingly, United Nations Secretary General António Guterres recommended, in his 2023 New Agenda for Peace, that 'a legally binding instrument' to prohibit and/or regulate AI weapons be concluded by 2026.

The ICRC has stressed that responsibility in warfare must remain with humans. 'Human control must be maintained,' it argues, and limits on autonomy urgently established 'to ensure compliance with international law and to satisfy ethical concerns'.

In 2022, the MoD itself echoed this sentiment. It stated that only human soldiers 'can make instinctive decisions on the ground in a conflict zone; improvise on rescue missions during natural disasters; or offer empathy and sympathy.' The then Defence Secretary Ben Wallace added that 'at its heart, our Army relies on the judgment of its own individuals.' A recruitment campaign at the time carried the tagline: 'Technology will help us do incredible things. But nothing can do what a soldier can do.' Colonel Nick Mackenzie, then Assistant Director for Recruitment, highlighted that, while 'technology is really, really important… there is always somebody, a person, behind that technology,' who is ultimately responsible for its use and the decisions it enables.

Since then, however, the use of AI-enabled rapid target identification systems in contemporary conflicts has grown rapidly, with notable examples being Lavender and Where's Daddy (Israel/Palestine), and Saker and Wolly (Russia/Ukraine). A human being is generally still required in order to engage any lethal effects, but technological capabilities are already being developed to remove human input from the targeting process altogether.

Against this backdrop, the MoD's Strategic Defence Review 2025, released last month, calls for 'greater use of autonomy and Artificial Intelligence within the UK's conventional forces'. 'As in Ukraine,' the Review continues, 'this would provide greater accuracy, lethality, and cheaper capabilities – changing the economics of defence.' One example is Project ASGARD, which will help the Army locate and strike enemy targets at greater distances using AI as a 'force multiplier'. This is just one of over 400 AI-related projects being run by the MoD. What remains unclear, but is critical from a legal and moral perspective, is what role human judgment will play in these projects and the military operations they support.

Computer scientist Pei Wang has said that while AI can behave like human intelligence in some ways, it is fundamentally different. AI shouldn't replace human intelligence, but rather support and enhance it, helping people make better-informed decisions. Human-robot interaction specialist Karolina Zawieska warns of the need to distinguish between what is human and what is only human-like. AI systems often function as a 'black box', meaning it is not always clear how or why they produce certain outcomes. This creates serious problems for human understanding, control, and accountability.

When properly used, AI can support situational awareness and help human operators make better decisions. In this sense, it is a tool, not a decision-maker. But if too much control is handed over to AI, we risk removing human judgment and, with it, moral responsibility. Professor Jeff McMahan, moral philosopher at the Oxford Institute for Ethics, Law and Armed Conflict, has argued that it is essential for combatants to feel 'deep inhibitions about tackling non-combatants'. However accurate or efficient AI may be, these inhibitions cannot be replicated by algorithms. As political scientist Valerie Morkevičius has pointed out, the emotional and moral 'messiness' of war is a feature, not a flaw, because it slows down violence and prompts ethical reflection. Military decisions should be difficult. This is why human judgment must remain at the centre.

While defence and national security are reserved for Westminster, Scotland plays a key role in UK defence, from the bases at Faslane and Lossiemouth to the defence research carried out at Scottish universities. The issues raised in the Strategic Defence Review therefore carry particular relevance here.

Scotland's approach to AI, shaped by the AI Strategy (2021) and the Scottish AI Playbook (2024), is notably human-centred. Informed by the Organisation for Economic Co-operation and Development's (OECD) principles, both documents stress the importance of trustworthy, ethical, and inclusive AI that improves people's lives. They highlight the need for transparency, human control, and robust accountability. Though not military in scope, these principles nevertheless offer a useful framework for a Scottish perspective on the development and use of AI for military purposes: keeping people at the centre, and ensuring that technology supports rather than replaces human agency.

The goal should not be the delegation of human decisions to machines, or the replacement of human beings with technology. Rather, AI should support and strengthen human decision-making: a tool for the enactment of human agency, a technological means for strictly human ends.

Dr Joanna LD Wilson is a Lecturer in Law at the University of the West of Scotland


Times
Boardrooms are grappling with AI but we must embrace it
In a boardroom filled with some of our country's business leaders last week, the conversation turned to artificial intelligence. The room split into two camps: half saw AI as an existential threat, the other as a panacea. Both sides are going to be wrong.

The uncomfortable reality is that every white-collar job in Britain will be transformed by AI. Some roles will vanish, others will emerge, but no position will remain untouched. The question isn't whether this transformation will occur, but whether your company or employer will embrace the change or be swept aside by it. The responsibility doesn't rest only with the leaders of UK companies but with all of us.

Entrepreneurs pride themselves on their ability to adapt and deal with the unexpected. Yet I see evidence every day that business leaders at all levels, from founders to established CEOs, have their heads in the sand when it comes to AI.

The government announced a 'strategic partnership' last week with OpenAI (where my family's company, Rigby Group, owns a small stake) in a positive move aimed at streamlining public services. The business community must take note, and act.