UCT ends use of AI detection tools for student assessments


IOL News | 2 days ago
Students at UCT welcome the university's move to discontinue unreliable AI detection tools.
Image: AP Photo/Michael Dwyer, File
In a move likely to be welcomed by students, the University of Cape Town (UCT) has announced that it will discontinue the use of AI detection tools, such as Turnitin's AI Score, effective October 1.
The decision follows the endorsement of UCT's AI in Education Framework by the Senate Teaching and Learning Committee in June.
According to the university, the framework prioritises ethical AI literacy, assessment integrity, and innovative curriculum design over reliance on automated surveillance tools.
The university's Deputy Vice-Chancellor for Teaching and Learning, Professor Brandon Collier-Reed, raised concerns about the reliability of AI detection tools.
"An important related issue is the use of the Turnitin AI Score, which flags passages of writing in student work considered to be AI-generated," Reed said in a communique directed to students and staff.
"AI detection tools are widely considered to be unreliable, and can produce both false positives and false negatives.
"The continued use of such scores risks compromising student trust and academic fairness. The Senate Teaching and Learning Committee has now agreed that use of the AI Score should be discontinued, and this feature in Turnitin will no longer be available at UCT from October 1, 2025".
The university's decision comes amid global challenges with AI detection tools, which have frequently resulted in students being mistakenly accused of using AI-generated content.
Universities worldwide are being forced to reconsider how they monitor and assess the use of AI in student work.
Collier-Reed added that the endorsement of the framework was a result of UCT moving with the times, as 'artificial intelligence technologies are becoming part of our daily lives'.
'Staff and students are using tools such as ChatGPT, Claude and Gemini in innovative and productive ways; while at other times these technologies present significant challenges, requiring us to rethink teaching pedagogies, assessment practices and the fundamentals of what a university experience should be, and what our qualifications should signify in a changing world,' he said.
Cape Argus

Related Articles

China summons chip giant Nvidia over alleged security risks

eNCA | 15 hours ago

Chinese authorities summoned Nvidia representatives on Thursday to discuss "serious security issues" over some of its artificial intelligence chips, as the US tech giant finds itself entangled in trade tensions between Beijing and Washington.

Nvidia is a world-leading producer of AI semiconductors, but the United States effectively restricts which chips it can export to China on national security grounds. A key issue has been Chinese access to the "H20", a less powerful version of Nvidia's AI processing units that the company developed specifically for export to China.

The California-based firm said this month it would resume H20 sales to China after Washington pledged to remove licensing curbs that had halted exports. But the firm still faces obstacles: US lawmakers have proposed plans to require Nvidia and other manufacturers of advanced AI chips to include built-in location tracking capabilities.

And Beijing's top internet regulator said Thursday it had summoned Nvidia representatives to discuss recently discovered "serious security issues" involving the H20. The Cyberspace Administration of China said it had asked Nvidia to "explain the security risks of vulnerabilities and backdoors in its H20 chips sold to China and submit relevant supporting materials". The statement posted on social media noted that, according to US experts, location tracking and remote shutdown technologies for Nvidia chips "are already matured".

The announcement marked the latest complication for Nvidia in selling its advanced products in the key Chinese market, where it is in increasingly fierce competition with homegrown technology firms.

- Nvidia committed -

CEO Jensen Huang said during a closely watched visit to Beijing this month that his firm remained committed to serving local customers. Huang said he had been assured during talks with top Chinese officials during the trip that the country was "open and stable". "They want to know that Nvidia continues to invest here, that we are still doing our best to serve the market here," he said.

Nvidia this month became the first company to hit $4 trillion in market value, a new milestone in Wall Street's bet that AI will transform the global economy.

Jost Wubbeke of the Sinolytics consultancy told AFP the move by China to summon Nvidia was "not surprising in the sense that targeting individual US companies has become a common tool in the context of US-China tensions". "What is surprising, however, is the timing," he noted, after the two countries agreed to further talks to extend their trade truce. "China's action may signal a shift toward a more assertive stance," Wubbeke said.

Beijing is also aiming to reduce reliance on foreign tech by promoting Huawei's domestically developed 910C chip as an alternative to the H20, he added. "From that perspective, the US decision to allow renewed exports of the H20 to China could be seen as counterproductive, as it might tempt Chinese hyperscalers to revert to the H20, potentially undermining momentum behind the 910C and other domestic alternatives."

New hurdles to Nvidia's operations in China come as the country's economy wavers, beset by a years-long property sector crisis and heightened trade headwinds under US President Donald Trump. Chinese President Xi Jinping has called for the country to enhance self-reliance in certain areas deemed vital for national security, including AI and semiconductors, as tensions with Washington mount.

The country's firms have made great strides in recent years, with Huang praising their "super-fast" innovation during his visit to Beijing this month.

By Peter Catterall

The silent thief: AI exploits creators under the guise of innovation

Mail & Guardian | 21 hours ago

As we ride the wave of technological advancement, we must ensure that innovation does not come at the cost of exploitation.

As artificial intelligence (AI) continues to astonish the world with its capabilities, from writing articles and generating images to composing music and producing reports, there is an urgent, overlooked reality that demands our attention: the silent, systematic exploitation of intellectual property by AI systems. While society celebrates innovation, many creators remain muted, their voices drowned out by the roar of technological progress. Their books, music, artwork and more are being used to train machine learning models; the data informs the patterns the algorithms learn, often without the creators' consent, credit or compensation. Behind the promise of technological advancement is a quiet but pervasive form of abuse masquerading as innovation. The legal, ethical and cultural implications of unchecked AI require urgent policy responses.

Generative AI systems, such as large language models (LLMs) and image generators, rely on data, much of which is derived from human-created books, articles and artworks. Most of these systems are trained on large datasets containing copyrighted content scraped from the internet, including subscription-based platforms and pirated sources. Although this is done under the legal doctrine of 'fair use', which is peculiar to the United States, the fairness of that usage is indeed questionable. When a creator's life's work is repurposed to drive a billion-dollar AI enterprise without their awareness or permission, this raises serious concerns of intellectual property (copyright) infringement.

Recent legal battles in the US have brought this issue to the forefront. Authors including David Baldacci and John Grisham have taken legal action against OpenAI for using their books in training datasets. The plaintiffs allege that OpenAI copied their works wholesale, without permission. As of now, the case remains unresolved, but it has already sparked global debate about ownership, consent and compensation in the AI era.

It is commendable that countries in the European Union have adopted an 'opt-in' system. The European Union's General Data Protection Regulation (GDPR) is a prime example of the opt-in consent regime. The GDPR requires a data subject's consent to be freely given, specific, informed and unambiguous. It is a framework that contrasts sharply with the 'opt-out' model, which treats silence as consent (European Commission, 2023). The EU's approach affirms the creator's right to decide how their work is used. This model offers a compelling blueprint that African countries should seriously consider adopting.

Africa's creative industries, from our musicians and poets to fashion designers and filmmakers, are unique and increasingly recognised on the global stage. Yet they remain underprotected. We lack comprehensive AI policies, and enforcement of our copyright laws is weak. If we do not act now, our artists' voices may be digitised, globalised and monetised without them ever knowing or benefiting.

We must demand, and help ensure, that AI systems trained on African content provide transparency, compensation and consent. Our lawmakers should champion an 'opt-in' regime that aligns with ethical standards being proposed in other parts of the world. If African creativity is valuable enough to train billion-dollar platforms, then it is valuable enough to protect. This issue is not only legal, it is ethical.

Creativity is not merely data. Every poem, painting or photograph represents hours of human thought, feeling and labour. To treat such expressions as mere raw material for machines, without recognition or reward, is to devalue the soul of human creativity. Africa, often excluded from global intellectual property conversations, must not remain silent. Our policymakers must strengthen copyright laws, create ethical frameworks for AI development and prevent the exploitation of African content by international tech firms.

To strike a balance between AI innovation and intellectual property protection, clear legal frameworks that promote responsible AI development while safeguarding creators' rights must be developed. This includes transparent licensing systems, such as opt-in or opt-out mechanisms, for the use of copyrighted content in training datasets; mandating disclosure of data sources; and creating fair compensation models for creators. Yes, AI can empower us, but only if it respects the very people who make creativity possible. As we ride the wave of technological advancement, we must ensure that innovation does not come at the cost of exploitation.

Rachelle Anesu Chaminuka is a legal professional with expertise in entrepreneurship and intellectual property.
