
Turning off AI detection software is the right call for SA universities
The University of Cape Town's recently announced decision to disable Turnitin's AI detection feature is to be welcomed – and other universities would do well to follow suit. This move signals a growing recognition that AI detection software does more harm than good.
The problems with Turnitin's AI detector extend far beyond technical glitches. The software's notorious tendency towards false positives has created an atmosphere where students live in constant fear of being wrongly accused of academic dishonesty.
Unlike their American counterparts, South African students rarely pursue legal action against universities, but this should not be mistaken for acceptance of unfair treatment.
A system built on flawed logic
As Rebecca Davis has pointed out in Daily Maverick, detection tools fail. The fundamental issue lies in how these detection systems operate. Turnitin's AI detector doesn't identify digital fingerprints that definitively prove AI use. Instead, it searches for stylistic patterns associated with AI-generated text.
The software might flag work as likely to be AI-generated simply because the student used em-dashes or terms such as 'delve into' or 'crucial' – writing preferences that have nothing to do with artificial intelligence.
This approach has led to deeply troubling situations. Students report receiving accusatory emails from professors suggesting significant portions of their original work were AI-generated.
One student described receiving such an email indicating that Turnitin had flagged 30% of her text as likely to be AI-generated, followed by demands for proof of originality: multiple drafts, version history from Google Docs, or reports from other AI detection services like GPTZero.
Other academics have endorsed the use of services like Grammarly Authorship or Turnitin Clarity for students to prove their work is their own.
The burden of proof has been reversed: students are guilty until proven innocent, a principle that would be considered unjust in any legal system and is pedagogically abhorrent in an educational context.
The psychological impact cannot be overstated; students describe feeling anxious about every assignment, second-guessing their natural writing styles, and living under a cloud of suspicion despite having done nothing wrong.
The absurdity exposed
The unreliability of these systems becomes comically apparent when examined closely. The student mentioned above paid $19 to access GPTZero, another AI detection service, hoping to clear her name. The results were revealing: the programs flagged different portions of her original work as AI-generated, with only partial overlap between their accusations.
Even more telling, both systems flagged the professor's own assignment questions as AI-generated, though the Turnitin software flagged Question 2 while GPTZero flagged Question 4. Did the professor use ChatGPT to write one of the questions, both, or neither? The software provides no answers.
This inconsistency exposes the arbitrary nature of AI detection. If two leading systems cannot agree on what constitutes AI-generated text, and both flag the professor's own questions as suspicious, how can any institution justify using these tools to make academic integrity decisions?
A warning from abroad
While South African universities have been fortunate to avoid the litigation that has plagued American institutions, the experiences across the Atlantic serve as a stark warning.
A number of US universities have abandoned Turnitin after facing lawsuits from students falsely accused of using AI. Turnitin's terms and conditions conveniently absolve the company of responsibility for these false accusations, leaving universities to face the legal and reputational consequences alone.
The contrast with Turnitin's similarity detection tool is important. While that feature has its own problems, chiefly academics treating the similarity percentage as a direct measure of plagiarism, at least it provides transparent, visible comparisons that students can review and make sense of.
The AI detection feature operates as a black box, producing reports visible only to faculty members, creating an inherently opaque system.
Undermining educational relationships
Perhaps most damaging is how AI detection transforms the fundamental relationship between educators and students. When academics become primarily focused on catching potential cheaters, the pedagogical mission suffers.
Education is inherently relational, built on trust, guidance and collaborative learning. AI detection software makes this dynamic adversarial, casting educators as judges, AI detection as the evidence and students as potential criminals.
The lack of transparency compounds this problem. Students cannot see the AI detection reports that are being used against them, cannot understand the reasoning behind the accusations and cannot meaningfully defend themselves against algorithmic judgements.
This violates basic principles of fairness and due process that should govern any academic integrity system.
A path forward
UCT's decision to disable Turnitin's AI detector represents more than just abandoning a problematic tool. It signals a commitment to preserving the educational relationship and maintaining trust in our universities. If other institutions follow suit, it will demonstrate that the South African higher education sector is willing to prioritise pedagogical principles over technological convenience.
This doesn't mean ignoring the challenges that AI presents to academic integrity. Rather, it suggests focusing on educational approaches that help students understand appropriate AI use, develop critical thinking skills and cultivate a personal relationship with knowledge.
The goal should be advocacy for deep learning and meaningful engagement with coursework, not policing student behaviour through unreliable technology.
Detection should give way to education, suspicion to support and surveillance to guidance. When we position students as already guilty, we shouldn't be surprised that they respond by trying to outwit our systems rather than engaging with the deeper questions about learning and integrity that AI raises.
The anxiety reported by students who feel constantly watched and judged represents a failure of educational technology to serve educational goals. When tools designed to protect academic integrity instead undermine student wellbeing and the trust essential to learning, they have lost their purpose.
UCT and other South African universities deserve recognition for prioritising student welfare and educational relationships over the false security of flawed detection software. Their decision sends a clear message: technology should serve education, not the other way around.
As more institutions grapple with AI's impact on higher education, South Africa's approach offers a valuable model: one that chooses trust over surveillance, education over detection and relationships over algorithms.
In an era of rapid technological change, this commitment to fundamental educational values provides a steady foundation for navigating uncertainty.
The future of academic integrity lies not in better detection software, but in better education about integrity itself. DM
Sioux McKenna is professor of higher education studies at Rhodes University.
Neil Kramm is an educational technology specialist in the Centre of Higher Education Research, Teaching and Learning (CHERTL) at Rhodes University. He is currently completing his PhD on AI and its influence on assessment in higher education.