
Do universities need a rethink on AI use for students? Singapore's NTU case causes a stir
A case at Singapore's Nanyang Technological University (NTU) involving three students accused of academic misconduct over the use of
generative AI (Gen AI) has prompted observers to question whether tertiary institutions need clearer processes to deal with disputes.
According to pictures of a briefing slide published online, the students were told by their teacher that the use of ChatGPT and AI tools was not allowed in the 'development or generation' of their essay proposal and long essay, for a module on health, disease outbreaks and politics at the School of Social Sciences.
The students had used online tools to organise their citations. One student's appeal was being processed before a review panel that would include AI experts, while the other two scored zero for the assignment, local media reported. The two who were penalised used ChatGPT in their research but said they did not use it to write their essays, according to local media outlet CNA.
Fong Wei Li, a lawyer at Forward Legal who specialises in internet and social media law, told This Week in Asia that the saga exposed gaps in universities' processes for dealing with such disputes between teachers and students.
'Most universities acknowledge that Gen AI is part of our lives, but what universities don't go further in doing is [establishing a] consistent framework [with] processes for grievances if a faculty [member] accuses a student of using Gen AI and the student disputes it,' Fong said.
'There has not been a critical mass for these kinds of disputes, but do we want to wait for something to happen like it did at NTU before there is a process if a student disputes?'
Related Articles


South China Morning Post, 4 hours ago
Singapore to sell Louis Vuitton, Hermès bags seized from money launderers
Singapore is set to sell an eye-watering variety of luxury items confiscated from money launderers who were convicted in the country's biggest laundering case.

The local unit of consulting firm Deloitte has been appointed to manage the process of selling the non-cash assets, according to a statement on Tuesday by the police, who added that they handed over more than 460 pieces of luxury goods and 58 gold bars to the firm this week. The items are among a broader haul of assets, including upscale real estate, cars and cash, seized in connection with the S$3 billion (US$2.3 billion) scandal that broke two years ago.

The gold bars, jade necklaces and more than 10 luxury watches from brands such as Richard Mille and Patek Philippe were displayed in a publicised handover. The event also featured dozens of luxury handbags, including Hermès handbags and a limited-edition yellow pumpkin-shaped Louis Vuitton bag created in collaboration with Japanese artist Yayoi Kusama.

Since the scandal, Singapore has moved to stem the fallout, with authorities imposing financial penalties on several of the world's biggest banks for lapses related to the case. Private bankers have also been charged for their alleged involvement in the scandal. The government has proposed to strip money launderers of directorships and has probed law firms embroiled in the case.

Authorities previously kick-started the process of selling the seized assets, which amounted to about S$2.79 billion at the end of 2024. While the bulk was in the form of cash and financial assets, numerous properties, vehicles and country club memberships have been sold. The proceeds are put into a de facto bank account of the Singapore government.


South China Morning Post, 6 hours ago
China rejects OpenAI's GPT-5 trademark application in blow to US firm's branding efforts
Chinese authorities have rejected OpenAI's attempt to register the name of its new flagship artificial intelligence model, GPT-5, as a trademark on the mainland, where the ChatGPT creator's products and services are not officially available.

According to records on the website of the Trademark Office, under the China National Intellectual Property Administration, the US firm's application through subsidiary OpenAI OpCo was denied and is pending appeal.

That was the latest rejection handed to OpenAI by the regulator. Last year, it denied a series of applications filed by the US start-up between March and November 2023 to register ChatGPT and GPT – covering AI models GPT-4, GPT-5, GPT-6 and GPT-7 – as trademarks on the mainland. These are still pending appeal.

The Trademark Office's recent refusal dealt another blow to San Francisco-based OpenAI's efforts to protect its brand in the fast-developing and highly competitive AI industry. In February 2024, the United States Patent and Trademark Office denied OpenAI's applications to trademark ChatGPT and GPT. 'Registration is refused because the applied-for mark merely describes a feature, function, or characteristic of applicant's goods and services,' the regulator's ruling said.

OpenAI did not immediately respond to a request for comment on Tuesday.


AllAfrica, 13 hours ago
GPT-5: Has AI just plateaued?
OpenAI CEO Sam Altman says GPT-5 is PhD-level general intelligence, but that's not clearly the case. Photo: Aflo Co Ltd / Alamy

OpenAI claims that its new flagship model, GPT-5, marks 'a significant step along the path to AGI' – that is, the artificial general intelligence that AI bosses and self-proclaimed experts often claim is around the corner. According to OpenAI's own definition, AGI would be 'a highly autonomous system that outperforms humans at most economically valuable work.'

Setting aside whether this is something humanity should be striving for, OpenAI CEO Sam Altman's arguments for GPT-5 being a 'significant step' in this direction sound remarkably unspectacular. He claims GPT-5 is better at writing computer code than its predecessors. It is said to 'hallucinate' a bit less, and is a bit better at following instructions – especially when they require following multiple steps and using other software. The model is also apparently safer and less 'sycophantic', because it will not deceive the user or provide potentially harmful information just to please them.

Altman does say that 'GPT-5 is the first time that it really feels like talking to an expert in any topic, like a PhD-level expert.' Yet it still doesn't have a clue about whether anything it says is accurate, as you can see from its attempt below to draw a map of North America.

'Sam Altman: With GPT-5, you'll have a PhD-level expert in any area you need. Me: Draw a map of North America, highlighting countries, states, and capitals. GPT-5: [image of an inaccurate map]. Sam Altman forgot to mention that the PhD-level expert used ChatGPT to cheat on all their geography classes…' – Luiza Jarovsky, PhD (@LuizaJarovsky), August 10, 2025

It also cannot learn from its own experience, or achieve more than 42% accuracy on a challenging benchmark like 'Humanity's Last Exam', which contains hard questions on all kinds of scientific (and other) subject matter. This is slightly below the 44% that Grok 4, the model recently released by Elon Musk's xAI, is said to have achieved.

The main technical innovation behind GPT-5 seems to be the introduction of a 'router'. This decides which model of GPT to delegate to when asked a question, essentially asking itself how much effort to invest in computing its answers, then improving over time by learning from feedback about its previous choices (a rough sketch of this idea appears below). The options for delegation include the previous leading models of GPT and also a new 'deeper reasoning' model called GPT-5 Thinking.

It's not clear what this new model actually is. OpenAI isn't saying it is underpinned by any new algorithms or trained on any new data (since all available data was pretty much being used already). One might therefore speculate that this model is really just another way of controlling existing models with repeated queries, pushing them to work harder until they produce better results.

It was back in 2017 that researchers at Google found that a new type of AI architecture, the transformer, was capable of capturing tremendously complex patterns within long sequences of words that underpin the structure of human language. By training these so-called large language models (LLMs) on large amounts of text, they could respond to prompts from a user by mapping a sequence of words to its most likely continuation in accordance with the patterns present in the dataset. This approach to mimicking human intelligence became better and better as LLMs were trained on larger and larger amounts of data – leading to systems like ChatGPT.
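To make that continuation-prediction idea concrete, here is a minimal toy sketch in Python. The hand-made bigram table and the greedy decoding loop are illustrative assumptions only; a real LLM learns continuation statistics over tokens from vast corpora with a neural network rather than storing an explicit table.

# Toy illustration of next-word prediction, assuming an invented
# bigram table in place of a trained network.
# P(next word | current word), made up purely for illustration.
BIGRAMS = {
    "the": {"cat": 0.5, "dog": 0.5},
    "cat": {"sat": 0.7, "ran": 0.3},
    "dog": {"ran": 0.6, "sat": 0.4},
    "sat": {"down": 1.0},
    "ran": {"away": 1.0},
}

def continue_text(prompt: str, max_words: int = 4) -> str:
    words = prompt.lower().split()
    for _ in range(max_words):
        options = BIGRAMS.get(words[-1])
        if not options:
            break  # no learned continuation for this word
        # Greedy decoding: always pick the most likely next word.
        words.append(max(options, key=options.get))
    return " ".join(words)

print(continue_text("the cat"))  # -> "the cat sat down"

Scaled up by many orders of magnitude, and with probabilities learned rather than hand-written, this is the basic prompt-to-continuation mapping the paragraph above describes.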
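Returning to GPT-5's 'router': the following hypothetical Python sketch illustrates the delegation idea described earlier. The 'fast' and 'thinking' model names, the difficulty heuristic and the feedback rule are all invented for illustration; nothing here reflects OpenAI's actual implementation.

def estimate_difficulty(prompt: str) -> float:
    # Crude stand-in for a learned difficulty estimator:
    # longer, multi-step prompts are treated as harder.
    steps = prompt.count("then") + prompt.count("?")
    return min(1.0, len(prompt.split()) / 200 + 0.2 * steps)

class Router:
    def __init__(self):
        # Per-model running estimate of answer quality, updated from feedback.
        self.quality = {"fast": 0.5, "thinking": 0.5}

    def route(self, prompt: str) -> str:
        # Spend the expensive 'thinking' model only when the prompt looks
        # hard enough and that model's track record justifies the cost.
        gain = self.quality["thinking"] - self.quality["fast"]
        return "thinking" if estimate_difficulty(prompt) * (0.5 + gain) > 0.3 else "fast"

    def feedback(self, model: str, good: bool):
        # Exponential moving average over thumbs-up/down signals, so the
        # router improves its choices over time.
        self.quality[model] = 0.9 * self.quality[model] + 0.1 * (1.0 if good else 0.0)

router = Router()
print(router.route("What is the capital of France?"))  # likely "fast"
print(router.route("Plan an experiment, then analyse the data, "
                   "then write up the results. How should I start?"))  # likely "thinking"

The design choice worth noticing is that the router itself is cheap: it does a quick cost-benefit judgment before any expensive model runs, which is exactly the 'how much effort to invest' question the article describes.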
Ultimately, these models just encode a humongous table of stimuli and responses. A user prompt is the stimulus, and the model might just as well look it up in a table to determine the best response. Considering how simple this idea seems, it's astounding that LLMs have eclipsed the capabilities of many other AI systems – if not in terms of accuracy and reliability, certainly in terms of flexibility and usability.

The jury may still be out on whether these systems could ever be capable of true reasoning, or understanding the world in ways similar to ours, or keeping track of their experiences to refine their behaviour correctly – all arguably necessary ingredients of AGI.

In the meantime, an industry of AI software companies has sprung up that focuses on 'taming' general-purpose LLMs to be more reliable and predictable for specific use cases. Having studied how to write the most effective prompts, their software might prompt a model multiple times, or use numerous LLMs, adjusting the instructions until it gets the desired result. In some cases, they might 'fine-tune' an LLM with small-scale add-ons to make it more effective.

OpenAI's new router is in the same vein, except it's built into GPT-5. If this move succeeds, the engineers of companies further down the AI supply chain will be needed less and less. GPT-5 would also be cheaper for users than its LLM competitors because it would be more useful without these embellishments.

At the same time, this may well be an admission that we have reached a point where LLMs cannot be improved much further to deliver on the promise of AGI. If so, it will vindicate those scientists and industry experts who have been arguing for a while that it won't be possible to overcome the current limitations in AI without moving beyond LLM architectures.

OpenAI's new emphasis on routing also harks back to the 'meta-reasoning' that gained prominence in AI in the 1990s, based on the idea of 'reasoning about reasoning'. Imagine, for example, you were trying to calculate an optimal travel route on a complex map. Heading off in the right direction is easy, but every time you consider another 100 alternatives for the remainder of the route, you will likely only get an improvement of 5% on your previous best option. At every point of the journey, the question is how much more thinking it's worth doing.

This kind of reasoning is important for dealing with complex tasks by breaking them down into smaller problems that can be solved with more specialised components. This was the predominant paradigm in AI until the focus shifted to general-purpose LLMs.

It is possible that the release of GPT-5 marks a shift in the evolution of AI which, even if it is not a return to this approach, might usher in the end of creating ever more complicated models whose thought processes are impossible for anyone to understand. Whether that could put us on a path towards AGI is hard to say. But it might create an opportunity to move towards creating AIs we can control using rigorous engineering methods. And it might help us remember that the original vision of AI was not only to replicate human intelligence, but also to better understand it.

Michael Rovatsos is professor of artificial intelligence, University of Edinburgh

This article is republished from The Conversation under a Creative Commons license. Read the original article.