
Student boasts of using AI for assignments, sparks backlash
WHILE artificial intelligence (AI) is indeed useful in our daily lives for a multitude of reasons, particularly in our studies, over-reliance on it can affect students' essential human skills and future capabilities in the workforce.
A student at a university in Sarawak recently bragged about using AI to complete assignments, even claiming to have achieved high scores.
'I pity the lecturers, especially the boomers. They try so hard to give us numerous assignments, even last-minute ones, but it's no trouble for us at all,' the student said in an anonymous confession on Facebook.
Furthermore, the student alleged that their lecturers were unaware of students using AI tools such as ChatGPT, DeepSeek, and Google Gemini.
'We can just close our eyes and complete these assignments in a day with the help of ChatGPT Premium, DeepSeek, and Google Gemini.
'The assignments turn out so well — even complete with citations and references.
'It's kind of sad that they can't outplay us,' the student added.
The post did not sit well with netizens, who criticised the student for their lack of effort and raised concerns about their readiness for the workforce after graduation.
Meanwhile, others agreed that while AI can be used as a guide for completing assignments, students should still develop essential skills during their university years — such as critical thinking, problem-solving, and research skills — which are highly valued by employers.
One user also pointed out that the well-known plagiarism-detection tool Turnitin is now capable of identifying AI-generated content in students' assignments, noting that lecturers are already 'a step ahead'.
'We lecturers know that students are using these AI tools. We're aware of the tricks you try to pull, and sometimes, we just want to see how well you use them,' a netizen commented.
'It's good that you manage to submit assignments on time and get superb marks, but in the real world, AI won't help you handle workplace stress, understand employers' instructions, or deliver results based on the few real skills you have,' another user wrote.
'All of a sudden, I feel proud of myself for completing my assignments without using AI,' another chimed in.

Related Articles


The Star
Lawyers face sanctions for citing fake cases with AI, warns UK judge
LONDON (Reuters) - Lawyers who use artificial intelligence to cite non-existent cases can be held in contempt of court or even face criminal charges, London's High Court warned on Friday, in the latest example of generative AI leading lawyers astray.

A senior judge lambasted lawyers in two cases who apparently used AI tools when preparing written arguments, which referred to fake case law, and called on regulators and industry leaders to ensure lawyers know their ethical obligations.

"There are serious implications for the administration of justice and public confidence in the justice system if artificial intelligence is misused," Judge Victoria Sharp said in a written ruling. "In those circumstances, practical and effective measures must now be taken by those within the legal profession with individual leadership responsibilities ... and by those with the responsibility for regulating the provision of legal services."

The ruling comes after lawyers around the world have been forced to explain themselves for relying on false authorities, since ChatGPT and other generative AI tools became widely available more than two years ago.

Sharp warned in her ruling that lawyers who refer to non-existent cases will be in breach of their duty to not mislead the court, which could also amount to contempt of court. She added that "in the most egregious cases, deliberately placing false material before the court with the intention of interfering with the administration of justice amounts to the common law criminal offence of perverting the course of justice".

Sharp noted that legal regulators and the judiciary had issued guidance about the use of AI by lawyers, but said that "guidance on its own is insufficient to address the misuse of artificial intelligence".
(Reporting by Sam Tobin; Editing by Sachin Ravikumar)

Malay Mail
OpenAI fights back in NYT copyright lawsuit over ChatGPT data freeze
NEW YORK, June 6 — OpenAI is appealing an order in a copyright case brought by the New York Times that requires it to preserve ChatGPT output data indefinitely, arguing that the order conflicts with privacy commitments it has made with users.

Last month, a court said OpenAI had to preserve and segregate all output log data after the Times asked for the data to be preserved. 'We will fight any demand that compromises our users' privacy; this is a core principle,' OpenAI CEO Sam Altman said in a post on X yesterday. 'We think this (the Times demand) was an inappropriate request that sets a bad precedent.'

US District Judge Sidney Stein was asked to vacate the May data preservation order on June 3, a court filing showed. The New York Times did not immediately respond to a request for comment outside regular business hours.

The newspaper sued OpenAI and Microsoft in 2023, accusing them of using millions of its articles without permission to train the large language model behind its popular chatbot. Stein said in an April court opinion that the Times had made a case that OpenAI and Microsoft were responsible for inducing users to infringe its copyrights. The opinion explained an earlier order that rejected parts of an OpenAI and Microsoft motion to dismiss, saying that the Times' 'numerous' and 'widely publicised' examples of ChatGPT producing material from its articles justified allowing the claims to continue. — Reuters


Free Malaysia Today
OpenAI finds more Chinese groups using ChatGPT for malicious purposes
SAN FRANCISCO: OpenAI is seeing an increasing number of Chinese groups using its artificial intelligence (AI) technology for covert operations, which the ChatGPT maker described in a report released today. While the scope and tactics employed by these groups have expanded, the operations detected were generally small in scale and targeted limited audiences, the San Francisco-based startup said.

Since ChatGPT burst onto the scene in late 2022, there have been concerns about the potential consequences of generative AI technology, which can quickly and easily produce human-like text, imagery and audio. OpenAI regularly releases reports on malicious activity it detects on its platform, such as creating and debugging malware, or generating fake content for websites and social media platforms.

In one example, OpenAI banned ChatGPT accounts that generated social media posts on political and geopolitical topics relevant to China, including criticism of a Taiwan-centric video game, false accusations against a Pakistani activist, and content related to the closure of USAID. Some content also criticised US President Donald Trump's sweeping tariffs, generating X posts such as 'Tariffs make imported goods outrageously expensive, yet the government splurges on overseas aid. Who's supposed to keep eating?'.

In another example, China-linked threat actors used AI to support various phases of their cyber operations, including open-source research, script modification, troubleshooting system configurations, and development of tools for password brute forcing and social media automation. A third example OpenAI found was a China-origin influence operation that generated polarised social media content supporting both sides of divisive topics within US political discourse, including text and AI-generated profile images.

China's foreign ministry did not immediately respond to a Reuters request for comment on OpenAI's findings. OpenAI has cemented its position as one of the world's most valuable private companies after announcing a US$40 billion funding round valuing the company at US$300 billion.