
Varsities boost efforts to safeguard academic integrity
PETALING JAYA: With artificial intelligence tools increasingly used by students to complete assignments, academic institutions are stepping up efforts to detect AI-generated content, viewing it as a growing threat to academic integrity.
Assoc Prof Dr Nor Shahniza Kamal Bashah, head of computer science at Universiti Teknologi Mara's College of Computing, Informatics and Mathematics, said that while AI has the potential to enhance learning, its use raises important questions about integrity, policy and fairness.
She noted that academics recognise AI's benefits, such as helping generate engaging content, guiding students to accurate answers and correcting coding errors.
'Currently, there's no specific policy or guideline governing responsible or acceptable AI use in academic work.
'Institutions typically monitor the similarity index to ensure it remains below 30%, but now many also use AI detectors to assess how much of a student's work may have been generated by tools such as ChatGPT.'
Nor Shahniza said students at the university are required to use Turnitin, which now detects both similarity and the percentage of AI-generated content.
'In most cases, if a student's work shows a high percentage of AI-generated text, they will be asked to revise and resubmit their assignment until the AI score is brought down.'
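The revise-and-resubmit loop she describes boils down to a two-threshold check on a similarity report. Below is a minimal sketch of that decision rule in Python; the 30% similarity ceiling comes from the article, while the AI-score ceiling and all class and function names are hypothetical illustrations, not Turnitin's actual API or any institution's real policy.

```python
from dataclasses import dataclass

SIMILARITY_CEILING = 30.0  # percent; the limit cited in the article
AI_SCORE_CEILING = 20.0    # percent; assumed value, varies by institution


@dataclass
class Report:
    """Hypothetical stand-in for a similarity/AI report on one submission."""
    similarity: float  # similarity index, 0-100
    ai_score: float    # share of text flagged as AI-generated, 0-100


def review(report: Report) -> str:
    """Return a disposition: accept, or revise and resubmit."""
    if report.similarity >= SIMILARITY_CEILING:
        return "revise: similarity index too high"
    if report.ai_score >= AI_SCORE_CEILING:
        return "revise: AI-generated share too high"
    return "accept"


# Example: low similarity, but nearly half the text flagged as AI-generated
print(review(Report(similarity=12.0, ai_score=45.0)))
# -> revise: AI-generated share too high
```

In practice the AI score is recomputed on each resubmission, so the loop repeats until both numbers fall below their ceilings.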
She warned that misusing AI, such as submitting AI-generated work without disclosure, relying entirely on it to complete assignments or using it to bypass learning objectives, undermines academic integrity.
'Such actions can violate academic policies and may result in disciplinary consequences, similar to plagiarism.
'AI is here to stay, but so is the importance of academic honesty. As students navigate this new landscape, learning to use AI wisely is essential to remain innovative and ethical.'
Theatre student Alini Anak Dolly, 22, said she occasionally uses tools such as ChatGPT or Google Gemini, mainly to generate ideas and improve her writing.
'I think AI helps me understand certain topics better because it explains things in ways that suit my learning style. Not everyone processes information the same way.'
She is aware of her university's policy on AI use and believes it is fair, valuing the balance between using technology and building her own skills.
Directing in Film student Muhammad Azim Irfan Bahtiar, 22, shared that he often turns to AI tools when struggling to begin assignments.
'Sometimes I also ask it to explain theories or terms I don't understand. It's like a study buddy that guides me or explains things better than some textbooks.'
Commenting on the use of AI detectors by lecturers, he said the aim is to ensure students do not simply copy AI-generated answers.
'But not all AI use is cheating. Detectors aren't always accurate and can flag original work unfairly. Instead of relying solely on these tools, it's better to teach students responsible AI use and foster mutual trust.'
Related Articles


New Straits Times
Calling for ethical and responsible use of AI
LETTERS: In an era where artificial intelligence (AI) is rapidly shaping every facet of human life, it is critical that we ensure this powerful technology is developed and deployed with a human-centric approach. AI holds the potential to solve some of humanity's most pressing challenges, from healthcare innovations to environmental sustainability, but it must always serve the greater good.

To humanise AI is to embed ethical considerations, transparency, and empathy into the heart of its design. AI is not just a tool; it reflects the values of those who create it. Therefore, AI development should prioritise fairness, accountability, and inclusivity. This means avoiding bias in decision-making systems, ensuring that AI enhances human potential rather than replacing it, and making its benefits accessible to all, not just a select few.

Governments, industries, and communities must work together to create a governance framework that fosters innovation while protecting privacy and rights. We must also emphasise the importance of educating our workforce and future generations to work alongside AI, harnessing its capabilities while maintaining our uniquely human traits of creativity, compassion, and critical thinking.

As AI continues to transform the way we live, work, and interact, it is becoming increasingly urgent to ensure that its development and use are grounded in responsibility, accountability, and integrity. The Alliance for a Safe Community calls for clear, forward-looking regulations and a comprehensive ethical framework to govern AI usage and safeguard the public interest.

AI technologies are rapidly being adopted across sectors, from healthcare and education to finance, law enforcement, and public services. While these advancements offer significant benefits, they also pose risks, including:

• Invasion of privacy and misuse of personal data;
• Algorithmic bias leading to discrimination or injustice;
• Job displacement and economic inequality;
• Deepfakes and misinformation.

Without proper regulation, AI could exacerbate existing societal challenges and even introduce new threats. There must be checks and balances to ensure that AI serves humanity and does not compromise safety, security, or fundamental rights.

We propose the following elements as part of a robust regulatory framework:

1. AI Accountability Laws – Define legal responsibility for harm caused by AI systems, especially in high-risk applications.
2. Transparency and Explainability – Mandate that AI decisions affecting individuals (e.g., in hiring, credit scoring, or medical diagnoses) must be explainable and transparent.
3. Data Protection and Privacy Standards – Strengthen data governance frameworks to prevent unauthorised access, misuse, or exploitation of personal data by AI systems.
4. Risk Assessment and Certification – Require pre-deployment risk assessments and certification processes for high-impact AI tools.
5. Public Oversight Bodies – Establish independent agencies to oversee compliance, conduct audits, and respond to grievances involving AI.

Technology alone cannot determine what is right or just. We must embed ethical principles into every stage of AI development and deployment. A Code of Ethics should include:

• Human-Centric Design – AI must prioritise human dignity, autonomy, and well-being.
• Non-Discrimination and Fairness – AI systems must not reinforce or amplify social, racial, gender, or economic bias.
• Integrity and Honesty – Developers and users must avoid deceptive practices and be truthful about AI capabilities and limitations.
• Environmental Responsibility – Developers should consider the energy and environmental impact of AI technologies.
• Collaboration and Inclusivity – The development of AI standards must include voices from all segments of society, especially marginalised communities.

AI is one of the most powerful tools of our time. Like any powerful tool, it must be handled with care, guided by laws, and shaped by ethical values. We urge policymakers, tech leaders, civil society, and global institutions to come together to build a framework that ensures AI is safe, inclusive, and used in the best interest of humanity.

The future of AI should not be one where technology dictates the terms of our humanity. Instead, we must chart a course where AI amplifies our best qualities, helping us to live more fulfilling lives, build fairer societies, and safeguard the well-being of future generations. Only by humanising AI can we ensure that its promise is realised in a way that serves all of mankind.


The Star
Lawyers face sanctions for citing fake cases with AI, warns UK judge
LONDON (Reuters) - Lawyers who use artificial intelligence to cite non-existent cases can be held in contempt of court or even face criminal charges, London's High Court warned on Friday, in the latest example of generative AI leading lawyers astray.

A senior judge lambasted lawyers in two cases who apparently used AI tools when preparing written arguments, which referred to fake case law, and called on regulators and industry leaders to ensure lawyers know their ethical obligations.

"There are serious implications for the administration of justice and public confidence in the justice system if artificial intelligence is misused," Judge Victoria Sharp said in a written ruling.

"In those circumstances, practical and effective measures must now be taken by those within the legal profession with individual leadership responsibilities ... and by those with the responsibility for regulating the provision of legal services."

The ruling comes after lawyers around the world have been forced to explain themselves for relying on false authorities, since ChatGPT and other generative AI tools became widely available more than two years ago.

Sharp warned in her ruling that lawyers who refer to non-existent cases will be in breach of their duty not to mislead the court, which could also amount to contempt of court. She added that "in the most egregious cases, deliberately placing false material before the court with the intention of interfering with the administration of justice amounts to the common law criminal offence of perverting the course of justice".

Sharp noted that legal regulators and the judiciary had issued guidance about the use of AI by lawyers, but said that "guidance on its own is insufficient to address the misuse of artificial intelligence".

(Reporting by Sam Tobin; Editing by Sachin Ravikumar)


The Star
Broadcom shares drop as revenue forecast fails to impress
(Reuters) - Broadcom shares fell nearly 4% in premarket trading on Friday, after the company's third-quarter revenue forecast failed to impress investors who have been extremely bullish on chip stocks amid an artificial intelligence boom.

The Palo Alto, California-based company, which supplies semiconductors to Apple and Samsung, provides advanced networking gear that allows vast amounts of data to travel across AI data centers, making its chips crucial for the development of generative AI technology.

Broadcom forecast third-quarter revenue of around $15.80 billion, compared with analysts' average estimate of $15.71 billion, according to data compiled by LSEG.

"High expectations drove a bit of downside," Bernstein analyst Stacy Rasgon said in a note.

Broadcom also helps design custom AI processors for large cloud providers, which compete against Nvidia's pricey off-the-shelf chips. Global chipmakers, including Nvidia, have been vulnerable to U.S. President Donald Trump's shifting trade policy and export curbs as Washington attempts to limit Beijing's access to advanced U.S. technology.

"AVGO is ramping two additional customers, but they are still small. So the processor business will grow this year, but at a measured rate," Morgan Stanley said.

Last week, rival Marvell Technology forecast second-quarter revenue above Wall Street estimates, betting on strong demand for its custom chips powering AI workloads in data centers.

Broadcom's valuation had crossed $1 trillion for the first time in December after it forecast massive expansion in demand for chips that power AI. Its shares have risen about 12% so far this year. It has a 12-month forward price-to-earnings ratio of 35.36, compared with Marvell's 20.63, according to data compiled by LSEG.

(Reporting by Twesha Dikshit in Bengaluru; Editing by Shilpi Majumdar)