
New Straits Times

AI law needed to counter digital misuse

KUALA LUMPUR: A law on artificial intelligence (AI) is necessary to counter digital misuse, say cybersecurity experts.

They said such laws, however, should ensure that humans remain the primary decision-makers, with the ability to step in, override decisions, or take control if an AI system makes a mistake.

Universiti Malaya Department of Computer System and Technology professor Prof Dr Ainuddin Wahid Abdul Wahab said strong AI laws are also needed given the speed at which AI is developing and its growing integration into daily life.

He said that without proper legislation, there is a significant risk of digital mishaps, abuse, and harm, including the proliferation of fake content such as images, videos, and documents, as well as cybersecurity threats.

"AI helps a lot in daily tasks, but it can also be used by malicious actors to launch highly advanced cyberattacks, making traditional cybersecurity measures insufficient.

"A compromised AI system itself could pose a major national security risk.

"Another issue is how AI is trained. There is a risk of biased data being used.

"For example, if the training sample is not sufficiently balanced, an AI system used in hiring might unintentionally discriminate against certain demographic groups.

"Similarly, an AI used in the judicial system might lead to harsher sentences for certain communities," he said when contacted.

Earlier today, Minister in the Prime Minister's Department (Law and Institutional Reform) Datuk Seri Azalina Othman Said said Malaysia needs an artificial intelligence law in light of emerging threats. She said she has written to Digital Minister Gobind Singh Deo on the need to look into AI legislation.

On the drafting of such a bill, Ainuddin proposed the inclusion of a dedicated body to monitor AI, comprising experts in AI and law, or agencies such as the Malaysian Communications and Multimedia Commission (MCMC) and CyberSecurity Malaysia, to ensure compliance, investigate issues, and impose penalties on non-compliant companies.

"Humans must remain the main actors. For critical AI systems, there should always be a way for a human to step in, override decisions, or take control if the AI makes a mistake or if human judgment is necessary," he said.

He said there should also be clear accountability in the event of an incident.

"Who is responsible? Is it the company that created the AI tool, the company that uses it, or the end-user?" he said.

Meanwhile, Universiti Sains Malaysia Cybersecurity Research Centre director Prof Dr Selvakumar Manickam said proactive legislation is essential to manage risks, prevent misuse, and build public trust in emerging technologies.

He said that without a dedicated legal framework, Malaysia risks serious challenges from AI-driven threats such as deepfakes and algorithmic bias, which could leave citizens vulnerable and blur lines of accountability.

"Legislation must mandate that security and privacy are engineered into AI systems and the data processes that build them, starting from the design phase. These systems should only be deployed after meeting critical requirements for safety and transparency.

"The law must strongly require human oversight as a non-negotiable component of any high-risk system, ensuring final decisions remain with humans and establishing clear lines of accountability enforced by a properly empowered regulator," he said.
