
Latest news on #NationalGuidelinesonAIGovernanceandEthics

Tweaks needed in existing laws to regulate AI, says minister

The Star, 03-05-2025

IPOH: Some existing laws may need to be amended to keep pace with advanced technology and the abuse of artificial intelligence (AI), says Chang Lih Kang.

The Science, Technology and Innovation Minister said that while these laws can still be used to address the abuse and misuse of technology and AI to commit cybercrimes, some tweaks are needed.

"Currently, we still have existing laws that can be used, including the Malaysia Communications and Multimedia Act and the Penal Code.

"These might need some amendment here and there so that we can keep pace... technology is too advanced and fast, so we need to try and catch up," he said at a press conference after attending the Perak Ignite Entrepreneur Summit 2025 at SMJK Yuk Choy on Saturday (May 3).

Chang said the country is still a long way behind in enacting a law to regulate AI. He said the National Guidelines on AI Governance and Ethics (AIGE) were only launched last year, and it would take some time to enact a law for AI.

"We've consulted with various industry players, and they agree that AIGE is something that should be enacted, but it will take time.

"This is the same in other parts of the world, as many do not have such laws, except in the European Union," he said, adding that the AIGE would be used to regulate the AI industry.

"This (AI) is still a new area whereby we are unsure whether some things are right or wrong, and we need to explore it together.

"There is no clear time frame yet (for an AI law), but that is our eventual goal, for a law that can be enforced," he added.

Creativity's survival in AI era hinges on having right guardrails

New Straits Times (Entertainment), 26-04-2025

THE recent Hari Raya celebrations saw a new trend on social media. Alongside family photos and festive greetings were artificial intelligence-generated portraits, showing families and loved ones transformed into Studio Ghibli film characters.

The appeal is as obvious as it is understandable. These pieces of "art" are novel, accessible and beautiful, created in seconds with just a few typed words and an uploaded photo. But what does this mean for creativity, ownership and consent? What about respect for artists, or the ethical use of AI in content creation?

AI tools are trained on vast datasets, often scraped from the work of human creators without their knowledge or consent. While AI may offer instant gratification, it skips the emotional journey of creation: the late nights, the revisions, and the inspiration drawn from the artist's own life or loved ones. In this regard, AI doesn't create art; it performs a version of it. A simulation of creativity, not the soul of it.

We used to make fan art. Now, we are the art. The line between imagination and imitation is blurring fast because of AI.

Still, AI, when developed and used responsibly, can be a powerful force for good. It has the potential to equip creators with new tools, offer audiences fresh perspectives and expand how knowledge is created and shared. But unlocking that potential means putting ethics on an equal footing with innovation. Responsible AI isn't just about smart engineering; it's about building fairness, transparency and accountability into the DNA of these tools from the start.

Transparency is likewise critical. Audiences deserve to know whether the content they are engaging with is human-created or machine-generated. Transparency isn't just about labelling; it's about trust. When people can't tell the source of what they're consuming, the lines of authenticity begin to blur. Clear disclosures give audiences the context to engage critically, not passively.

To move forward responsibly, we need accountability frameworks that span the AI lifecycle, from development and deployment to use and impact.

Encouragingly, global and local initiatives are emerging to address ethical concerns about AI and its use. International organisations like Unesco, the Partnership on AI and the AI Now Institute have outlined principles promoting transparency, fairness and human oversight.

Closer to home, Malaysia has begun paving its own path. The Science, Technology and Innovation Ministry has issued the National Guidelines on AI Governance and Ethics, promoting responsible development practices and reinforcing the importance of ethical guardrails.

At the same time, the Content Forum is updating the Content Code, a cornerstone of Malaysia's industry-led self-regulation, first introduced in 2004 and last revised in 2022. This revision aims to ensure the code remains fit for purpose in a digital landscape increasingly shaped by generative AI. It is a timely effort to ensure ethical standards evolve alongside technological capabilities, not behind them.

We are inviting stakeholders — industry players, creators and the public — to shape the next evolution of the code. Everyone is encouraged to share their insights, concerns and ideas via the feedback drive portal, which will remain open till May 31. Every submission will be reviewed, and a public consultation session will follow to ensure the updated code reflects shared values and real-world needs.

As the code evolves, we must look beyond today's concerns and prepare for tomorrow's content realities, where trust is currency and integrity the compass.

We are witnessing exciting times. Generative AI is reshaping how we create, connect and communicate. It offers powerful new tools that can amplify human potential. But like any powerful tool, its true impact depends on how we choose to use it. The future of creativity isn't just about what we can do with AI — it's about what we should do with it.

We must not confuse imitation with inspiration, or speed with substance. With the right guardrails, we can make room for technology without pushing human expression to the margins. After all, the most meaningful stories are still the ones only people can tell.

Spotlight on dangers of over-reliance on AI

The Sun, 26-04-2025

PETALING JAYA: A string of slip-ups in images of the Jalur Gemilang involving a national paper, a global expo and a ministry report has shone a spotlight on the risks of unchecked AI use.

Universiti Malaysia Kelantan Institute for Artificial Intelligence and Big Data (Aibig) director Dr Muhammad Akmal Remli said such errors point to flaws in how global AI models are trained and deployed, especially when it comes to culturally specific content.

'Inaccuracies like a wrongly rendered Jalur Gemilang happen because the AI model may not have had sufficient exposure to correct representations of Malaysian symbols during training,' he said, adding that while some AI tools perform impressively when generating generic content, they often falter with highly specific cultural or national elements such as flags.

'AI can generate a wide range of content such as text, voice, images and videos based on prompts. But what many do not realise is that when AI is asked to create an image of a classroom with a Malaysian flag, the outcome depends on how the AI interprets those prompts through numerical tokens and what it has learnt from its training data.'

Muhammad Akmal said this reflects a broader issue: many generative AI models are built and trained within global frameworks that often under-represent countries like Malaysia.

'Global AI models frequently lack sufficient regional and cultural training data. There is an opportunity here for Malaysia to develop its own AI systems.'

He said that with government backing, local startups and tech companies could step up and train models using Malaysian data involving cultural symbols, traditions and languages.

Muhammad Akmal also said using AI in government and media settings, especially for public content, requires greater caution. 'Incidents like these are a wake-up call. We must use AI responsibly, not just chase trends.'

He emphasised the need for safeguards at several levels, including determining whether AI-generated content is even necessary, and conducting rigorous reviews before publication. 'Human oversight is not optional. It is essential.'

While some have called for new laws to regulate AI, Muhammad Akmal said Malaysia already has a framework in place. The National Guidelines on AI Governance and Ethics, issued by the Science, Technology and Innovation Ministry last year, aim to encourage responsible AI use across all sectors.

'Instead of piling on new rules, the focus should be on tightening implementation through proper training and awareness among government and media professionals,' he said, adding that AI should complement, not replace, human decision-making.

'AI is a tool. It can help spark ideas or automate tasks, but humans must still lead. Particularly with editorial or official content, relying solely on AI without verifying the output could result in serious slip-ups.

'Experts can spot errors AI might overlook. This collaboration delivers efficiency without compromising accuracy, which is critical when dealing with culturally or nationally sensitive content.'

He also urged developers to improve the quality and diversity of training data. 'Biases or inaccuracies in datasets will inevitably surface in AI output. Developers must aim for high-quality, representative data, especially in culturally sensitive areas.'

Despite the recent flag-related blunders, Muhammad Akmal believes public confidence in AI remains intact. 'I don't think trust in AI has been lost. Most people will likely see this as a human oversight. But it's a timely reminder that working with AI requires extra care, especially when national identity is involved.'

To promote responsible AI use, he called for proactive public engagement. 'Institutions should hold dialogues, training sessions and awareness campaigns for the public on responsible AI practices.

'At Aibig, we run regular training programmes to equip participants with best practices in AI safety and ethics.'

Muhammad Akmal said that as Malaysia advances into the digital era, AI can be a powerful ally, but only when guided by human judgement, local insight and ethical responsibility.
