
Ethical fault line: Integrity in the age of AI
When artificial intelligence (AI) arrived, many affirmed it as the next great leap in human innovation, on a par with the internet's emergence in the 1990s or television's debut in the 1950s. Yet, as emphasised at the recent Awani AI Roundtable, the question is no longer whether AI will transform the media and journalism landscape. The real issue is whether we can control its power without eroding the skills and values that keep society's heartbeat steady.
This question arises because AI has already begun reshaping journalism's possibilities. From producing multi-platform content to streamlining newsroom operations, it offers near-limitless capacity to strengthen the profession.
This "sky's the limit" sentiment resonates globally: AI can analyse trends, translate languages in seconds, edit videos and draft articles in moments. Used wisely, it can elevate journalistic thinking, freeing reporters to focus on deep investigations, data analysis and creative storytelling.
Yet, this potential comes with a cautionary note: technology is never a neutral force, for it carries the biases, values and blind spots of its creators and controllers.
That is why we must be careful. Technology has long been cast as society's saviour, particularly for communities marginalised by geography, economics or politics.
AI must remain a tool, not a belief system. Like any tool, it can be used for ethical or harmful purposes. In the wrong hands, it risks reinforcing power imbalances, spreading disinformation and entrenching media monopolies, challenges that demand critical human oversight.
Oversight matters because the greatest threat may not be spectacular scandals like fake news or deepfakes, but the quiet surrender of our ability to think critically.
When AI delivers answers faster than human reasoning, it tempts us to bypass questioning, doubting and innovating.
This vulnerability is amplified in education systems that reward compliance over curiosity. Without reform, AI could fill this intellectual space, producing a generation proficient at writing prompts but unable to evaluate information thoroughly.
Such risks deepen when AI's development and control rest in the hands of a few global tech giants. This concentration of power risks creating a new form of media colonialism, in which the algorithms shaping our news flows are designed to serve commercial or political interests far beyond our borders.
For Malaysia, this is more than an abstract concern. Over-reliance on foreign AI tools, without building local Large Language Models (LLMs), could erode our narrative sovereignty.
Journalism must not remain a passive consumer of imported technology; it must become a co-creator, ensuring AI reflects local contexts, languages and values. Without this, our stories risk being reframed to suit someone else's agenda.
This concentration of technological power also reshapes the information supply chain, from data collection to content distribution, in ways that are increasingly automated and centralised.
Smaller players are pushed aside, while the speed of AI-generated disinformation outpaces human fact-checking. If convincing fake videos can be produced in minutes, we will need verification systems of equal sophistication and transparency so that truth does not perpetually trail behind lies.
These risks intersect with another shift: the changing relationship between younger audiences and news.
One of the most striking points from the Awani AI Roundtable was that many in Generation Z, despite being digitally savvy, actively practise news avoidance, turning away from traditional reporting altogether.
This detachment leaves them more vulnerable to echo chambers of trivial content and viral falsehoods. Yet, their deep familiarity with digital tools also positions them to reinvent journalism for the platforms they inhabit.
To tap into this potential, we must draw young people into journalism not as content creators chasing virality, but as storytellers serving the public interest.
This requires education reforms that integrate AI literacy, ethical reasoning and investigative skills into journalism curricula. AI can accelerate their work, but it cannot replace the human labour of uncovering truths and connecting them to communities.
Such preparation is important because the ethical fault line in journalism has never been about speed or efficiency; it is about integrity. Without firm ethical guidelines, AI could normalise corrupt, irrelevant or dangerous practices.
The risk of ethical outsourcing emerges when we assume tech companies have embedded fairness and accountability into their systems. They have not. AI mirrors human prejudices and, at times, amplifies them. Only attentive human oversight can close these gaps.
This raises an urgent question: how should AI be regulated? Should the rules be set globally, nationally or through a blend of both? Global frameworks offer consistency but often fail to account for local realities.
National regulations can be more responsive but are vulnerable to political manipulation. A hybrid model, blending international norms with local enforcement and shared accountability between public and private sectors, may be the most practical path.
For journalism, regulation must go beyond technical standards. It requires clear rules on transparency, such as labelling AI-generated content; on consent, including whose data is used to train AI; and on correction, ensuring harmful outputs can be swiftly removed.
This leads to a central principle: AI should not replace journalists but work alongside them. Imagine AI-driven fact-checking tools flagging inaccuracies in real time, or algorithms suggesting diverse sources to prevent one-dimensional reporting.
These tools can strengthen journalism, but only when paired with cultural practices that prioritise human empathy, curiosity and trust; in this way, AI complements rather than replaces journalism.
The most effective partnerships occur when AI supports, rather than seizes, the creative process. Newsrooms must be proactive, with clear guidelines covering transparency, accountability and regular bias audits.
Journalists should be trained not only in AI's technical uses but also in its ethical and social implications. Crucially, AI can be used to better serve marginalised audiences, delivering tailored content without sacrificing integrity or succumbing to sensationalism.
Ultimately, AI will reshape the media industry, but the deeper question is whether it will reshape us. Generation Z will inherit leadership roles, and their approach to AI will determine journalism's trajectory for decades. They must be equipped to see the forest of social consequences, not just the trees of technological proficiency.
In our rush to embrace AI, we must remember what makes journalism irreplaceable: the human ability to ask hard questions, challenge power and connect personal stories to the public good. AI can enhance this mission, but it cannot lead it.
As the Awani AI Roundtable concluded, the future of journalism is not an AI-versus-human contest, but a fusion of humans using AI wisely. The sky may indeed be the limit, but our anchor must remain grounded in ethics, community and reality.
