
Malaysia faces sharp rise in AI-driven cyber threats, Fortinet warns
54 per cent of organisations experienced a twofold increase in AI-enabled threats while 24 per cent saw a threefold surge in the past year.
02 Jun 2025 03:02pm
Malaysia is facing a sharp rise in artificial intelligence (AI)-driven cyber threats, with nearly 50 per cent of organisations reporting incidents involving AI-powered attacks. Photo for illustrative purposes only - Canva
KUALA LUMPUR - Malaysia is facing a sharp rise in artificial intelligence (AI)-driven cyber threats, with nearly 50 per cent of organisations reporting incidents involving AI-powered attacks, according to a survey commissioned by global cybersecurity firm Fortinet.
The survey, conducted by the International Data Corporation (IDC) across 11 Asia-Pacific (APAC) countries, found that in Malaysia, 54 per cent of organisations experienced a twofold increase in AI-enabled threats while 24 per cent saw a threefold surge in the past year.
Fortinet Malaysia Country Manager Kevin Wong said cybercriminals are increasingly leveraging AI to develop and launch attacks more quickly and effectively, moving beyond traditional methods of manual coding.
"To give a sense of scale, there are up to 36,000 scam attempts occurring every second through automation, with 97 billion exploitation attempts recorded in the first half of last year alone, and AI is amplifying this trend by two to three times.
"In Malaysia, the surge in AI-driven threats is evident, with over 100 billion records stolen and traded on the dark web, according to IDC," he told a media briefing on Thursday.
He noted that credential theft has spiked by more than 500 per cent within a year, with AI-powered phishing attacks becoming increasingly targeted and difficult to detect. "Traditional tools simply can't keep up, as fast-paced, AI-powered threats demand an equally fast and intelligent response, and that's where AI also plays a role on the defensive side," he said.
Wong also noted that cyber risk has evolved from being an occasional concern to a constant and ongoing challenge.
"With the rise of AI-powered threats, the nature of cyber risk itself has changed from something we respond to after it happens to something we must act on before it occurs. That is why we partnered with IDC to better understand how security leaders across Asia are navigating this evolving threat landscape, the challenges they face, and the critical gaps in organisational readiness," he said.
Meanwhile, Fortinet's vice president of marketing and communications for Asia, Australia and New Zealand (ANZ), Rashish Pandey, said cybersecurity investment in Malaysia remains disproportionately low, with an average of only 15 per cent of IT budgets allocated to cybersecurity, representing just over 1 per cent of total revenue.
"The reason cybersecurity investment remains low is that we still struggle to clearly articulate its business impact to executive teams and boards of directors. Too often, the conversation is framed in technical terms, whereas boards are looking for a discussion centred on business risk, impact, and assessment, which is why we are helping our customers reframe cybersecurity as a strategic business issue rather than just a technical one," he said.
The survey also found that only 19 per cent of Malaysian organisations are highly confident in their ability to defend against AI-powered attacks, with 27 per cent stating that such threats are outpacing their detection capabilities and 20 per cent admitting they are unable to detect them at all.
Ransomware remains the most frequently encountered threat, reported by 64 per cent of Malaysian respondents. Other common risks include software supply chain attacks (54 per cent), insider threats (52 per cent), cloud vulnerabilities (46 per cent), and phishing (40 per cent).
The survey, conducted between February and April 2025, involved 550 IT and cybersecurity leaders from across the APAC region to assess organisational readiness in the face of escalating AI-enabled threats. - BERNAMA
Related Articles


The Star
Human coders are still better than AI, says this expert developer
Your team members may be tempted to rely on AI to help them write code for your company, either for cost or speed rationales or because they lack particular expertise. But you should be wary. — Pixabay

In the complex 'will AI steal my job?' debate, software developers are among the workers most immediately at risk from powerful AI tools. The tech sector certainly appears keen to reduce the number of humans working those jobs. Bold statements from the likes of Meta's Mark Zuckerberg and Anthropic's Dario Amodei support this, since both say AI is already able to take over some code-writing roles. But a new blog post from a prominent coding expert strongly disputes their arguments and supports some AI critics' position that AI really can't code.

Salvatore Sanfilippo, an Italian developer who created Redis (an online database that calls itself the 'world's fastest data platform' and is beloved by coders building real-time apps), published a blog post this week provocatively titled 'Human coders are still better than LLMs.' His title refers to the large language model systems that power AI chatbots like OpenAI's ChatGPT and Anthropic's Claude.

Sanfilippo said he's 'not anti-AI' and actually does 'use LLMs routinely,' and he described some specific interactions he'd had with Google's Gemini AI about writing code. These left him convinced that AIs are 'incredibly behind human intelligence,' so he wanted to make a point about it. The billions invested in the technology and the potential upending of the workforce mean it's 'impossible to have balanced conversations' on the matter, he wrote.

Sanfilippo blogged that he was trying to 'fix a complicated bug' in Redis's systems. He made an attempt himself, and then asked Gemini, 'hey, what we can do here? Is there a super fast way' to implement his fix?

Then, using detailed examples of the kind of software he was working with and the problem he was trying to fix, he recounted the back-and-forth dialogue he had with Gemini as he tried to coax it toward an acceptable answer. After numerous interactions in which the AI couldn't improve on his idea or really help much, he said he asked Gemini to analyse his last idea, and it was finally happy.

We can ignore the detailed code itself and just concentrate on Sanfilippo's final paragraph. 'All this to say: I just finished the analysis and stopped to write this blog post, I'm not sure if I'm going to use this system (but likely yes), but, the creativity of humans still have an edge, we are capable of really thinking out of the box, envisioning strange and imprecise solutions that can work better than others,' he wrote. 'This is something that is extremely hard for LLMs.' Gemini was useful, he admitted, simply to 'verify' his bug-fix ideas, but it couldn't outperform him and actually solve the problem itself.

This stance from an expert coder runs counter to some other pro-AI statements. Zuckerberg has said he plans to fire mid-level coders from Meta to save money, employing AI instead. In March, Amodei hit the headlines when he boldly predicted that all code would be written by AIs inside a year. Meanwhile, on the flip side, a February report from Microsoft warned that young coders coming out of college were already so reliant on AI that they failed to understand the hard computer science behind the systems they were working on, something that may trip them up if they encounter a complex issue like Sanfilippo's bug.

Commenters on a piece discussing Sanfilippo's blog post on coding news site Hacker News broadly agreed with his argument. One commenter likened the issue to a popular meme about social media: 'You know that saying that the best way to get an answer online is to post a wrong answer? That's what LLMs do for me.'

Another noted that AIs were useful because even though they give pretty terrible coding advice, 'It still saves me time, because even 50 percent accuracy is still half that I don't have to write myself.' Lastly, another coder pointed out a very human benefit of using AI: 'I have ADHD and starting is the hardest part for me. With an LLM it gets me from 0 to 20% (or more) and I can nail it for the rest. It's way less stressful for me to start now.'

Why should you care about this? At first glance, it looks like a very inside-baseball discussion about specific coding issues. You should care because your team members may be tempted to rely on AI to help them write code for your company, either for cost or speed rationales or because they lack particular expertise. But you should be wary. AIs are known to be unreliable, and Sanfilippo's argument, supported by other coders' comments, points out that AI really isn't capable of certain key coding tasks. For now, at least, coders' jobs may be safe… and if your team does use AI to code, they should double and triple check the AI's advice before implementing it in your IT system. – Inc./Tribune News Service


Malay Mail
South Korea's president Lee seeks quick tariff resolution in first call with Trump
SEOUL, June 7 — US President Donald Trump and South Korea's new president Lee Jae-myung agreed to work toward a swift tariff deal in their first phone call since Lee was elected this week, Lee's office said yesterday.

Trump has imposed tariffs on South Korea, a long-time ally with which the US has a bilateral free trade deal, and pressed it to pay more for the 28,500 US troops stationed there. Separately, Trump allies have aired concerns about Lee's more conciliatory stance towards China, Washington's main geopolitical rival.

Lee, a liberal, was elected on June 3 after former conservative leader Yoon Suk Yeol was impeached and ousted. His term began on Wednesday. The future of South Korea's export-oriented economy may hinge on what kind of deal Lee can strike with Trump, with all of his country's key sectors from chips to autos and shipbuilding heavily exposed to global trade.

'The two presidents agreed to make an effort to reach a satisfactory agreement on tariff consultations as soon as possible that both countries can be satisfied with,' Lee's office said in a statement. 'To this end, they decided to encourage working-level negotiations to yield tangible results.'

Trump invited Lee to a summit in the US and they plan to meet soon, according to a White House official. Analysts say the first opportunity for the two to meet could be at a G7 summit in Canada in mid-June.

Lee's office said the two leaders also discussed the assassination attempts they both experienced last year as well as their enthusiasm for golf. Lee underwent surgery after he was stabbed in the neck by a man in January last year, while Trump was wounded in the ear by a bullet fired by a would-be assassin in July.

South Korea, a major US ally and one of the first countries after Japan to engage with Washington on trade talks, agreed in late April to craft a 'July package' scrapping levies before the 90-day pause on Trump's reciprocal tariffs is lifted, but progress was disrupted by the change of governments in Seoul. Lee said on the eve of the election that 'the most pressing matter is trade negotiations with the United States.' His camp has said, however, that it intends to seek more time to negotiate on trade with Trump.

While reiterating the importance of the US-South Korea alliance, Lee has also expressed more conciliatory plans for ties with China and North Korea, singling out the importance of China as a major trading partner while indicating a reluctance to take a firm stance on security tensions in the Taiwan Strait. Political analysts say that while Trump and Lee may share a desire to re-engage with North Korea, Lee's stance on China could cause friction with the US.

A White House official said this week that South Korea's election was fair, but expressed concern about Chinese interference, in what analysts said may have been a cautionary message to Lee. Speaking in Singapore last week, US Defense Secretary Pete Hegseth said many countries were tempted by the idea of seeking economic cooperation with China and defense cooperation with the United States, and warned that such entanglement complicated defense cooperation. — Reuters


New Straits Times
Calling for ethical and responsible use of AI
LETTERS: In an era where artificial intelligence (AI) is rapidly shaping every facet of human life, it is critical that we ensure this powerful technology is developed and deployed with a human-centric approach. AI holds the potential to solve some of humanity's most pressing challenges, from healthcare innovations to environmental sustainability, but it must always serve the greater good.

To humanise AI is to embed ethical considerations, transparency, and empathy into the heart of its design. AI is not just a tool; it reflects the values of those who create it. Therefore, AI development should prioritise fairness, accountability, and inclusivity. This means avoiding bias in decision-making systems, ensuring that AI enhances human potential rather than replacing it, and making its benefits accessible to all, not just a select few.

Governments, industries, and communities must work together to create a governance framework that fosters innovation while protecting privacy and rights. We must also emphasise the importance of educating our workforce and future generations to work alongside AI, harnessing its capabilities while maintaining our uniquely human traits of creativity, compassion, and critical thinking.

As AI continues to transform the way we live, work, and interact, it is becoming increasingly urgent to ensure that its development and use are grounded in responsibility, accountability, and integrity. The Alliance for a Safe Community calls for clear, forward-looking regulations and a comprehensive ethical framework to govern AI usage and safeguard the public interest.

AI technologies are rapidly being adopted across sectors — from healthcare and education to finance, law enforcement, and public services. While these advancements offer significant benefits, they also pose risks, including:

• Invasion of privacy and misuse of personal data;
• Algorithmic bias leading to discrimination or injustice;
• Job displacement and economic inequality;
• Deepfakes and misinformation.

Without proper regulation, AI could exacerbate existing societal challenges and even introduce new threats. There must be checks and balances to ensure that AI serves humanity and does not compromise safety, security, or fundamental rights.

We propose the following elements as part of a robust regulatory framework:

1. AI Accountability Laws – Define legal responsibility for harm caused by AI systems, especially in high-risk applications.
2. Transparency and Explainability – Mandate that AI decisions affecting individuals (e.g., in hiring, credit scoring, or medical diagnoses) must be explainable and transparent.
3. Data Protection and Privacy Standards – Strengthen data governance frameworks to prevent unauthorised access, misuse, or exploitation of personal data by AI systems.
4. Risk Assessment and Certification – Require pre-deployment risk assessments and certification processes for high-impact AI tools.
5. Public Oversight Bodies – Establish independent agencies to oversee compliance, conduct audits, and respond to grievances involving AI.

Technology alone cannot determine what is right or just. We must embed ethical principles into every stage of AI development and deployment. A Code of Ethics should include:

• Human-Centric Design – AI must prioritise human dignity, autonomy, and well-being.
• Non-Discrimination and Fairness – AI systems must not reinforce or amplify social, racial, gender, or economic bias.
• Integrity and Honesty – Developers and users must avoid deceptive practices and be truthful about AI capabilities and limitations.
• Environmental Responsibility – Developers should consider the energy and environmental impact of AI technologies.
• Collaboration and Inclusivity – The development of AI standards must include voices from all segments of society, especially marginalised communities.

AI is one of the most powerful tools of our time. Like any powerful tool, it must be handled with care, guided by laws, and shaped by ethical values. We urge policymakers, tech leaders, civil society, and global institutions to come together to build a framework that ensures AI is safe, inclusive, and used in the best interest of humanity.

The future of AI should not be one where technology dictates the terms of our humanity. Instead, we must chart a course where AI amplifies our best qualities, helping us to live more fulfilling lives, build fairer societies, and safeguard the well-being of future generations. Only by humanising AI can we ensure that its promise is realised in a way that serves all of mankind.