Techday NZ

10-07-2025


Deepfake & AI crime surge poses major threat to business trust

Trend Micro has released a new report outlining the increasing adoption of generative AI and deepfake technologies by cybercriminals, highlighting growing risks in business fraud, identity theft, and online extortion schemes.

Deepfakes move to the mainstream

The report indicates that deepfake-enabled cybercrime has grown in both scale and maturity, with generative AI tools now being used for a range of malicious purposes. Tools originally intended to assist content creators are now leveraged by cybercriminals to impersonate executives, circumvent established financial controls, and compromise internal human resources processes.

According to the research, the accessibility and affordability of AI-generated media have significantly lowered the barrier to entry for cybercriminals. Off-the-shelf platforms producing video, audio, and image deepfakes are readily obtainable and require little technical expertise. As a result, increasingly convincing synthetic media is challenging digital trust and the reliability of traditional identity verification systems.

Andrew Philp, ANZ Field CISO at Trend Micro, said, "AI-generated media is not just a future risk, it's a real business threat. We're seeing executives impersonated, hiring processes compromised, and financial safeguards bypassed with alarming ease. This research is a wake-up call - if businesses are not proactively preparing for the deepfake era, they're already behind. In a world where seeing is no longer believing, digital trust must be rebuilt from the ground up."

The report confirms that attackers can now easily access guides, toolkits, and off-the-shelf solutions built for content creation but repurposed for cybercriminal activity. These resources are actively traded within criminal communities, offering step-by-step techniques for bypassing onboarding checks and plug-and-play deepfake solutions that put sophisticated attacks within reach of less experienced perpetrators.

Business risks and real-world impacts

Trend Micro's report highlights several areas where deepfake technologies are already being applied. In the financial sector, there has been a rise in attempts to use AI-generated media to bypass Know Your Customer (KYC) requirements, enabling anonymous money laundering through falsified identification documents and credentials.

Businesses are also facing CEO fraud, where deepfake audio or video is used to impersonate senior executives. These attacks can take place in real time during meetings or conference calls, making detection increasingly difficult for unsuspecting teams. Such incidents expose companies to financial loss as well as reputational damage, since sensitive internal information may be compromised.

The recruitment sector is another area of concern. The report details cases where job applicants use deepfake video or audio to impersonate real candidates and pass interviews, gaining unauthorised access to confidential systems and data from within organisations.

Growing and accessible cybercrime ecosystem

The research describes a flourishing underground ecosystem built around these AI-powered tools. Tutorials and toolkits for crafting deepfakes and running scam operations are widely available, reducing the need for technical proficiency. Face-swapping tools and voice cloning are offered as services, making professional-grade results accessible with minimal investment.

Trend Micro stresses that these trends create an urgent need for businesses to adopt proactive measures, including updating authentication processes, educating employees about the risks of social engineering, and incorporating detection mechanisms for synthetic media into their cybersecurity strategies.

Mitigation and staff awareness

To address the evolving threat landscape, the report advocates a comprehensive approach focused on minimising risk and protecting internal processes. Specific recommendations include regular staff training to recognise social engineering tactics, a review of existing security authentication protocols, and investment in technology capable of detecting deepfake media.

Trend Micro's study serves as a reminder that, as generative AI technology continues to advance and become more accessible, vigilance and adaptability will be critical to maintaining organisational security and digital trust.
