
CEO caught on video admitting fraud - but it's a deepfake

The Star




William Petherbridge | Published 9 hours ago

Artificial intelligence has now made it possible to wake up to a video of your CEO seemingly admitting to fraud, or to receive an urgent audio message from your CFO authorising a large, unexpected transaction, without any of it being real. Deepfakes aren't limited to criminal use cases targeting individuals or governments – they represent a sophisticated and escalating threat to corporations globally, including in South Africa.

Disinformation using deepfake technology

Deepfake technology has become one of the most powerful tools fuelling disinformation. The rise of AI and machine learning embedded in commercially available tools, such as generative adversarial networks (GANs), has levelled the playing field and increased the sophistication of deepfake content. Cybercriminals, disgruntled insiders, competitors and even state-sponsored groups can leverage deepfakes for devastating attacks, ranging from financial fraud and network compromise to severe reputational damage.

The threat itself, however, is not fake; it is manifesting tangibly within South Africa. The South African Banking Risk Information Centre (SABRIC) has issued stark warnings about the rise in AI-driven fraud scams, explicitly including deepfakes and voice cloning used to impersonate bank officials or lure victims into fake investment schemes, sometimes even using fabricated endorsements from prominent local figures. With South Africa already identified by Interpol as a global cybercrime hotspot, with estimated annual losses in the billions of rands, the potential financial impact of sophisticated deepfake fraud targeting businesses is immense.

There are also implications for democracy as a whole. Accenture Africa recently highlighted how easily deepfakes could amplify misinformation and political unrest in a nation where false narratives can already spread rapidly online – a critical concern when it comes to elections.
Furthermore, the 'human firewall' – our employees – represents a significant area of vulnerability. Fortinet's 2024 Security Awareness and Training Global Research Report found that 46% of organisations now expect their employees to fall for more attacks in the future because bad actors are using AI. Phishing emails used to be easier to identify because they were poorly worded and riddled with spelling errors, yet they still led to successful breaches for decades. Now they are drastically harder to identify, as AI-generated emails and deepfake media have reached levels of realism that leave almost no one immune.

Several types of malicious actors are likely to target companies using deepfake technology. Cybercriminals who have stolen samples of a victim's email, along with their address book, may use GenAI to generate tailored content that matches the language, tone and topics of the victim's previous interactions to aid spear phishing – convincing the victim to take an action such as clicking on a malicious attachment. Other cybercriminals use deepfakes to impersonate customers, business partners or company executives to initiate and authorise fraudulent transactions. According to Deloitte's Center for Financial Services, GenAI-enabled fraud losses are growing at 32% year-over-year in the United States and could reach $40 billion by 2027.

Disgruntled current or former employees may also generate deepfakes to seek revenge or damage a company's reputation; by leveraging their inside knowledge, they can make the deepfakes appear especially credible. Another potential danger comes from business partners, competitors or unscrupulous market speculators looking to gain leverage in negotiations or to affect a company's stock price through bad publicity.

Combating the deepfake threat requires more than just technological solutions; it demands a comprehensive, multi-layered strategy encompassing technology, processes and people.
  • Advanced threat detection: Organisations must invest in security solutions capable of detecting AI-manipulated media. AI itself plays a crucial role, powering tools that can analyse content for the subtle giveaways often present in deepfakes.
  • Robust authentication and processes: Implementing strong multi-factor authentication (MFA) remains paramount. Businesses should also review and strengthen processes around sensitive actions such as financial transactions or data access requests, incorporating verification steps that cannot easily be spoofed by a deepfake voice or video call. A Zero Trust approach – verifying everything and assuming breach when in doubt – is essential.
  • Empowering the human firewall: Continuous education and awareness training are vital. Employees need to be equipped with the knowledge to recognise potential deepfake indicators and to understand the procedures for verifying communications, especially those involving sensitive instructions or financial implications.
  • Reputation management: Proactive reputation management and clear communication channels become even more critical. Being able to swiftly debunk a deepfake attack targeting the company or its leadership can mitigate significant damage.
  • Staying informed and advocating: Cybersecurity teams must stay abreast of evolving deepfake tactics. Collaboration and information sharing within industries, and engagement with bodies working to update South Africa's cyber laws (such as aspects of POPIA) to specifically address deepfake crimes, are important.

Preparing for the inevitable

Deepfakes are not a future problem; they are a clear and present danger to South African businesses. They target the very accuracy of the information we rely on as consumers, employees and investors. The question is no longer if a South African organisation will be targeted by a deepfake attack, but how prepared it will be when it happens.
Proactive investment in robust security measures, stringent processes, and comprehensive employee education is not just advisable – it's essential for survival in this new era of digital deception.

William Petherbridge, Systems Engineering Manager at Fortinet
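The strong multi-factor authentication recommended above commonly takes the form of time-based one-time passwords. As a minimal illustration (not Fortinet's implementation), the standard HOTP/TOTP scheme from RFC 4226 and RFC 6238 can be sketched with the Python standard library:

```python
import hmac
import struct
import time


def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HMAC-based one-time password for a given counter value."""
    msg = struct.pack(">Q", counter)                     # 8-byte big-endian counter
    digest = hmac.new(secret, msg, "sha1").digest()
    offset = digest[-1] & 0x0F                           # dynamic truncation offset
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)


def totp(secret: bytes, period: int = 30, digits: int = 6) -> str:
    """RFC 6238 time-based OTP: HOTP over the current 30-second window."""
    return hotp(secret, int(time.time()) // period, digits)
```

Because the code changes every 30 seconds and is derived from a shared secret the attacker does not hold, a phished password alone is not enough to log in, which is exactly why MFA blunts credential-theft attacks.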

Safeguarding corporate networks: Are your employees putting you at risk?

The Citizen

09-05-2025



While the tactics for gaining access vary, phishing emails remain the number one method for hackers.

As businesses rapidly adopt digital tools, remote work and cloud services, cybercriminals are increasingly targeting employees. Attackers view employee login details as valuable, so protecting them is crucial to maintaining the security of company systems. Using the same login details across different tools makes things easier and more efficient for employees, but it also creates a significant risk: if those credentials are stolen or used on a compromised system, the entire company could be exposed.

William Petherbridge, manager of systems engineering at Fortinet, explains how phishing remains the number one method hackers use to steal credentials and offers practical advice for organisations to combat it.

The allure of employee credentials

He said that for cybercriminals, the motivation behind stealing an employee's credentials is to infiltrate the corporate network. Targets can range from C-suite executives to junior staff who may not realise their identities have been compromised. Once inside the network, criminal activities range from stealing sensitive company data for industrial espionage to locking down systems for ransom.

'Phishing attacks are still effective in tricking employees into logging into fake accounts to steal their credentials. When an email appears to come from a senior individual within an organisation with specific instructions, employees tend to act quickly. That's why awareness is critical.'

Identity threat detection and response in cybersecurity

Petherbridge added that although most large corporate entities have security operations centres or outsource them, the challenge is the sheer volume of alerts received.
'Security teams receive thousands of alerts, making it impossible to review and act on all of them manually. That's where automation and detection and response systems come into play. Having tools that can automate and make sense of that data is essential.'

Identity threat detection and response (ITDR) is both a reactive tool and a proactive defence mechanism, allowing businesses to monitor user behaviour and prevent breaches before they can fully unfold.

What steps can organisations take?

He advises that companies start combating identity theft with a multi-layered approach.

'On the preventative side, strong passwords are a basic requirement, together with multi-factor authentication. Beyond that, privileged access management (PAM) and identity and access management (IAM) systems help define the role of each user and what they're allowed to access.

'On the detection end of the equation, enterprise-level organisations need the ability to analyse identity behaviour, including anomalies in login patterns or unusual activity, and immediately respond if something suspicious is taking place.'
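The kind of identity-behaviour analysis described here — flagging anomalies in login patterns — can be illustrated with a deliberately simplified sketch. This is not any vendor's ITDR product; it is a toy baseline-and-deviation check, and the user name and country codes are hypothetical:

```python
from collections import defaultdict
from datetime import datetime


class LoginAnomalyDetector:
    """Toy ITDR-style check: learn each user's usual login hours and
    source countries from trusted history, then flag logins that deviate."""

    def __init__(self) -> None:
        self.known_hours: defaultdict[str, set] = defaultdict(set)
        self.known_countries: defaultdict[str, set] = defaultdict(set)

    def learn(self, user: str, when: datetime, country: str) -> None:
        # Record an observed, trusted login as part of the user's baseline.
        self.known_hours[user].add(when.hour)
        self.known_countries[user].add(country)

    def check(self, user: str, when: datetime, country: str) -> list:
        # Return a list of anomaly descriptions (empty if nothing unusual).
        alerts = []
        if country not in self.known_countries[user]:
            alerts.append(f"login from new country: {country}")
        if when.hour not in self.known_hours[user]:
            alerts.append(f"login at unusual hour: {when.hour:02d}:00")
        return alerts
```

A real system would score many more signals (device, network, velocity between locations) and feed high-confidence anomalies into an automated response, which is what makes the alert volume manageable for security teams.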
