CEO caught on video admitting fraud - but it's a deepfake
William Petherbridge | Published 9 hours ago
Artificial intelligence has now made it possible to wake up to a video of your CEO seemingly admitting to fraud, or to an urgent audio message from your CFO authorising a large, unexpected transaction, without any of it being real. Deepfakes aren't limited to criminal use cases targeting individuals or governments – they represent a sophisticated and escalating threat to corporations globally, including in South Africa.
Disinformation using deepfake technology
The use of deepfake technology has become one of the most powerful tools fuelling disinformation. The rise of AI and machine learning embedded in commercially available tools such as generative adversarial networks (GANs) has levelled the playing field and increased the sophistication of deepfake content.
Cybercriminals, disgruntled insiders, competitors, and even state-sponsored groups can leverage deepfakes for devastating attacks, ranging from financial fraud and network compromise to severe reputational damage.
The South African reality: A threat amplified
The threat itself, however, is not fake; it's manifesting tangibly within South Africa. The South African Banking Risk Information Centre (SABRIC) has issued stark warnings about the rise in AI-driven fraud scams, explicitly including deepfakes and voice cloning used to impersonate bank officials or lure victims into fake investment schemes, sometimes even using fabricated endorsements from prominent local figures.
With South Africa already identified by Interpol as a global cybercrime hotspot, and annual losses estimated in the billions of Rands, the potential financial impact of sophisticated deepfake fraud targeting businesses is immense.
There are also implications for democracy as a whole. Accenture Africa recently highlighted how easily deepfakes could amplify misinformation and political unrest in a nation where false narratives can already spread rapidly online – a critical concern when it comes to elections.
Furthermore, the 'human firewall' – our employees – represents a significant area of vulnerability. Fortinet's 2024 Security Awareness and Training Global Research Report highlights that 46% of organisations now expect their employees to fall for more attacks in the future because bad actors are using AI.
Phishing emails used to be easier to identify because they were poorly worded and riddled with spelling errors, yet they still led to successful breaches for decades. Now they're drastically more difficult to spot, as AI-generated emails and deepfake media have reached levels of realism that leave almost no one immune.
Who targets companies using deepfakes?
Several types of malicious actor are likely to target companies using deepfake technology.
Cybercriminals who have stolen samples of a victim's email, along with their address book, for example, may use GenAI to generate tailored content that matches the language, tone and topics in the victim's previous interactions to aid in spear phishing – convincing them to take action such as clicking on a malicious attachment.
Other cybercriminals use deepfakes to impersonate customers, business partners, or company executives to initiate and authorise fraudulent transactions. According to Deloitte's Center for Financial Services, GenAI-enabled fraud losses are growing at 32% year-over-year in the United States and could reach $40 billion by 2027.
Disgruntled current or former employees may also generate deepfakes to seek revenge or damage a company's reputation. By leveraging their inside knowledge, they can make the deepfakes appear especially credible.
Another potential deepfake danger may be from business partners, competitors or unscrupulous market speculators looking to gain leverage in negotiations or to try to affect a company's stock price through bad publicity.
Combating the deepfake threat requires more than just technological solutions; it demands a comprehensive, multi-layered strategy encompassing technology, processes, and people.
Advanced threat detection: Organisations must invest in security solutions capable of detecting AI-manipulated media. AI itself plays a crucial role, powering tools that can analyse content for the subtle giveaways often present in deepfakes.
Robust authentication and processes: Implementing strong multi-factor authentication (MFA) remains paramount. Businesses should also review and strengthen processes around sensitive actions like financial transactions or data access requests, incorporating verification steps that cannot be easily spoofed by a deepfake voice or video call. A Zero Trust approach, which verifies every request and assumes breach, is essential.
Empowering the human firewall: Continuous education and awareness training are vital. Employees need to be equipped with the knowledge to recognise potential deepfake indicators and understand the procedures for verifying communications, especially those involving sensitive instructions or financial implications.
Reputation management: Proactive reputation management and clear communication channels become even more critical. Being able to swiftly debunk a deepfake attack targeting the company or its leadership can mitigate significant damage.
Staying informed and advocating: Cybersecurity teams must stay abreast of evolving deepfake tactics. Collaboration and information sharing within industries are important, as is engagement with bodies working to update South Africa's cyber laws (such as aspects of POPIA) to specifically address deepfake crimes.
Preparing for the inevitable
Deepfakes are not a future problem; they are a clear and present danger to South African businesses. They target the very accuracy of the information we rely on as consumers, employees and investors.
The question is no longer if a South African organisation will be targeted by a deepfake attack, but how prepared it will be when it happens. Proactive investment in robust security measures, stringent processes, and comprehensive employee education is not just advisable – it's essential for survival in this new era of digital deception.
William Petherbridge, Systems Engineering Manager at Fortinet