CEO caught on video admitting fraud - but it's a deepfake
Artificial intelligence has now made it possible to wake up to a video of your CEO seemingly admitting to fraud, or to receive an urgent audio message from your CFO authorising a large, unexpected transaction, without any of it being real. Deepfakes aren't limited to criminal use cases targeting individuals or governments – they represent a sophisticated and escalating threat to corporations globally, including in South Africa.
Disinformation using deepfake technology
Deepfake technology has become one of the most powerful tools fuelling disinformation. The rise of AI and machine-learning techniques such as generative adversarial networks (GANs), now embedded in commercially available tools, has levelled the playing field and increased the sophistication of deepfake content.
Cybercriminals, disgruntled insiders, competitors, and even state-sponsored groups can leverage deepfakes for devastating attacks, ranging from financial fraud and network compromise to severe reputational damage.
The South African reality: A threat amplified
The threat itself, however, is not fake; it's manifesting tangibly within South Africa. The South African Banking Risk Information Centre (SABRIC) has issued stark warnings about the rise in AI-driven fraud scams, explicitly including deepfakes and voice cloning used to impersonate bank officials or lure victims into fake investment schemes, sometimes even using fabricated endorsements from prominent local figures.
With South Africa already identified by Interpol as a global cybercrime hotspot, with estimated annual losses running into billions of rands, the potential financial impact of sophisticated deepfake fraud targeting businesses is immense.
There are also implications for democracy as a whole. Accenture Africa recently highlighted how easily deepfakes could amplify misinformation and political unrest in a nation where false narratives can already spread rapidly online – a critical concern when it comes to elections.
Furthermore, the 'human firewall' – our employees – represents a significant area of vulnerability. Fortinet's 2024 Security Awareness and Training Global Research Report highlights that 46% of organisations now expect their employees to fall for more attacks in the future because bad actors are using AI.
Phishing emails used to be easier to identify because they were poorly worded and riddled with spelling errors, yet they still led to successful breaches for decades. Now they are drastically harder to spot, as AI-generated emails and deepfake media have reached levels of realism that leave almost no one immune.
William Petherbridge, Systems Engineering Manager at Fortinet
Who targets companies using deepfakes?
Several types of malicious actors are likely to target companies using deepfake technology.
Cybercriminals who have stolen samples of a victim's email, along with their address book, may use GenAI to generate tailored content that matches the language, tone and topics of the victim's previous interactions. This aids spear phishing – convincing the target to take an action such as clicking on a malicious attachment.
Other cybercriminals use deepfakes to impersonate customers, business partners, or company executives to initiate and authorise fraudulent transactions. According to Deloitte's Center for Financial Services, GenAI-enabled fraud losses are growing at 32% year-over-year in the United States and could reach $40 billion by 2027.
Disgruntled current or former employees may also generate deepfakes to seek revenge or damage a company's reputation. By leveraging their inside knowledge, they can make the deepfakes appear especially credible.
Deepfake threats may also come from business partners, competitors or unscrupulous market speculators looking to gain leverage in negotiations, or seeking to move a company's stock price through bad publicity.