
Deepfakes pose growing threat to GCC as AI technology advances: Experts

Al Arabiya

23-05-2025



Security professionals are raising alarm over the rising sophistication of deepfake technology and its potential impact on countries across the GCC, as artificial intelligence tools become more accessible and harder to detect.

Deepfakes - highly convincing digital forgeries created using artificial intelligence to manipulate audio, images and video of real people - have evolved rapidly in recent years, creating significant challenges for organizations and individuals alike, according to cybersecurity professionals speaking to Al Arabiya English.

Evolution of deepfake technology

'Deepfake technology has advanced at such a striking pace in recent years, largely due to the breakthroughs in generative [Artificial Intelligence (AI)] and machine learning,' David Higgins, senior director at the field technology office of CyberArk, told Al Arabiya English. 'Once seen as a tool for experimentation, it has now matured into a powerful means of producing audio, video, and image content that is almost impossible to distinguish from authentic material.'

The technology's rapid advancement has transformed what was once a novelty into a potential threat to businesses, government institutions, and individuals across the region.

'Today, deepfakes are no longer limited to simple face swaps or video alterations. They now encompass complex manipulations like lip-syncing, where AI can literally put words into someone's mouth, as well as full-body puppeteering,' said Higgins.

Rob Woods, director of fraud and identity at LexisNexis Risk Solutions, also told Al Arabiya English how the quality of such manipulations has improved dramatically in recent years.

'Deepfakes, such as video footage overlaying one face onto another for impersonation, used to show visible flaws like blurring around ears or nostrils and stuttering image quality,' Woods said. 'Advanced deepfakes now adapt realistically to lighting changes and use real-time voice manipulation. We have reached a point where even human experts may struggle to identify deepfakes by sight alone.'

Immediate threats to society

The concerns around deepfake technology are particularly relevant in the GCC region, where digital transformation initiatives are rapidly expanding, the experts said.

'The most pressing threat posed by deepfakes is their potential to create an upsurge in fraud while weakening societal trust and confidence in digital media – which is concerning given that digital platforms dominate communications and media around the world,' Higgins said.

The technology has made sophisticated tools accessible to malicious actors looking to manipulate public perception or conduct fraud, creating significant security concerns for businesses in the region.

'In Saudi Arabia, where AI adoption is advancing rapidly under Vision 2030, there is growing concern among businesses too. Over half of organizations surveyed in CyberArk's 2025 Identity Security Report cite data privacy and AI agent security as top challenges to safe deployment,' Higgins added.

Woods echoed these concerns, highlighting the democratization of deepfake technology as a particular challenge.

'The most immediate threat to society is that high-quality deepfake technology is widely available, enabling fraudsters and organized crime groups to enhance their scam tactics. Fraudsters can cheaply improve the sophistication of their attempts, making them look more legitimate and increasing the likelihood of success,' he said.
Rising concerns in Saudi Arabia

Recent reports suggest growing anxiety in Saudi Arabia specifically regarding deepfake threats. According to CyberArk's research cited by Higgins, the Kingdom is experiencing increased unease around manipulated AI systems.

'In recent months, concerns across the Middle East have intensified, particularly in countries like Saudi Arabia, where organizations report growing unease around the manipulation of AI agents and synthetic media,' Higgins said.

He cited specific data points highlighting this trend: 'According to CyberArk's 2025 Identity Security Landscape report, 52 percent of organizations surveyed in Saudi Arabia now consider misconfigured or manipulated AI behavior, internally or externally, a top security concern.'

Vulnerable sectors beyond politics

While political manipulation often dominates discussions about deepfakes, experts emphasized that virtually every sector faces potential threats from this technology.

'There are very few sectors that can safely say they are protected from potential deepfake manipulation. Industries such as finance, healthcare, and corporate enterprises are all at risk of being targeted,' Higgins warned.

He detailed how different sectors face unique vulnerabilities: 'When looking at the financial sector, deepfakes are being used to impersonate executives, leading to fraudulent transactions or insider trading. Healthcare institutions may face risks if deepfakes are used to manipulate medical records or impersonate medical professionals, potentially compromising patient care.'

The financial services sector appears particularly vulnerable in the GCC region, according to Woods. 'Financial services, including banking, digital wallets and lending, rely on verifying customer identities, making them prime targets for fraudsters,' he said. 'With diverse financial economies such as in the Middle East encouraging competition among digital banks and super apps, customer acquisition has become critical for balancing customer experience and risk management.'

Detection capabilities: A technological race

As deepfake technology continues to advance, detection methods are struggling to keep pace, creating a technological race between security systems and those seeking to exploit them.

'Detection technologies designed to combat deepfakes are advancing, but they are in a constant race against a threat that is always evolving,' Higgins said. 'As generative AI tools become more accessible and powerful, deepfakes are growing in realism and scale.'

He highlighted the limitations of current detection systems: 'While there are detection systems capable of detecting subtle inconsistencies in voice patterns, facial movements, and metadata, malicious actors continue to find ways to outpace them.'

Woods added: 'Organizations are just beginning to tackle the challenge of deepfakes and it is a race they must win. Countering AI-generated fraud, including deepfakes, demands AI-driven solutions capable of distinguishing real humans from deepfakes.'

Social media platforms' responsibility

The role of social media companies in addressing deepfake content remains a contentious issue, with experts calling for more robust measures to identify and limit the spread of malicious synthetic media.
'Social media platforms carry a critical responsibility in curbing the spread of malicious deepfakes. As the primary channels through which billions consume information globally, they are also the frontline where manipulated content is increasingly gaining traction,' Higgins said.

He acknowledged some progress while highlighting ongoing challenges: 'Some tech giants, including Meta, Google, and Microsoft, have begun introducing measures to label AI-generated content clearly – which are steps in the right direction. However, inconsistencies remain.'

Higgins pointed to specific platforms that may be exacerbating the problem: 'X (formerly Twitter) dismantled many of its verification safeguards in 2023, a move that has made public figures more vulnerable to impersonation and misinformation. This highlights a deeper issue: disinformation and sensationalism have, for some platforms, become embedded in their engagement-driven business models.'

Woods said social media platforms are not responsible for the rise of deepfakes or malicious AI, irrespective of fraudsters' methods. However, these platforms can play a part in the solution, he added, noting that collaboration through data-sharing initiatives between financial services, telecommunications and social media companies can significantly improve fraud prevention efforts.

Public readiness and education

A particularly concerning aspect of the deepfake threat is the general public's limited ability to identify manipulated content, according to the experts.

'As the use of deepfakes spreads, the average internet user remains alarmingly unprepared to identify manipulated content,' Higgins said. 'Where synthetic media is becoming more and more realistic, simply trusting what we see or hear online is no longer an option.'

He advocated for a fundamental shift in how people approach digital content: 'Adopting a zero-trust mindset is key, and people must become accustomed to treating digital content with the same caution applied to suspicious emails or phishing scams.'

Woods agreed with this assessment, noting the difficulty even professionals face in identifying sophisticated deepfakes. 'Identifying deepfakes with the naked eye is challenging, even for trained professionals. People should be aware that deepfake technology is advancing quickly and not underestimate the tactics and tools available to fraudsters,' he said.

Practical advice for protection

Both experts offered practical guidance for individuals to protect themselves against deepfake-related scams, which often target emotional vulnerabilities.

'One common scenario involves fraudsters using deepfakes to imitate a distressed relative, claiming to need urgent financial help due to a lost phone or another emergency,' Woods explained.

He recommended several protective steps: 'Approach unexpected and urgent requests for money or personal information online with caution, even if they appear to come from a loved one or trusted source. Pause and consider whether it could be a scam. Verify the identity of the person by reaching out to them through a different method than the one they used to contact you.'

Higgins also emphasized the importance of education in combating the threat: 'Citizens must be encouraged to verify sources, limit public sharing of personal media, and critically assess the credibility of online content. Platforms, regulators, and educational institutions all have a role to play in equipping users with the tools and knowledge to navigate a digital landscape where not everything is as it seems.'
Regulatory frameworks

The experts agreed that regulatory frameworks addressing deepfake technology remain underdeveloped globally, despite the growing threat.

'The legal frameworks around deepfakes vary greatly across geographies and jurisdictions, sometimes creating a grey area between unethical manipulation and criminal activity,' Higgins pointed out. 'In Saudi Arabia, where laws around cybercrime are among the strictest in the Middle East, impersonation, defamation, and fraud through deepfakes may fall under existing regulations.'

Woods was more direct in his assessment of the current regulatory landscape: 'No global regulator has yet implemented a legal deterrent or regulatory framework to address the threat of deepfakes.'

Despite the serious nature of deepfake threats, the experts cautioned against complete alarmism, noting legitimate applications for the technology alongside its potential for harm.

'Not all deepfakes are bad and they do have a place in society, for example providing entertainment, gaming and augmented reality,' Woods said. 'However, as with any technological advancement, some people exploit these tools for malicious purposes.'

Higgins warned against dismissing the threat as overblown: 'Dismissing deepfakes as exaggerated or irrelevant underestimates one of the most disruptive threats faced today. While deepfake content may once have been a novelty, it has rapidly evolved into a tool capable of serious harm – targeting not just individuals or brands, but the very concept of truth.'
