Deepfakes pose growing threat to GCC as AI technology advances: Experts

Al Arabiya, 23 May 2025

Security professionals are raising alarm over the rising sophistication of deepfake technology and its potential impact on countries across the GCC, as artificial intelligence tools become more accessible and harder to detect.
Deepfakes - highly convincing digital forgeries created using artificial intelligence to manipulate audio, images and video of real people - have evolved rapidly in recent years, creating significant challenges for organizations and individuals alike, according to cybersecurity professionals speaking to Al Arabiya English.
Evolution of deepfake technology
'Deepfake technology has advanced at such a striking pace in recent years, largely due to the breakthroughs in generative [Artificial Intelligence (AI)] and machine learning,' David Higgins, senior director at the field technology office of CyberArk, told Al Arabiya English. 'Once seen as a tool for experimentation, it has now matured into a powerful means of producing audio, video, and image content that is almost impossible to distinguish from authentic material.'
The technology's rapid advancement has transformed what was once a novelty into a potential threat to businesses, government institutions, and individuals across the region.
'Today, deepfakes are no longer limited to simple face swaps or video alterations. They now encompass complex manipulations like lip-syncing, where AI can literally put words into someone's mouth, as well as full-body puppeteering,' said Higgins.
Rob Woods, director of fraud and identity at LexisNexis Risk Solutions, also told Al Arabiya English how the quality of such manipulations has improved dramatically in recent years.
'Deepfakes, such as video footage overlaying one face onto another for impersonation, used to show visible flaws like blurring around ears or nostrils and stuttering image quality,' Woods said. 'Advanced deepfakes now adapt realistically to lighting changes and use real-time voice manipulation. We have reached a point where even human experts may struggle to identify deepfakes by sight alone.'
Immediate threats to society
The concerns around deepfake technology are particularly relevant in the GCC region, where digital transformation initiatives are rapidly expanding, the experts said.
'The most pressing threat posed by deepfakes is their potential to create an upsurge in fraud while weakening societal trust and confidence in digital media – which is concerning given that digital platforms dominate communications and media around the world,' Higgins said.
The technology has made sophisticated tools accessible to malicious actors looking to manipulate public perception or conduct fraud, creating significant security concerns for businesses in the region.
'In Saudi Arabia, where AI adoption is advancing rapidly under Vision 2030, there is growing concern among businesses too. Over half of organizations surveyed in CyberArk's 2025 Identity Security Report cite data privacy and AI agent security as top challenges to safe deployment,' Higgins added.
Woods echoed these concerns, highlighting the democratization of deepfake technology as a particular challenge.
'The most immediate threat to society is that high-quality deepfake technology is widely available, enabling fraudsters and organized crime groups to enhance their scam tactics. Fraudsters can cheaply improve the sophistication of their attempts, making them look more legitimate and increasing the likelihood of success,' he said.
Rising concerns in Saudi Arabia
Recent reports suggest growing anxiety in Saudi Arabia specifically regarding deepfake threats. According to CyberArk's research cited by Higgins, the Kingdom is experiencing increased unease around manipulated AI systems.
'In recent months, concerns across the Middle East have intensified, particularly in countries like Saudi Arabia, where organizations report growing unease around the manipulation of AI agents and synthetic media,' Higgins said.
He cited specific data points highlighting this trend: 'According to CyberArk's 2025 Identity Security Landscape report, 52 percent of organizations surveyed in Saudi Arabia now consider misconfigured or manipulated AI behavior, internally or externally, a top security concern.'
Vulnerable sectors beyond politics
While political manipulation often dominates discussions about deepfakes, experts emphasized that virtually every sector faces potential threats from this technology.
'There are very few sectors that can safely say they are protected from potential deepfake manipulation. Industries such as finance, healthcare, and corporate enterprises are all at risk of being targeted,' Higgins warned.
He detailed how different sectors face unique vulnerabilities: 'When looking at the financial sector, deepfakes are being used to impersonate executives, leading to fraudulent transactions or insider trading. Healthcare institutions may face risks if deepfakes are used to manipulate medical records or impersonate medical professionals, potentially compromising patient care.'
The financial services sector appears particularly vulnerable in the GCC region, according to Woods.
'Financial services, including banking, digital wallets and lending, rely on verifying customer identities, making them prime targets for fraudsters,' he said. 'With diverse financial economies such as in the Middle East encouraging competition among digital banks and super apps, customer acquisition has become critical for balancing customer experience and risk management.'
Detection capabilities: A technological race
As deepfake technology continues to advance, detection methods are struggling to keep pace, creating a technological race between security systems and those seeking to exploit them.
'Detection technologies designed to combat deepfakes are advancing, but they are in a constant race against a threat that is always evolving,' Higgins said. 'As generative AI tools become more accessible and powerful, deepfakes are growing in realism and scale.'
He highlighted the limitations of current detection systems: 'While there are detection systems capable of detecting subtle inconsistencies in voice patterns, facial movements, and metadata, malicious actors continue to find ways to outpace them.'
Woods added: 'Organizations are just beginning to tackle the challenge of deepfakes and it is a race they must win. Countering AI-generated fraud, including deepfakes, demands AI-driven solutions capable of distinguishing real humans from deepfakes.'
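To illustrate the kind of low-level signal such detection systems score, the toy Python sketch below measures frame-to-frame jitter in a clip's detected face position; abrupt, physically implausible jumps are one simple proxy for the "subtle inconsistencies in facial movements" Higgins describes. It is an illustrative sketch only, not any vendor's detector; the OpenCV cascade, sampling rate, threshold interpretation, and file name are assumptions.

# Toy illustration only: scores a clip by how erratically the detected face
# position jumps between sampled frames -- one crude stand-in for the
# "subtle inconsistencies in facial movements" that real detectors analyse.
# Assumes OpenCV (cv2) is installed; the Haar cascade file ships with it.
import cv2


def face_jitter_score(video_path, sample_every=5):
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    cap = cv2.VideoCapture(video_path)
    centers = []
    frame_idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if frame_idx % sample_every == 0:
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
            if len(faces) > 0:
                x, y, w, h = faces[0]
                centers.append((x + w / 2.0, y + h / 2.0))
        frame_idx += 1
    cap.release()
    if len(centers) < 2:
        return 0.0
    # Mean frame-to-frame displacement of the face centre, in pixels.
    jumps = [abs(x2 - x1) + abs(y2 - y1)
             for (x1, y1), (x2, y2) in zip(centers, centers[1:])]
    return sum(jumps) / len(jumps)


if __name__ == "__main__":
    # "suspect_clip.mp4" is a hypothetical file name; a high score is only a
    # prompt for closer inspection, not proof of manipulation.
    print("average face-centre jitter:",
          round(face_jitter_score("suspect_clip.mp4"), 1), "px")

Production detectors combine many such signals (voice, lighting, metadata, compression traces) and weigh them with trained models; a single heuristic like this one is easily fooled, which is precisely the race the experts describe.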
Social media platforms' responsibility
The role of social media companies in addressing deepfake content remains a contentious issue, with experts calling for more robust measures to identify and limit the spread of malicious synthetic media.
'Social media platforms carry a critical responsibility in curbing the spread of malicious deepfakes. As the primary channels through which billions consume information globally, they are also the frontline where manipulated content is increasingly gaining traction,' Higgins said.
He acknowledged some progress while highlighting ongoing challenges: 'Some tech giants, including Meta, Google, and Microsoft, have begun introducing measures to label AI-generated content clearly – which are steps in the right direction. However, inconsistencies remain.'
Higgins pointed to specific platforms that may be exacerbating the problem: 'X (formerly Twitter) dismantled many of its verification safeguards in 2023, a move that has made public figures more vulnerable to impersonation and misinformation. This highlights a deeper issue: disinformation and sensationalism have, for some platforms, become embedded in their engagement-driven business models.'
Woods said that while social media platforms are not responsible for the rise of deepfakes or malicious AI, irrespective of the methods fraudsters use, they can play a part in the solution. Collaboration through data-sharing initiatives between financial services, telecommunications and social media companies can significantly improve fraud prevention efforts, he added.
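As a minimal sketch of what such data sharing could look like in practice (an illustrative assumption, not a description of any existing GCC scheme), participating organizations could exchange salted hashes of confirmed fraud indicators such as phone numbers or device IDs, letting each party check for matches without exposing raw customer data. The salt and the sample values below are hypothetical placeholders.

# Minimal sketch of privacy-preserving fraud-indicator sharing between, for
# example, a bank and a telecom operator: each side hashes its confirmed
# fraud indicators with a shared salt and only the hashes are compared, so
# neither party exposes raw customer data. The salt and the sample
# indicators below are hypothetical placeholders.
import hashlib

SHARED_SALT = b"example-rotating-salt"  # agreed out of band, rotated regularly


def hash_indicator(value):
    """Salted SHA-256 of a normalised indicator (phone number, device ID)."""
    return hashlib.sha256(SHARED_SALT + value.strip().lower().encode()).hexdigest()


# Indicators each organisation has independently linked to confirmed fraud.
bank_indicators = {"+96650000001", "device-7f3a"}
telco_indicators = {"+96650000001", "+96655500123"}

bank_hashes = {hash_indicator(v) for v in bank_indicators}
telco_hashes = {hash_indicator(v) for v in telco_indicators}

# Either party can spot the overlap without seeing the other's raw records.
matches = bank_hashes & telco_hashes
print(len(matches), "shared fraud indicator(s) found")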
Public readiness and education
A particularly concerning aspect of the deepfake threat is the general public's limited ability to identify manipulated content, according to the experts.
'As the use of deepfakes spreads, the average internet user remains alarmingly unprepared to identify manipulated content,' Higgins said. 'Where synthetic media is becoming more and more realistic, simply trusting what we see or hear online is no longer an option.'
He advocated for a fundamental shift in how people approach digital content: 'Adopting a zero-trust mindset is key, and people must become accustomed to treating digital content with the same caution applied to suspicious emails or phishing scams.'
Woods agreed with this assessment, noting the difficulty even professionals face in identifying sophisticated deepfakes.
'Identifying deepfakes with the naked eye is challenging, even for trained professionals. People should be aware that deepfake technology is advancing quickly and not underestimate the tactics and tools available to fraudsters,' he said.
Practical advice for protection
Both experts offered practical guidance for individuals to protect themselves against deepfake-related scams, which often target emotional vulnerabilities.
'One common scenario involves fraudsters using deepfakes to imitate a distressed relative, claiming to need urgent financial help due to a lost phone or another emergency,' Woods explained.
He recommended several protective steps: 'Approach unexpected and urgent requests for money or personal information online with caution, even if they appear to come from a loved one or trusted source. Pause and consider whether it could be a scam. Verify the identity of the person by reaching out to them through a different method than the one they used to contact you.'
Higgins also emphasized the importance of education in combating the threat: 'Citizens must be encouraged to verify sources, limit public sharing of personal media, and critically assess the credibility of online content. Platforms, regulators, and educational institutions all have a role to play in equipping users with the tools and knowledge to navigate a digital landscape where not everything is as it seems.'
Regulatory frameworks
The experts agreed that regulatory frameworks addressing deepfake technology remain underdeveloped globally, despite the growing threat.
'The legal frameworks around deepfakes vary greatly across geographies and jurisdictions, sometimes creating a grey area between unethical manipulation and criminal activity,' Higgins pointed out. 'In Saudi Arabia, where laws around cybercrime are among the strictest in the Middle East, impersonation, defamation, and fraud through deepfakes may fall under existing regulations.'
Woods was more direct in his assessment of the current regulatory landscape: 'No global regulator has yet implemented a legal deterrent or regulatory framework to address the threat of deepfakes.'
Despite the serious nature of deepfake threats, the experts cautioned against complete alarmism, noting legitimate applications for the technology alongside its potential for harm.
'Not all deepfakes are bad and they do have a place in society, for example providing entertainment, gaming and augmented reality,' Woods said. 'However, as with any technological advancement, some people exploit these tools for malicious purposes.'
Higgins warned against dismissing the threat as overblown: 'Dismissing deepfakes as exaggerated or irrelevant underestimates one of the most disruptive threats faced today. While deepfake content may once have been a novelty, it has rapidly evolved into a tool capable of serious harm—targeting not just individuals or brands, but the very concept of truth.'


