AI Security Challenges: Deepfakes, Malware & More

TECHx | 15-05-2025

Check Point Research's AI Security Report uncovers how cybercriminals are weaponizing AI, from deepfakes and data poisoning to Dark LLMs, and what defenders must do to stay ahead.
As artificial intelligence becomes more deeply embedded in business operations, it's also reshaping how cyber threats evolve. The same technologies helping organizations improve efficiency and automate decision-making are now being co-opted and weaponized by threat actors.
The inaugural edition of the Check Point Research AI Security Report explores how cyber criminals are not only exploiting mainstream AI platforms, but also building and distributing tools specifically designed for malicious use. The findings highlight five growing threat categories that defenders must now account for when securing systems and users in an AI-driven world.
AI Use and the Risk of Data Leakage
An analysis of data collected by Check Point's GenAI Protect reveals that 1 in every 80 GenAI prompts poses a high risk of sensitive data leakage. The data also shows that 7.5% of prompts, roughly 1 in 13, contain potentially sensitive information, introducing critical security, compliance, and data integrity challenges. As organizations increasingly integrate AI into their operations, understanding these risks is more important than ever.
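To make this risk concrete, the sketch below shows, in simplified form, how a prompt could be screened for sensitive data before it ever reaches a GenAI service. The regex patterns, category names, and blocking behaviour are illustrative assumptions for this article, not Check Point GenAI Protect's actual logic.

```python
# Minimal sketch of a pre-submission check for sensitive data in GenAI prompts.
# Patterns and categories below are illustrative assumptions, not vendor logic.
import re

SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "payment_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key": re.compile(r"\b(?:sk|key|token)[-_][A-Za-z0-9]{16,}\b", re.IGNORECASE),
}

def classify_prompt(prompt: str) -> list[str]:
    """Return the sensitive-data categories detected in a prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(prompt)]

if __name__ == "__main__":
    prompt = "Summarise this contract and email it to jane.doe@example.com"
    hits = classify_prompt(prompt)
    if hits:
        print("High-risk prompt, held back before reaching the model:", hits)
    else:
        print("Prompt passed the data-leakage check.")
```

Production data-loss-prevention engines combine pattern matching with classifiers and context, but even a simple pre-submission gate of this kind illustrates how the 1-in-80 figure can be measured and acted on.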
AI-Enhanced Impersonation and Social Engineering
Social engineering remains one of the most effective attack vectors, and as AI evolves, so too do the techniques used by threat actors. Autonomous and interactive deepfakes are changing the game in social engineering, drastically improving the realism and scale of attacks. Text- and audio-based deepfakes can already generate unscripted responses in real time, and real-time video is only a few advancements away.
A recent FBI alert underscored the growing use of AI-generated content in fraud and deception, while real-world incidents, such as the impersonation of Italy's defense minister using AI-generated audio, have already caused significant financial harm.
As these capabilities scale, identity verification based on visual or auditory cues is becoming less reliable, prompting an urgent need for multi-layered identity authentication.
LLM Data Poisoning and Manipulation
Researchers have raised concerns about LLM (large language model) poisoning, a cyber security threat in which training datasets are altered to include malicious content that the resulting models then reproduce. Despite the strong data validation measures in place at major AI providers like OpenAI and Google, there have been instances of successful poisoning attacks, including the upload of 100 compromised AI models to the Hugging Face platform. While data poisoning typically affects the training phase of AI models, new vulnerabilities have arisen as modern LLMs access real-time online information, leading to a risk known as 'retrieval poisoning.' A notable case involves the Russian disinformation network 'Pravda,' which created around 3.6 million articles in 2024 aimed at influencing AI chatbot responses. Research indicated that these chatbots echoed Pravda's false narratives about 33% of the time, underscoring the significant danger of using AI for disinformation purposes.
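One common mitigation for retrieval poisoning is to constrain what a retrieval-augmented model is allowed to read. The sketch below filters retrieved documents against a source allowlist before they enter the model's context; the domain names and document structure are hypothetical, and a real deployment would layer this with reputation scoring and content checks.

```python
# Minimal sketch of a retrieval-poisoning mitigation: only documents from vetted
# sources are passed into the model's context. Domains and the document shape
# are illustrative assumptions, not a specific vendor's implementation.
from dataclasses import dataclass
from urllib.parse import urlparse

TRUSTED_DOMAINS = {"example-newswire.com", "intranet.example.com"}  # hypothetical allowlist

@dataclass
class RetrievedDoc:
    url: str
    text: str

def filter_retrievals(docs: list[RetrievedDoc]) -> list[RetrievedDoc]:
    """Drop retrieved documents whose source domain is not on the allowlist."""
    return [
        doc for doc in docs
        if urlparse(doc.url).netloc.lower() in TRUSTED_DOMAINS
    ]

if __name__ == "__main__":
    docs = [
        RetrievedDoc("https://example-newswire.com/report", "Verified reporting ..."),
        RetrievedDoc("https://planted-narratives.example.net/story", "Seeded disinformation ..."),
    ]
    for doc in filter_retrievals(docs):
        print("Passing to model context:", doc.url)
```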
AI-Created Malware and Data Mining
AI is now being used across the entire cyber attack lifecycle, from code generation to campaign optimization. Tools like FunkSec's AI-generated DDoS module and custom ChatGPT-style chatbot demonstrate how ransomware groups are integrating AI into their operations, not just for malware creation, but for automating public relations and campaign messaging.
AI is also playing a critical role in analyzing stolen data. Infostealers and data miners use AI to rapidly process and clean massive logs of credentials, session tokens, and API keys. This allows for faster monetization of stolen data and more precise targeting in future attacks. In one case, a dark web service called Gabbers Shop advertised the use of AI to improve the quality of stolen credentials, ensuring they were valid, organized, and ready for resale.
The Weaponization and Hijacking of AI Models
Threat actors are no longer just using AI; they are turning it into a dedicated tool for cyber crime. One key trend is the hijacking and commercialization of LLM accounts. Through credential stuffing and infostealer malware, attackers are collecting and reselling access to platforms like ChatGPT and OpenAI's API, using them to generate phishing lures, malicious scripts, and social engineering content without restriction.
Even more concerning is the rise of Dark LLMs, maliciously modified AI models such as HackerGPT Lite, WormGPT, GhostGPT, and FraudGPT. These models are created by jailbreaking ethical AI systems or modifying open-source models like DeepSeek. They are specifically designed to bypass safety controls and are marketed on dark web forums as hacking tools, often with subscription-based access and user support.
What This Means for Defenders
The use of AI in cyber crime is no longer theoretical. It's evolving in parallel with mainstream AI adoption, and in many cases, it's moving faster than traditional security controls can adapt. The findings in the AI Security Report from Check Point Research suggest that defenders must now operate under the assumption that AI will be used not just against them, but against the systems, platforms, and identities they trust.
Security teams should begin incorporating AI-aware defenses into their strategies, including AI-assisted detection, threat intelligence systems that can identify AI-generated artifacts, and updated identity verification protocols that account for voice, video, and textual deception.
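As a simple illustration of what updated identity verification protocols can look like in practice, the sketch below encodes a policy in which high-risk requests received over voice or video always trigger out-of-band checks, since the channel itself can no longer be trusted. The action names, channels, and thresholds are assumptions made for illustration, not prescriptions from the report.

```python
# Minimal sketch of a layered verification policy for requests arriving over
# channels that deepfakes can convincingly imitate. All names and thresholds
# are hypothetical examples.
from dataclasses import dataclass

HIGH_RISK_ACTIONS = {"wire_transfer", "credential_reset", "data_export"}

@dataclass
class Request:
    action: str
    channel: str          # e.g. "video_call", "voice_call", "email"
    amount_usd: float = 0.0

def required_checks(req: Request) -> list[str]:
    """Return the extra verification steps needed before the request is actioned."""
    checks = []
    if req.action in HIGH_RISK_ACTIONS:
        checks.append("out_of_band_callback")     # call back on a directory-listed number
    if req.channel in {"video_call", "voice_call"}:
        checks.append("shared_secret_challenge")  # never rely on face or voice alone
    if req.amount_usd >= 10_000:
        checks.append("second_approver")
    return checks

if __name__ == "__main__":
    req = Request(action="wire_transfer", channel="video_call", amount_usd=50_000)
    print("Verification required:", required_checks(req))
```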
As AI continues to influence every layer of cyber operations, staying informed is the first step toward staying secure.
By Vasily Dyagilev – Regional Director, Middle East & RCIS at Check Point Software Technologies Ltd
