
Latest news with #CheckPointResearch

Check Point boosts Quantum Force with AI security update

Techday NZ, 28-05-2025

Check Point has announced significant enhancements to its Quantum Force platform, including an automatic upgrade that delivers a 15%-25% performance boost in threat prevention throughput for all Quantum Force Hybrid Mesh firewalls, as well as the introduction of AI-powered security appliances designed for branch offices. The updates are designed to address growing demands for software-driven security solutions and the increasing threat landscape facing enterprise networks and branch locations. The performance boosts, delivered automatically via software updates, enhance existing security infrastructure without requiring hardware changes.

Check Point's new Quantum Force Branch Office Security Gateways offer up to four times the threat prevention performance of previous generations, aiming to meet the security requirements of distributed and hybrid enterprise networks. The four new branch models are engineered to handle rising attack rates on branch offices, which, according to Check Point Research, now experience an average of 713 weekly attack attempts per location, a 36% increase from last year. Additionally, 50% of branch offices reportedly encounter efforts to exploit vulnerabilities from external sources, underlining the importance of robust branch security.

"As we continue to prioritise innovation and efficiency, Check Point's new Quantum Force Branch Office Security Gateway firewalls are built for speed, simplicity, and security. They're 4x faster than previous models, optimised for SD-WAN, and backed by our latest AI-powered threat prevention. And with automatic performance upgrades, existing Quantum Force customers will receive a 15-25% performance boost with a software update — no hardware changes required," said Nataly Kremer, Chief Product Officer at Check Point.
The branch office appliances are designed to provide a 99.9% block rate for threats, as verified in Miercom's 2025 security benchmark report, deliver improved security for cloud applications, and offer increased connectivity and port capacity. With the adoption of SD-WAN technology and the expansion of remote work, these features are poised to enhance branch office security, making it more resilient and responsive to changing operational needs.

Check Point points to findings in its CPR 2025 Security Report, which shows a 44% annual rise in cyberattacks, reflecting the intensification of the security environment for branch locations. The company has designed the new appliances to maintain strong security without impacting network performance or user productivity, a crucial factor for locations that engage in direct customer interactions.

"World Wide Technology (WWT) provides security products and services to customers across a variety of industries, including financial services, manufacturing, retail and healthcare with distributed branch offices. Check Point's new next-generation Quantum Force Branch Office Security Gateways with enhanced AI powered threat prevention, empower us to protect these customers from the latest attacks on branch offices. These innovations help our clients reduce risk, streamline operations, and scale securely across hybrid environments — turning cyber resilience into a competitive advantage," said Chris Konrad, Vice President of Global Cyber at World Wide Technology (WWT), commenting on the new offerings.

The company has also released a new generation of Quantum Smart-1 Management Appliances, featuring a twofold increase in managed gateway capacity and up to 70% higher log processing rates. These management solutions are intended to centralise and automate security operations across hybrid environments through AI-powered tools and policy orchestration.
"Security teams today face more pressure than ever — from rising AI-generated threats to managing fragmented infrastructures. Our new Quantum Smart-1 Management Appliances simplify that complexity. They combine AI, speed, precision, and automation to help organisations manage on-premise, cloud, and distributed IT deployments — faster and smarter," said Nataly Kremer, Chief Product Officer at Check Point.

The seventh-generation Smart-1 appliances offer local storage scaling up to 70TB for compliance requirements and support management for up to 10,000 gateways. The architecture is designed to provide unified management across on-premises, cloud, and remote deployments, with integration for over 250 third-party solutions.

"The Check Point Infinity Platform demonstrated superior security efficacy, consistently outperforming its peers in the test category of comprehensive threat prevention and response, as well as excelling in the AI-powered testing scenarios. Its AI-driven architecture, hybrid mesh deployment model, and unified security operations prove that Check Point is setting the pace for next-generation cyber security," said Rob Smithers, CEO at Miercom, highlighting the platform's performance in recent testing.

"Branch offices are often the soft spots in enterprise security, providing vulnerable entry-points for attacks and compromising the security posture across the enterprise. Check Point's new Quantum Branch Office Security Gateways deliver robust threat prevention to the edge, enabling organisations to secure their branch offices from emerging cyber threats while keeping pace with the demands of the hybrid workforce," said Pete Finalle, Security Research Manager at IDC, noting the importance of edge security.

Check Point's Quantum Force Branch Office Security Gateways and Smart-1 Management Appliances are currently available through its network of partners worldwide.

AI Security Challenges: Deepfakes, Malware & More

TECHx, 15-05-2025

Check Point Research's AI Security Report uncovers how cybercriminals are weaponizing AI, from deepfakes and data poisoning to Dark LLMs, and what defenders must do to stay ahead.

As artificial intelligence becomes more deeply embedded in business operations, it's also reshaping how cyber threats evolve. The same technologies helping organizations improve efficiency and automate decision-making are now being co-opted and weaponized by threat actors. The inaugural edition of the Check Point Research AI Security Report explores how cyber criminals are not only exploiting mainstream AI platforms, but also building and distributing tools specifically designed for malicious use. The findings highlight five growing threat categories that defenders must now account for when securing systems and users in an AI-driven world.

AI Use and the Risk of Data Leakage

An analysis of data collected from Check Point's GenAI Protect reveals that 1 in every 80 GenAI prompts poses a high risk of sensitive data leakage. Data also shows that 7.5% of prompts, about 1 in 13, contain potentially sensitive information, introducing critical security, compliance, and data integrity challenges. As organizations increasingly integrate AI into their operations, understanding these risks is more important than ever.

AI-Enhanced Impersonation and Social Engineering

Social engineering remains one of the most effective attack vectors, and as AI evolves, so too do the techniques used by threat actors. Autonomous and interactive deepfakes are changing the game of social engineering, drastically improving the realism and scale of attacks. Text and audio deepfakes can already be generated unscripted in real time, while comparable real-time video is only a few advancements away.
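The prompt-leakage figures above imply some form of automated screening of GenAI prompts before they leave the organisation. As a purely hypothetical sketch (this is not Check Point's GenAI Protect; the pattern names and regexes below are illustrative assumptions, and real DLP tooling uses far richer classification), a minimal screen might look like:

```python
import re

# Hypothetical sensitive-prompt screen. All patterns are illustrative
# assumptions, not the rules any real product uses.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "api_key": re.compile(r"\bsk_[A-Za-z0-9_]{16,}\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def classify_prompt(prompt: str) -> list[str]:
    """Return the sensitive-data categories detected in a GenAI prompt."""
    return [name for name, rx in PATTERNS.items() if rx.search(prompt)]

# A prompt pasting a customer address and an API-style secret is flagged
# in two categories; a benign prompt returns an empty list.
risky = classify_prompt("Draft a reply to jane@example.com, token sk_live_abcdefghijklmnop")
safe = classify_prompt("Explain zero trust in one paragraph")
```

A production system would also have to decide what to do with a flagged prompt: block it, redact the match, or merely log it for compliance review.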
A recent FBI alert underscored the growing use of AI-generated content in fraud and deception, while real-world incidents, such as the impersonation of Italy's defense minister using AI-generated audio, have already caused significant financial harm. As these capabilities scale, identity verification based on visual or auditory cues is becoming less reliable, prompting an urgent need for multi-layered identity authentication.

LLM Data Poisoning and Manipulation

Researchers have raised concerns about LLM (large language model) poisoning, a cyber security threat in which training datasets are altered to include malicious content, causing AI models to replicate that content. Despite the strong data validation measures in place at major AI providers like OpenAI and Google, there have been instances of successful poisoning attacks, including the upload of 100 compromised AI models to the Hugging Face platform. While data poisoning typically affects the training phase of AI models, new vulnerabilities have arisen as modern LLMs access real-time online information, leading to a risk known as 'retrieval poisoning.' A notable case involves the Russian disinformation network 'Pravda,' which created around 3.6 million articles in 2024 aimed at influencing AI chatbot responses. Research indicated that these chatbots echoed Pravda's false narratives about 33% of the time, underscoring the significant danger of using AI for disinformation purposes.

AI-Driven Malware Creation and Data Mining

AI is now being used across the entire cyber attack lifecycle, from code generation to campaign optimization. Tools like FunkSec's AI-generated DDoS module and custom ChatGPT-style chatbot demonstrate how ransomware groups are integrating AI into operations, not just for malware creation, but for automating public relations and campaign messaging. AI is also playing a critical role in analyzing stolen data.
Infostealers and data miners use AI to rapidly process and clean massive logs of credentials, session tokens, and API keys. This allows for faster monetization of stolen data and more precise targeting in future attacks. In one case, a dark web service called Gabbers Shop advertised the use of AI to improve the quality of stolen credentials, ensuring they were valid, organized, and ready for resale.

The Weaponization and Hijacking of AI Models

Threat actors are no longer just using AI; they are turning it into a dedicated tool for cyber crime. One key trend is the hijacking and commercialization of LLM accounts. Through credential stuffing and infostealer malware, attackers are collecting and reselling access to platforms like ChatGPT and OpenAI's API, using them to generate phishing lures, malicious scripts, and social engineering content without restriction. Even more concerning is the rise of Dark LLMs: maliciously modified AI models such as HackerGPT Lite, WormGPT, GhostGPT, and FraudGPT. These models are created by jailbreaking ethical AI systems or modifying open-source models like DeepSeek. They are specifically designed to bypass safety controls and are marketed on dark web forums as hacking tools, often with subscription-based access and user support.

What This Means for Defenders

The use of AI in cyber crime is no longer theoretical. It's evolving in parallel with mainstream AI adoption, and in many cases, it's moving faster than traditional security controls can adapt. The findings in the AI Security Report from Check Point Research suggest that defenders must now operate under the assumption that AI will be used not just against them, but against the systems, platforms, and identities they trust.
Security teams should begin incorporating AI-aware defenses into their strategies, including AI-assisted detection, threat intelligence systems that can identify AI-generated artifacts, and updated identity verification protocols that account for voice, video, and textual deception. As AI continues to influence every layer of cyber operations, staying informed is the first step toward staying secure.

By Vasily Dyagilev – Regional Director, Middle East & RCIS at Check Point Software Technologies Ltd

Google's Gmail Password Attack Warning — You Have Just 7 Days To Act

Forbes, 03-05-2025

Gmail users told they have 7 days to respond to password hack attacks. Update, May 3, 2025: This story, originally published May 1, has been updated with a new report regarding the use of passkeys in the face of weak password use, as well as details of AI-powered threats that email users need to be aware of as Gmail password hackers attack. It can't have escaped your attention that May 1 is World Password Day, when security experts and public relations organizations compete to see who can create the most ridiculous password-related stories to feed to the media and public alike. Yes, I'm cynical about the whole charade, as we should be taking password security seriously all year and not just on a designated day — preferably getting rid of them altogether and shifting to the more secure passkey option. It can't have escaped your attention that users of the world's most popular free email platform, Gmail, have been under attack from hackers who seek to compromise passwords and gain access to the valuable data that a Google account can hold. So, dear reader, my password story for May 1 has less to do with making your password stronger and everything to do with getting access to your Gmail account back after a Gmail password hacker has compromised it and locked you out. Google has said you have seven days — yes, a whole week — in which you can get that access back even if the attacker has changed your recovery telephone number. As you might imagine, given my experiences as a hacker and the fact that I have been writing about cybersecurity matters for more than 30 years now, I receive a lot of emails and messages from people who have fallen victim to attacks and are looking for help. By far the most common of these pleas for help is along the lines of 'Gmail password hackers have compromised my account, changed the recovery options, password, two-factor authentication method, and locked me out — what the heck can I do?' 
Unfortunately, these kinds of password-hacking compromises against Gmail users have become increasingly popular as threat actors of all types employ AI-driven attacks to access those highly valuable email accounts. Read on to discover how some of these AI attacks are evolving, as details emerge in a new Check Point Research report. But first, and rather fortunately, Google is fighting back when it comes to offering both protection against these increasingly sophisticated attackers and help in recovering accounts if a user has fallen victim. As long as you have had the forethought to provide a recovery telephone number or email address before the attack took place, then you have seven days in which you can regain access to your hacked Gmail account even if the attacker has changed them. Everyone uses a seatbelt when driving or being driven because it has been proven to dramatically improve safety and reduce the chances of fatality if involved in an accident when compared to not wearing one. Now replace seatbelt with recovery options, car with Gmail account, and accident with incident to arrive at a similar conclusion: having a recovery telephone number in place improves your chances of getting your account back if a hacker attacks. Likewise, using a phishing-resistant authentication technology, such as a passkey, instead of a password decreases the likelihood of an attacker being successful in the first place. To continue the motoring analogy, a passkey is like a car protected by driveway bollards and a remote kill switch rather than parking on the street and relying on an easily bypassed door lock. 'We recommend all users to set up a recovery phone as well as a recovery email on their account,' Gmail spokesperson Ross Richendrfer told me, 'these can be used in cases where users forget their own passwords, or an attacker changes the credentials after hijacking the account.' 
And therein lies the rub for any hacker: if you are the original account holder, despite the best efforts of an attacker to lock you out of your own account by changing all the security options, you can get access back as long as you act within seven days. 'Our automated account recovery process allows a user to use their original recovery factors for up to 7 days after it changes,' Richendrfer said, 'provided they set them up before the incident.' If you have found yourself locked out of your account following a Gmail password hack attack, Richendrfer said you can refer to the 'How to recover your Google account or Gmail' guidebook for step-by-step instructions on what to do next. The Fast Identity Online Alliance, better known within and without the cybersecurity industry as FIDO, has a proud 13-year tradition of addressing the issues that creating and managing passwords bring to the threat landscape. It was formed in 2012 specifically to find a better way when it came to strong authentication technology, especially seeing as a lack of interoperability between players was such an issue. What FIDO does is create authentication standards that, in its own words, 'define an open, scalable, interoperable set of mechanisms that reduce reliance on passwords.' FIDO, then, knows a thing or two about the password problem and how it can best be mitigated. Which is why you should take note when it issues the results of a new global survey into understanding how consumer attitudes towards passkeys are evolving. The FIDO report got off to a sadly all too familiar and yet still shocking start: across the last 12 months, it said, some 35% of people have had at least one of their accounts hacked due to a password compromise of one kind or another. Let that sink in for a bit: 35%. That's a lot of people and a heck of a lot of passwords. 
OK, so that's the bad news; the good news is that passkeys are coming to the rescue, and consumers have started accepting them in greater numbers than ever before. The FIDO research found that nearly three-quarters, 74%, of those surveyed were now aware of what passkeys were. Of these, more than half, 53%, quite correctly considered them to be more secure than passwords, and, importantly, 54% said the same about usability. It should come as no surprise, then, that 69% had already enabled a passkey to protect at least one of their online accounts. The news continues to get better, and it promises to make Gmail accounts more secure over the coming year: 38% of those consumers who have already used a passkey said they now do so for every account that enables them. According to FIDO, some 48% of the world's top 100 websites now support integrated passkey use, which is also great news for anyone who cares about security. It would be better, of course, if passkey support extended beyond the top 100 websites to all sites and services. Andrew Shikiar, executive director and CEO of the FIDO Alliance, said that 'organizations of all shapes and sizes are taking action upon the imperative to move away from relying on passwords and other legacy authentication methods that have led to decades of data breaches, account takeovers and user frustration, which imperil the very foundations of our connected society.' Google, which also means Gmail, is among those at the forefront of passkey protection availability. The strongest level of protection you can give to your Google account is to enroll in the Advanced Protection Program, which adds several layers to safeguard the security of those most at risk from Gmail hackers. The service has been open to all users, not just journalists, activists and politicians, for some years now. 
In 2024, Google announced that it was making enrollment even more attractive for a wider audience by doing away with the need to purchase a hardware security key and instead enabling the use of passkeys. Shuvo Chatterjee, the product lead of Google's Advanced Protection Program, said at the time that passkeys are 'phishing resistant, so users are provided protection against things like fraudulent emails.' Chatterjee wasn't wrong then, and isn't now. When you sign into your Google account on any device, you will need your passkey. This will stop a hacker, even one in possession of your username and password credentials, from signing in and compromising your Google services, including your Gmail account. Unless they have your passkey, which they don't, they simply cannot get in. To access your passkey, an attacker would need the device it is enrolled on and the means to access it by way of your biometrics or PIN code. So, what are you waiting for? Analysts at Check Point Research have published details of AI-powered threats, no longer theoretical and very much right here and evolving rapidly, that put your Gmail password at risk. 'As access to AI tools becomes more widespread,' Lotem Finkelstein, director of Check Point Research, said, 'threat actors exploit this shift in two key ways: by leveraging AI to enhance their capabilities and targeting organizations and individuals adopting AI technologies.' It's the former that I'm concerned about in the context of this article about losing control of your Gmail account. It should go without saying, however, that the same AI threats apply to whatever email platform you use, and beyond to most online service accounts in fact. The use of social engineering is the de facto tactic employed by most attackers looking to compromise a Gmail email account. Indeed, even those attacks that are looking to exploit a known security vulnerability will often begin by exploiting human nature first. 
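The passkey mechanics described above, a device-bound credential that signs a sign-in request and never leaves the device, are what make passkeys phishing resistant. A toy sketch can illustrate the key idea of origin binding; note this is not the actual WebAuthn/FIDO2 protocol (which uses public-key signatures, not a shared HMAC key), just a simplified model of why a response captured by a look-alike site is useless:

```python
import hashlib
import hmac
import secrets

# Toy model only: a symmetric HMAC stands in for the real public-key
# signature used by passkeys. The point shown is origin binding.

def authenticator_sign(device_key: bytes, challenge: bytes, origin: str) -> bytes:
    """The 'device' signs the server's one-time challenge plus the site origin."""
    return hmac.new(device_key, challenge + origin.encode(), hashlib.sha256).digest()

def server_verify(device_key: bytes, challenge: bytes, expected_origin: str,
                  signature: bytes) -> bool:
    """The genuine server recomputes the signature over its own origin."""
    expected = hmac.new(device_key, challenge + expected_origin.encode(),
                        hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)

device_key = secrets.token_bytes(32)   # never leaves the device
challenge = secrets.token_bytes(16)    # fresh for every login attempt

sig = authenticator_sign(device_key, challenge, "https://accounts.google.com")
genuine = server_verify(device_key, challenge, "https://accounts.google.com", sig)
phished = server_verify(device_key, challenge, "https://accounts-g00gle.com", sig)
```

Because the response is bound to a one-time challenge and the real origin, a phishing page cannot replay it, and without the enrolled device there is nothing to steal in a password dump.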
These social engineering, or phishing, if you prefer, attacks will leverage every possible media type to convince the victim it is a genuine communication that needs to be interacted with as a matter of urgency. Be it by way of text, audio, or imagery, the phishing attacker will employ it. The problem is, as Check Point Research said, 'with recent advancements in AI, attackers can create authentic-looking materials at scale, conduct automated chats, and hold real-time audio and video conferences while impersonating others.' No wonder so many people are taken in, and so many passwords get compromised, leading to a Gmail account lockout. The Check Point report warned that AI-driven tools now proliferate on dark web and surface web criminal forums, leading to a critical compromise of our ability to rely upon audio and visual cues to determine fact from fiction. 'Fully autonomous audio deepfake tools for large-scale phone scams are already available,' Check Point said, 'meaning that recognizing a familiar face or voice is no longer sufficient proof of identity; instead, interactions must be reinforced by additional authentication measures.' Don't let Gmail password hackers lock you out of your account. Be alert to every communication and question everything — no matter how realistic it looks or sounds.

Gmail Password Warning — You Have 7 Days To Act, Google Says

Forbes, 02-05-2025

Update, May 2, 2025: This story, originally published May 1, has been updated with details of AI-powered threats that email users need to be aware of as Gmail password hackers attack. It can't have escaped your attention that May 1 is World Password Day, when security experts and public relations organizations compete to see who can create the most ridiculous password-related stories to feed to the media and public alike. Yes, I'm cynical about the whole charade, as we should be taking password security seriously all year and not just on a designated day — preferably getting rid of them altogether and shifting to the more secure passkey option. It can't have escaped your attention that users of the world's most popular free email platform, Gmail, have been under attack from hackers who seek to compromise passwords and gain access to the valuable data that a Google account can hold. So, dear reader, my password story for May 1 has less to do with making your password stronger and everything to do with getting access to your Gmail account back after a Gmail password hacker has compromised it and locked you out. Google has said you have seven days — yes, a whole week — in which you can get that access back even if the attacker has changed your recovery telephone number. As you might imagine, given my experiences as a hacker and the fact that I have been writing about cybersecurity matters for more than 30 years now, I receive a lot of emails and messages from people who have fallen victim to attacks and are looking for help. By far the most common of these pleas for help is along the lines of 'Gmail password hackers have compromised my account, changed the recovery options, password, two-factor authentication method, and locked me out — what the heck can I do?' 
Unfortunately, these kinds of password-hacking compromises against Gmail users have become increasingly popular as threat actors of all types employ AI-driven attacks to access those highly valuable email accounts. Read on to discover how some of these AI attacks are evolving, as details emerge in a new Check Point Research report. But first, and rather fortunately, Google is fighting back when it comes to offering both protection against these increasingly sophisticated attackers and help in recovering accounts if a user has fallen victim. As long as you have had the forethought to provide a recovery telephone number or email address before the attack took place, then you have seven days in which you can regain access to your hacked Gmail account even if the attacker has changed them. Everyone uses a seatbelt when driving or being driven because it has been proven to dramatically improve safety and reduce the chances of fatality if involved in an accident when compared to not wearing one. Now replace seatbelt with recovery options, car with Gmail account, and accident with incident to arrive at a similar conclusion: having a recovery telephone number in place improves your chances of getting your account back if a hacker attacks. Likewise, using a phishing-resistant authentication technology, such as a passkey, instead of a password decreases the likelihood of an attacker being successful in the first place. To continue the motoring analogy, a passkey is like a car protected by driveway bollards and a remote kill switch rather than parking on the street and relying on an easily bypassed door lock. 'We recommend all users to set up a recovery phone as well as a recovery email on their account,' Gmail spokesperson Ross Richendrfer told me, 'these can be used in cases where users forget their own passwords, or an attacker changes the credentials after hijacking the account.' 
And therein lies the rub for any hacker: if you are the original account holder, despite the best efforts of an attacker to lock you out of your own account by changing all the security options, you can get access back as long as you act within seven days. 'Our automated account recovery process allows a user to use their original recovery factors for up to 7 days after it changes,' Richendrfer said, 'provided they set them up before the incident.' If you have found yourself locked out of your account following a Gmail password hack attack, Richendrfer said you can refer to the 'How to recover your Google account or Gmail' guidebook for step-by-step instructions on what to do next. Analysts at Check Point Research have published details of AI-powered threats, no longer theoretical and very much right here and evolving rapidly, that put your Gmail password at risk. 'As access to AI tools becomes more widespread,' Lotem Finkelstein, director of Check Point Research, said, 'threat actors exploit this shift in two key ways: by leveraging AI to enhance their capabilities and targeting organizations and individuals adopting AI technologies.' It's the former that I'm concerned about in the context of this article about losing control of your Gmail account. It should go without saying, however, that the same AI threats apply to whatever email platform you use, and beyond to most online service accounts in fact. The use of social engineering is the de facto tactic employed by most attackers looking to compromise a Gmail email account. Indeed, even those attacks that are looking to exploit a known security vulnerability will often begin by exploiting human nature first. These social engineering, or phishing, if you prefer, attacks will leverage every possible media type to convince the victim it is a genuine communication that needs to be interacted with as a matter of urgency. Be it by way of text, audio, or imagery, the phishing attacker will employ it. 
The problem is, as Check Point Research said, 'with recent advancements in AI, attackers can create authentic-looking materials at scale, conduct automated chats, and hold real-time audio and video conferences while impersonating others.' No wonder so many people are taken in, and so many passwords get compromised, leading to a Gmail account lockout. The Check Point report warned that AI-driven tools now proliferate on dark web and surface web criminal forums, leading to a critical compromise of our ability to rely upon audio and visual cues to determine fact from fiction. 'Fully autonomous audio deepfake tools for large-scale phone scams are already available,' Check Point said, 'meaning that recognizing a familiar face or voice is no longer sufficient proof of identity; instead, interactions must be reinforced by additional authentication measures.' Don't let Gmail password hackers lock you out of your account. Be alert to every communication and question everything — no matter how realistic it looks or sounds.

AI security report warns of rising deepfakes & Dark LLM threat

Techday NZ, 01-05-2025

Check Point Research has released its inaugural AI Security Report, detailing how artificial intelligence is affecting the cyber threat landscape, from deepfake attacks to generative AI-driven cybercrime and defences. The report explores four main areas where AI is reshaping both offensive and defensive actions in cyber security. According to Check Point Research, one in 80 generative AI prompts poses a high risk of sensitive data leakage, with one in 13 containing potentially sensitive information that could be exploited by threat actors. The study also highlights incidents of AI data poisoning linked to disinformation campaigns, as well as the proliferation of so-called 'Dark LLMs' such as FraudGPT and WormGPT. These large language models are being weaponised for cybercrime, enabling attackers to bypass existing security protocols and carry out malicious activities at scale.

Lotem Finkelstein, Director of Check Point Research, commented on the rapid transformation underway, stating, "The swift adoption of AI by cyber criminals is already reshaping the threat landscape. While some underground services have become more advanced, all signs point toward an imminent shift - the rise of digital twins. These aren't just lookalikes or soundalikes, but AI-driven replicas capable of mimicking human thought and behaviour. It's not a distant future - it's just around the corner."

The report examines how AI is enabling attackers to impersonate and manipulate digital identities, diminishing the boundary between what is authentic and fake online. The first threat identified is AI-enhanced impersonation and social engineering. Threat actors are now using AI to generate convincing phishing emails, audio impersonations, and deepfake videos. In one case, attackers successfully mimicked Italy's defence minister with AI-generated audio, demonstrating the sophistication of current techniques and the difficulty in verifying online identities.
Another prominent risk is large language model (LLM) data poisoning and disinformation. The study refers to an example involving Russia's disinformation network Pravda, where AI chatbots were found to repeat false narratives 33% of the time. This trend underscores the growing risk of manipulated data feeding back into public discourse and highlights the challenge of maintaining data integrity in AI systems.

The report also documents the use of AI for malware development and data mining. Criminal groups are reportedly harnessing AI to automate the creation of tailored malware, conduct distributed denial-of-service (DDoS) campaigns, and process stolen credentials. Notably, services like Gabbers Shop are using AI to validate and clean stolen data, boosting its resale value and targeting efficiency on illicit marketplaces.

A further area of risk is the weaponisation and hijacking of AI models themselves. Attackers have stolen LLM accounts or constructed custom Dark LLMs, such as FraudGPT and WormGPT. These advanced models allow actors to circumvent standard safety mechanisms and commercialise AI as a tool for hacking and fraud, accessible through darknet platforms.

On the defensive side, the report makes it clear that organisations must now presume that AI capabilities are embedded within most adversarial campaigns. This shift in assumption underlines the necessity for a revised approach to cyber defence. Check Point Research outlines several strategies for defending against AI-driven threats. These include using AI-assisted detection and threat hunting to spot synthetic phishing content and deepfakes, and adopting enhanced identity verification techniques that go beyond traditional methods. Organisations are encouraged to implement multi-layered checks encompassing text, voice, and video, recognising that trust in digital identity can no longer be presumed.
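The multi-layered checks the report recommends can be thought of as a policy over independent verification channels, where no single spoofable channel is trusted on its own. A hypothetical sketch (the channel names and the two-check threshold are assumptions for illustration, not taken from the report):

```python
from dataclasses import dataclass

@dataclass
class CheckResult:
    channel: str              # e.g. "voice_match", "video_liveness", "hardware_token"
    passed: bool
    deepfake_resistant: bool  # True for factors synthetic media cannot satisfy

def identity_verified(results: list[CheckResult], minimum: int = 2) -> bool:
    """Accept only if enough independent checks pass, and at least one
    passing check cannot be satisfied by synthetic audio or video alone."""
    passed = [r for r in results if r.passed]
    return len(passed) >= minimum and any(r.deepfake_resistant for r in passed)

# A convincing deepfake call can pass voice and video checks yet still fail
# the policy, because neither factor is deepfake-resistant.
deepfake_call = [
    CheckResult("voice_match", True, False),
    CheckResult("video_liveness", True, False),
]
legit_session = deepfake_call + [CheckResult("hardware_token", True, True)]
```

The design choice here is that the policy degrades safely: adding more spoofable channels never substitutes for one channel that synthetic media cannot fake.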
The report also stresses the importance of integrating AI context into threat intelligence, allowing cyber security teams to better recognise and respond to AI-driven tactics. Lotem Finkelstein added, "In this AI-driven era, cyber security teams need to match the pace of attackers by integrating AI into their defences. This report not only highlights the risks but provides the roadmap for securing AI environments safely and responsibly."
