
Latest news with #WormGPT

AI security report warns of rising deepfakes & Dark LLM threat

Techday NZ
01-05-2025

Check Point Research has released its inaugural AI Security Report, detailing how artificial intelligence is affecting the cyber threat landscape, from deepfake attacks to generative AI-driven cybercrime and defences. The report explores four main areas where AI is reshaping both offensive and defensive actions in cyber security.

According to Check Point Research, one in 80 generative AI prompts poses a high risk of sensitive data leakage, and one in 13 contains potentially sensitive information that could be exploited by threat actors. The study also highlights incidents of AI data poisoning linked to disinformation campaigns, as well as the proliferation of so-called 'Dark LLMs' such as FraudGPT and WormGPT. These large language models are being weaponised for cybercrime, enabling attackers to bypass existing security protocols and carry out malicious activities at scale.

Lotem Finkelstein, Director of Check Point Research, commented on the rapid transformation underway: "The swift adoption of AI by cyber criminals is already reshaping the threat landscape. While some underground services have become more advanced, all signs point toward an imminent shift - the rise of digital twins. These aren't just lookalikes or soundalikes, but AI-driven replicas capable of mimicking human thought and behaviour. It's not a distant future - it's just around the corner."

The report examines how AI is enabling attackers to impersonate and manipulate digital identities, blurring the boundary between what is authentic and what is fake online.

The first threat identified is AI-enhanced impersonation and social engineering. Threat actors are now using AI to generate convincing phishing emails, audio impersonations, and deepfake videos. In one case, attackers successfully mimicked Italy's defence minister with AI-generated audio, demonstrating the sophistication of current techniques and the difficulty of verifying online identities.

Another prominent risk is large language model (LLM) data poisoning and disinformation. The study cites an example involving Russia's disinformation network Pravda, where AI chatbots were found to repeat false narratives 33% of the time. This trend underscores the growing risk of manipulated data feeding back into public discourse and highlights the challenge of maintaining data integrity in AI systems.

The report also documents the use of AI for malware development and data mining. Criminal groups are reportedly harnessing AI to automate the creation of tailored malware, conduct distributed denial-of-service (DDoS) campaigns, and process stolen credentials. Notably, services like Gabbers Shop are using AI to validate and clean stolen data, boosting its resale value and targeting efficiency on illicit marketplaces.

A further area of risk is the weaponisation and hijacking of AI models themselves. Attackers have stolen LLM accounts or constructed custom Dark LLMs, such as FraudGPT and WormGPT. These advanced models allow actors to circumvent standard safety mechanisms and commercialise AI as a tool for hacking and fraud, accessible through darknet platforms.

On the defensive side, the report makes clear that organisations must now presume AI capabilities are embedded within most adversarial campaigns. This shift in assumption underlines the need for a revised approach to cyber defence. Check Point Research outlines several strategies for defending against AI-driven threats. These include using AI-assisted detection and threat hunting to spot synthetic phishing content and deepfakes, and adopting enhanced identity verification techniques that go beyond traditional methods. Organisations are encouraged to implement multi-layered checks encompassing text, voice, and video, recognising that trust in digital identity can no longer be presumed. The report also stresses the importance of integrating AI context into threat intelligence, allowing cyber security teams to better recognise and respond to AI-driven tactics.

Lotem Finkelstein added, "In this AI-driven era, cyber security teams need to match the pace of attackers by integrating AI into their defences. This report not only highlights the risks but provides the roadmap for securing AI environments safely and responsibly."
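The report's defensive recommendations are described only at a high level. As one illustration of what AI-assisted detection of synthetic phishing content might look like in practice, the sketch below scores inbound email text with a few plain heuristics and marks where an organisation's own classifier or LLM would plug in. It is a minimal, hypothetical example; the terms, weights, and names are assumptions and are not drawn from the Check Point report.

```python
import re
from dataclasses import dataclass

# Hypothetical sketch: a heuristic email scorer to illustrate AI-assisted
# phishing triage. A real deployment would combine a trained classifier or
# LLM with the multi-layered text, voice, and video checks the report urges.

URGENCY_TERMS = {"urgent", "immediately", "verify your account",
                 "password expires", "wire transfer", "gift card"}

@dataclass
class EmailMessage:
    sender: str
    subject: str
    body: str

def suspicious_score(msg: EmailMessage) -> float:
    """Return a 0-1 score; higher means more likely to be phishing."""
    text = f"{msg.subject} {msg.body}".lower()
    score = 0.0

    # 1. Urgency / pressure language, common in generated lures.
    hits = sum(term in text for term in URGENCY_TERMS)
    score += min(hits * 0.2, 0.5)

    # 2. Markdown-style links whose visible text does not match the target domain.
    for shown, target in re.findall(r"\[([^\]]+)\]\((https?://[^)]+)\)", msg.body):
        if shown.lower() not in target.lower():
            score += 0.3

    # 3. Free-mail sender claiming to handle internal finance matters.
    if msg.sender.endswith(("@gmail.com", "@outlook.com")) and "finance" in text:
        score += 0.2

    # A production system would blend in a model-based score here, e.g.
    # score = 0.5 * score + 0.5 * classifier.predict_proba(text)
    return min(score, 1.0)

if __name__ == "__main__":
    sample = EmailMessage(
        sender="ceo.office@gmail.com",
        subject="Urgent wire transfer needed",
        body="Please act immediately: [company portal](https://login.example-payroll.xyz)",
    )
    print(f"phishing score: {suspicious_score(sample):.2f}")
```

In a real deployment this heuristic score would be only one signal alongside model-based text classification and the identity checks the report recommends.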

Stop Sleeping On AI: Why Security Teams Should Embrace The Technology

Forbes
03-04-2025

Ron Williams is the CEO and founder of .

Artificial intelligence (AI) is no longer a futuristic tool for cybersecurity. It's gone mainstream. Threat actors have integrated AI into their operations with alarming success, using tools like WormGPT, GhostGPT and even legitimate platforms like Google's Gemini AI to scale their attacks. Google's Threat Intelligence Group recently detailed how state-sponsored actors have been abusing Gemini AI to enhance reconnaissance, scripting and privilege escalation.

These factors lead to a harsh reality: the asymmetry of power in AI between cybersecurity and bad actors is growing, and security teams are falling behind. If defenders don't start using AI to automate workflows, mitigate threats and improve incident response, they risk being perpetually outpaced by modern attackers. The time to act is now, not after attackers have perfected the use of AI in their operations.

ChatGPT democratized consumer AI access, revolutionizing a whole range of industries. However, cybercriminals quickly recognized its potential for malicious use, and just a year after its launch, discussions on cybercrime networks about exploiting AI exploded, leading to an increase in AI-based attack strategies. Hundreds of thousands of ChatGPT accounts were being bought and sold on underground markets, and by mid-2023, WormGPT, a malicious chatbot designed to enhance business email compromise attacks and spear-phishing campaigns, sent shockwaves through the industry. WormGPT was marketed as an AI tool specifically trained on malicious datasets to improve cybercrime operations, prompting headlines warning of AI-powered cybercrime on the rise. But WormGPT was just the beginning. Variants like FraudGPT, DarkBERT (not to be confused with DarkBART) and GhostGPT followed.

Fast-forward to today, and cybercriminals have found multiple ways to weaponize AI for their operations:

• Bypassing ethical constraints: Mainstream AI models like ChatGPT and Claude refuse to generate phishing emails. However, attackers discovered ways to manipulate them into compliance using prompt engineering.
• Masquerading legitimate chatbots as malicious chatbots: Some cybercriminals have wrapped jailbroken AI instances within custom interfaces, branding them as their own evil variants and selling access to others.
• Training AI models on malicious datasets: Rather than relying on trickery, some groups have trained their own AI models, fine-tuning them with cybercrime-related data to generate more accurate attack strategies. This is essentially how WormGPT and similar tools evolved within months.

Why Security Teams Are Hesitant

Despite clear evidence of AI's role in advancing cybercrime, many security teams remain hesitant to embrace AI defenses. This reluctance often stems from three key concerns: lack of trust in AI, implementation complexity and job security fears.

Lack Of Trust In AI

Many cybersecurity professionals view AI as a 'black box' technology and are concerned that it's difficult to predict how AI will behave in a live security environment. Security teams worry that if something goes wrong, they won't be able to remediate the issue because they don't understand the model's decision-making process. While these concerns are valid, they can be addressed. Many AI-based workflows are built on well-documented APIs that offer transparency and allow customization. If security teams take the time to understand how AI-powered tools function in practical applications, much of their skepticism could be alleviated.

Implementation Complexity

Another major roadblock is the perceived difficulty of integrating AI into legacy security infrastructure. Many organizations assume that AI adoption requires a fundamental overhaul of existing systems, which is daunting and expensive. However, security teams can start small by identifying repetitive, time-consuming tasks that AI can automate. Take vulnerability management, for instance. Consultants spend a lot of time triaging vulnerabilities, mapping them to affected assets and prioritizing remediation efforts. AI can streamline this by automatically correlating vulnerabilities with exploitability data, assessing business impact and recommending remediation priorities. A simple exercise to test AI's effectiveness is to take a common, repetitive security task and design an AI-assisted workflow to replace it; an illustrative sketch of such a workflow follows this article. Even partial automation can yield a large return on investment in saved time and improved accuracy.

Job Displacement

Some security professionals fear that widespread AI adoption could automate them out of a job. While discussions about AI replacing analysts entirely are common in the industry, AI should be viewed as an augmentation tool rather than a replacement, and the focus should be on promoting this perspective. Organizations that upskill their employees to work alongside AI will develop a stronger, more efficient security team. The bigger point is that AI won't eliminate security teams - it will empower them. By automating time-consuming and mundane tasks, AI frees security analysts to focus on higher-value work, like investigating more complex threats, threat hunting and incident response.

How AI Helps Security Teams

Whether operating within a security operations center (SOC) or following a more agile approach, all security teams encounter repetitive tasks that can be automated. AI-powered security solutions can assist by:

• Automating repetitive alert investigations, reducing analyst burnout and improving response times.
• Improving detection capabilities by identifying patterns in large datasets faster than human analysts.

Consider a typical security analyst's workflow: they receive an alert, analyze it, extract indicators of compromise, query threat intelligence databases, determine whether it's a genuine threat, document the findings and respond accordingly. AI can automate much of this process, alleviating manual operational burdens; a second sketch following this article illustrates the triage step. The benefits of AI and autonomous agents extend beyond the SOC; AI can also improve web application security, agile security in software development lifecycles, penetration testing and threat intelligence gathering. Security teams don't need to overhaul their entire infrastructure overnight. Incremental AI adoption can deliver immediate benefits.

The Cost Of Inaction

AI is not a passing trend - it's the present and future of cybersecurity. Attackers are not waiting for defenders to catch up. They are actively refining AI-augmented attack methods, making their operations faster, more scalable and more effective. Security teams must recognize that the only way to counter AI-based cyber threats is to fight fire with fire.
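To make the vulnerability-management example above concrete, here is a minimal sketch of AI-assisted prioritization under stated assumptions: each finding carries a CVSS-style severity, an exploit-likelihood estimate (the kind of probability an EPSS-like model would supply), and an asset-criticality rating, and a weighted score orders the remediation queue. The data, weights, and field names are illustrative, not taken from the article or any particular product.

```python
from dataclasses import dataclass

# Hypothetical sketch: rank vulnerabilities by combining severity (CVSS-style),
# an exploitability estimate, and how critical the affected asset is to the
# business. Weights and sample data are illustrative assumptions.

@dataclass
class Finding:
    cve_id: str
    asset: str
    severity: float         # 0-10, CVSS-style base score
    exploit_prob: float     # 0-1, likelihood of exploitation in the wild
    asset_criticality: int  # 1 (lab box) to 5 (crown-jewel system)

def priority(f: Finding) -> float:
    """Higher value = remediate sooner. Simple weighted sum of normalized inputs."""
    return (f.severity / 10) * 0.4 + f.exploit_prob * 0.4 + (f.asset_criticality / 5) * 0.2

findings = [
    Finding("CVE-2024-0001", "internal-wiki",   severity=9.8, exploit_prob=0.02, asset_criticality=2),
    Finding("CVE-2024-0002", "payment-gateway", severity=7.5, exploit_prob=0.65, asset_criticality=5),
    Finding("CVE-2024-0003", "build-server",    severity=5.3, exploit_prob=0.10, asset_criticality=3),
]

for f in sorted(findings, key=priority, reverse=True):
    print(f"{f.cve_id:15} {f.asset:16} priority={priority(f):.2f}")
```

In this toy ranking the 7.5-severity flaw on the payment gateway outranks the 9.8 on a low-value internal system once exploitability and business impact are weighed in - the kind of contextual reordering the article attributes to AI-assisted triage, with the exploit probability being the piece a model would supply.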
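The alert-triage workflow described under "How AI Helps Security Teams" can likewise be partially automated. The sketch below is a hypothetical, self-contained illustration: it extracts indicators of compromise (IPs and SHA-256 hashes) from raw alert text with regular expressions and checks them against a local watchlist that stands in for a real threat-intelligence feed; in practice an LLM could then draft the analyst-facing summary and documentation.

```python
import re

# Hypothetical sketch of automated alert triage: extract indicators of
# compromise (IOCs) from raw alert text and check them against a watchlist.
# In practice the lookup would be a threat-intelligence API call, and an LLM
# could draft the write-up for the analyst.

IP_RE = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")
SHA256_RE = re.compile(r"\b[a-fA-F0-9]{64}\b")

KNOWN_BAD = {
    "203.0.113.42",  # documentation-range IP used here as a stand-in IOC
}

def triage(alert_text: str) -> dict:
    """Extract IOCs and return a machine-readable triage verdict."""
    iocs = set(IP_RE.findall(alert_text)) | set(SHA256_RE.findall(alert_text))
    matches = iocs & KNOWN_BAD
    return {
        "iocs": sorted(iocs),
        "matched_intel": sorted(matches),
        "verdict": "escalate" if matches else "close_as_benign",
    }

if __name__ == "__main__":
    alert = "Outbound connection from 10.0.0.7 to 203.0.113.42 flagged by EDR."
    print(triage(alert))
```

Even this small step removes the manual copy-and-paste of IOC extraction and lookup, leaving the analyst to focus on the judgment call and the response.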
