
Latest news with #WormGPT

Hackbots Accelerate Cyber Risk — And How to Beat Them

Arabian Post

10-06-2025

  • Arabian Post

Hackbots Accelerate Cyber Risk — And How to Beat Them

Security teams globally face mounting pressure as artificial‑intelligence‑driven 'hackbots' emerge as a new front in cyber warfare. These autonomous agents, powered by advanced large language models and automation frameworks, are increasingly capable of probing systems, identifying exploits and, in some instances, launching attacks with minimal human intervention. Experts warn that, left unchecked, hackbots could rapidly outpace traditional scanning tools and elevate the scale of cyber threats.

Hackbots combine the intelligence of modern LLMs—most notably GPT‑4—with orchestration layers that enable intelligent decision‑making: adapting test payloads, refining configurations and parsing results. Unlike legacy scanners, these systems analyse target infrastructure and dynamically choose tools and strategies, often flagging novel vulnerabilities that evade conventional detection. Academic research demonstrates that GPT‑4 agents can autonomously perform complex operations such as blind SQL injection and database schema extraction without prior specifications.

Corporate platforms have begun integrating hackbot capabilities into ethical hacking pipelines. HackerOne, for instance, now requires human review before any vulnerability submission, underscoring that hackbots remain tools under human supervision. Cybersecurity veteran Jack Nunziato explains that 'hackbots leverage advanced machine learning … to dynamically and intelligently hack applications', a leap forward from rigid automated scans. Such systems are transforming both the offensive and defensive security landscapes.

Alongside legitimate use, underground markets are offering hackbots-as-a-service. Products like WormGPT and FraudGPT are being promoted on darknet forums, providing scripting and social‑engineering automation under subscription models. Though some users criticise their limited utility—one described WormGPT as 'just an old cheap version of ChatGPT'—the consensus is that even basic automation can significantly lower the barrier to entry into cybercrime. Security analysts caution that these services, even if imperfect, democratise attack capabilities and may increase the volume and reach of malicious campaigns.

While hackbots enable faster and more thorough scans, they lack human creativity. Modern systems depend on human-in-the-loop oversight, where experts validate results and craft exploit chains for end-to-end attacks. Yet the speed advantage is real: automated agents can tirelessly comb through code, execute payloads, and surface anomalies across large environments. One cybersecurity researcher noted that hackbots are 'getting good, really good, at simulating … a curious, determined hacker'.

Defensive strategies must evolve rapidly to match this new threat. The UK's National Cyber Security Centre has warned that AI will likely increase both the volume and severity of cyberattacks. GreyNoise Intelligence recently reported that actors are increasingly exploiting long-known vulnerabilities in edge devices as defenders lag on patching, demonstrating how automation favours adversaries. Organisations must enhance their baseline defences to withstand hackbots, which operate at machine scale.

A multi-layered response is critical. Continuous scanning, hardened endpoint controls, identity‑centric solutions and robust patch management programmes form the backbone of resilience. Privileged Access Management, especially following frameworks established this year, is being touted as indispensable.
Likewise, advanced Endpoint Detection and Response and Extended Detection & Response platforms use AI defensively, applying behavioural analytics to flag suspicious activity before attackers can exploit high-velocity toolkits.

Legal and policy frameworks are also adapting. Bug bounty platforms now integrate hackbot disclosures under rules requiring human oversight, promoting ethical use while mitigating abuse. Security regulators and insurers are demanding evidence of AI-aware defences, particularly in critical sectors, aligning with risk-based compliance models.

Industry insiders acknowledge the dual nature of the phenomenon. Hackbots serve as force multipliers for both defenders and attackers. As one expert puts it, 'these tools could reshape how we defend systems, making it easier to test at scale … On the other hand, hackbots can … scale sophisticated attacks faster than any human ever could'. That tension drives the imperative: treat hackbots not merely as exotic scanners that fail to capture human logic, but as systems that succeed in deploying exploitation at scale.

Recent breakthroughs in LLM‑powered exploit automation heighten the stakes. A February 2024 study revealed GPT‑4 agents autonomously discovering SQL vulnerabilities on live websites. With LLMs maturing rapidly, future iterations may craft exploit payloads, bypass filters and compose stealthier attacks.

To pre‑empt this, defenders must embed AI strategies within security operations. Simulated red-team exercises should leverage hackbot‑style agents, exposing defenders to their speed and variety. Build orchestration workflows that monitor, sandbox and neutralise test feeds. Maintain visibility over AI‑driven tooling across pipelines and supply chains.

Ethical AI practices extend beyond tooling. Security teams must ensure any in‑house or third‑party AI system operates under strict governance: access control, audit logging, prompt validation and fallbacks to expert review. In contexts where hackbots are used, quarterly audits should verify compliance with secure‑by‑design frameworks.
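As a rough illustration of those governance controls, the sketch below wires access control, prompt validation, audit logging and a human-review fallback around a stubbed model call. Every name in it (validate_prompt, governed_query, the deny-list patterns) is a hypothetical placeholder rather than anything drawn from the products mentioned above; it outlines the pattern, not a production implementation.

```python
import json
import re
from datetime import datetime, timezone

# Crude deny-list used for prompt validation; a real deployment would rely on
# a policy engine or classifier rather than regexes alone (illustrative only).
BLOCKED_PATTERNS = [
    r"(?i)ignore (all|previous) instructions",
    r"(?i)disable (logging|audit)",
]


def validate_prompt(prompt: str) -> bool:
    """Return False if the prompt matches a known-bad pattern."""
    return not any(re.search(p, prompt) for p in BLOCKED_PATTERNS)


def audit_log(event: dict, path: str = "ai_audit.jsonl") -> None:
    """Append a timestamped, structured record of every AI interaction."""
    event["ts"] = datetime.now(timezone.utc).isoformat()
    with open(path, "a", encoding="utf-8") as fh:
        fh.write(json.dumps(event) + "\n")


def call_model(prompt: str) -> str:
    """Stand-in for whichever LLM backend the team actually uses."""
    return f"[model response to: {prompt[:40]}]"


def governed_query(user: str, prompt: str, allowed_users: set) -> str:
    """Access control, prompt validation, model call and audit trail in one path."""
    if user not in allowed_users:
        audit_log({"user": user, "action": "denied", "reason": "no access"})
        return "Access denied; request routed to an expert reviewer."
    if not validate_prompt(prompt):
        audit_log({"user": user, "action": "rejected", "prompt": prompt})
        return "Prompt rejected by policy; escalated for human review."
    answer = call_model(prompt)
    audit_log({"user": user, "action": "answered", "prompt": prompt})
    return answer


if __name__ == "__main__":
    print(governed_query("analyst1", "Summarise today's EDR alerts", {"analyst1"}))
```

In practice the deny-list would give way to a proper policy engine, and the audit trail would feed the same monitoring stack that watches the rest of the environment, but the shape of the control flow stays the same.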

AI security report warns of rising deepfakes & Dark LLM threat

Techday NZ

01-05-2025

  • Techday NZ

AI security report warns of rising deepfakes & Dark LLM threat

Check Point Research has released its inaugural AI Security Report, detailing how artificial intelligence is affecting the cyber threat landscape, from deepfake attacks to generative AI-driven cybercrime and defences. The report explores four main areas where AI is reshaping both offensive and defensive actions in cyber security.

According to Check Point Research, one in 80 generative AI prompts poses a high risk of sensitive data leakage, with one in 13 containing potentially sensitive information that could be exploited by threat actors. The study also highlights incidents of AI data poisoning linked to disinformation campaigns, as well as the proliferation of so-called 'Dark LLMs' such as FraudGPT and WormGPT. These large language models are being weaponised for cybercrime, enabling attackers to bypass existing security protocols and carry out malicious activities at scale.

Lotem Finkelstein, Director of Check Point Research, commented on the rapid transformation underway, stating, "The swift adoption of AI by cyber criminals is already reshaping the threat landscape. While some underground services have become more advanced, all signs point toward an imminent shift - the rise of digital twins. These aren't just lookalikes or soundalikes, but AI-driven replicas capable of mimicking human thought and behaviour. It's not a distant future - it's just around the corner." The report examines how AI is enabling attackers to impersonate and manipulate digital identities, blurring the boundary between what is authentic and what is fake online.

The first threat identified is AI-enhanced impersonation and social engineering. Threat actors are now using AI to generate convincing phishing emails, audio impersonations, and deepfake videos. In one case, attackers successfully mimicked Italy's defence minister with AI-generated audio, demonstrating the sophistication of current techniques and the difficulty in verifying online identities.

Another prominent risk is large language model (LLM) data poisoning and disinformation. The study refers to an example involving Russia's disinformation network Pravda, where AI chatbots were found to repeat false narratives 33% of the time. This trend underscores the growing risk of manipulated data feeding back into public discourse and highlights the challenge of maintaining data integrity in AI systems.

The report also documents the use of AI for malware development and data mining. Criminal groups are reportedly harnessing AI to automate the creation of tailored malware, conduct distributed denial-of-service (DDoS) campaigns, and process stolen credentials. Notably, services like Gabbers Shop are using AI to validate and clean stolen data, boosting its resale value and targeting efficiency on illicit marketplaces.

A further area of risk is the weaponisation and hijacking of AI models themselves. Attackers have stolen LLM accounts or constructed custom Dark LLMs, such as FraudGPT and WormGPT. These advanced models allow actors to circumvent standard safety mechanisms and commercialise AI as a tool for hacking and fraud, accessible through darknet platforms.

On the defensive side, the report makes it clear that organisations must now presume that AI capabilities are embedded within most adversarial campaigns. This shift in assumption underlines the necessity for a revised approach to cyber defence. Check Point Research outlines several strategies for defending against AI-driven threats.
These include using AI-assisted detection and threat hunting to spot synthetic phishing content and deepfakes, and adopting enhanced identity verification techniques that go beyond traditional methods. Organisations are encouraged to implement multi-layered checks encompassing text, voice, and video, recognising that trust in digital identity can no longer be presumed. The report also stresses the importance of integrating AI context into threat intelligence, allowing cyber security teams to better recognise and respond to AI-driven tactics. Lotem Finkelstein added, "In this AI-driven era, cyber security teams need to match the pace of attackers by integrating AI into their defences. This report not only highlights the risks but provides the roadmap for securing AI environments safely and responsibly."
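The one-in-80 figure above is, in effect, an argument for screening prompts before they leave the organisation. The sketch below, which is not taken from the Check Point report, shows one simple form such a pre-submission check might take; the regex patterns, function names and blocking behaviour are illustrative assumptions only.

```python
import re

# Example indicators of sensitive content; real deployments use far richer
# detection (classifiers, exact-match fingerprints). These are placeholders.
SENSITIVE_PATTERNS = {
    "email": r"[\w.+-]+@[\w-]+\.[\w.-]+",
    "payment_card": r"\b(?:\d[ -]?){13,16}\b",
    "api_key": r"\b(?:sk|key|token)[-_][A-Za-z0-9]{16,}\b",
    "ip_address": r"\b(?:\d{1,3}\.){3}\d{1,3}\b",
}


def scan_prompt(prompt: str) -> list:
    """Return the categories of sensitive data detected in a prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if re.search(pattern, prompt)]


def allow_prompt(prompt: str) -> bool:
    """Block prompts that appear to contain sensitive data before submission."""
    findings = scan_prompt(prompt)
    if findings:
        print(f"Blocked: possible {', '.join(findings)} in prompt")
        return False
    return True


if __name__ == "__main__":
    allow_prompt("Draft a reply to jane.doe@example.com about invoice 4411")
    allow_prompt("Summarise the attached incident report")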

Stop Sleeping On AI: Why Security Teams Should Embrace The Technology

Forbes

03-04-2025

  • Business
  • Forbes

Stop Sleeping On AI: Why Security Teams Should Embrace The Technology

Ron Williams is the CEO and founder of .

Artificial intelligence (AI) is no longer a futuristic tool for cybersecurity. It's gone mainstream. Threat actors have integrated AI into their operations with alarming success, using tools like WormGPT, GhostGPT and even legitimate platforms like Google's Gemini AI to scale their attacks. Google's Threat Intelligence Group recently detailed how state-sponsored actors have been abusing Gemini AI to enhance reconnaissance, scripting and privilege escalation.

These factors lead to a harsh reality: the asymmetry of power in AI between cybersecurity and bad actors is growing, and security teams are falling behind. If defenders don't start using AI to automate workflows, mitigate threats and improve incident response, they risk being perpetually outpaced by modern attackers. The time to act is now, not after attackers have perfected the use of AI in their operations.

ChatGPT democratized consumer AI access, revolutionizing a whole range of industries. However, cybercriminals quickly recognized its potential for malicious use, and just a year after its launch, discussions on cybercrime networks about exploiting AI exploded, leading to an increase in AI-based attack strategies. Hundreds of thousands of ChatGPT accounts were being bought and sold on underground markets, and by mid-2023, WormGPT, a malicious chatbot designed to enhance business email compromise attacks and spear-phishing campaigns, sent shockwaves through the industry.

WormGPT was marketed as an AI tool specifically trained on malicious datasets to improve cybercrime operations, prompting headlines warning of AI-powered cybercrime on the rise. But WormGPT was just the beginning. Variants like FraudGPT, DarkBERT (not to be confused with DarkBART) and GhostGPT followed. Fast-forward to today: cybercriminals have found multiple ways to weaponize AI for their operations:

  • Bypassing ethical constraints: Mainstream AI models like ChatGPT and Claude refuse to generate phishing emails. However, attackers discovered ways to manipulate them into compliance using prompt engineering.
  • Masquerading legitimate chatbots as malicious chatbots: Some cybercriminals have wrapped jailbroken AI instances within custom interfaces, branding them as their own evil variants and selling access to others.
  • Training AI models on malicious datasets: Rather than relying on trickery, some groups have trained their own AI models, fine-tuning them with cybercrime-related data to generate more accurate attack strategies. This is essentially how WormGPT and similar tools evolved within months.

Why Security Teams Are Hesitant

Despite clear evidence of AI's role in advancing cybercrime, many security teams remain hesitant to embrace AI defenses. This reluctance often stems from three key concerns: lack of trust in AI, implementation complexity and job security fears.

Lack Of Trust In AI

Many cybersecurity professionals view AI as a 'black box' technology and are concerned that it's difficult to predict how AI will behave in a live security environment. Security teams worry that if something goes wrong, they won't be able to remediate the issue due to their lack of understanding of the model's decision-making process. However, while these concerns are valid, they can be addressed. Many AI-based workflows are built on well-documented APIs that offer transparency and allow customization. If security teams take the time to understand how AI-powered tools function in practical applications, much of their skepticism could be alleviated.

Implementation Complexity

Another major roadblock is the perceived difficulty of integrating AI into legacy security infrastructure. A lot of organizations assume that AI adoption requires a fundamental overhaul of existing systems, which is daunting and expensive. However, security teams can start small by identifying repetitive, time-consuming tasks that AI can automate.

Take vulnerability management, for instance. Consultants spend a lot of time triaging vulnerabilities, mapping them to affected assets and prioritizing remediation efforts. AI can optimize this by automatically correlating vulnerabilities with exploitability data, assessing business impact and recommending remediation priorities. A simple exercise to test AI's effectiveness is to take a common, repetitive security task and design an AI-assisted workflow to replace it. Even partial automation can yield a large return on investment in saved time and improved accuracy.

Job Displacement

Some security professionals fear that widespread AI adoption could automate them out of a job. While discussions about AI replacing analysts entirely are common in the industry, AI should be viewed as an augmentation tool rather than a replacement, and the focus should be on promoting this perspective. Organizations that upskill their employees to work alongside AI will develop a stronger, more efficient security team.

The bigger point here is that AI won't eliminate security teams—it will empower them. By automating time-consuming and mundane tasks, security analysts can focus on higher-value work, like investigating more complex threats, threat hunting and incident response.

How AI Helps Security Teams

Whether operating within a security operations center (SOC) or following a more agile approach, all security teams encounter repetitive tasks that can be automated. AI-powered security solutions can assist by:

  • Automating repetitive alert investigations, reducing analyst burnout and improving response times.
  • Improving detection capabilities by identifying patterns in large datasets faster than human analysts.

Consider a typical security analyst's workflow: they receive an alert, analyze it, extract indicators of compromise, query threat intelligence databases, determine if it's a genuine threat, document the findings and respond accordingly. AI automates much of this process, alleviating manual operational burdens.

The benefits of AI and autonomous agents extend beyond the SOC; AI can also improve web application security, agile security in software development lifecycles, penetration testing and threat intelligence gathering. Security teams don't need to overhaul their entire infrastructure overnight. Incremental AI adoption can have immediate benefits.

The Cost Of Inaction

AI is not a passing trend—it's the present and future of cybersecurity. Attackers are not waiting for defenders to catch up. They are actively refining AI-augmented attack methods, making their operations faster, more scalable and more effective. Security teams must recognize that the only way to counter AI-based cyber threats is to fight fire with fire.
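To make the analyst workflow described above concrete, here is a minimal sketch of its repetitive portion (indicator extraction, enrichment, documentation) with the verdict deliberately left to a human. The threat_intel_lookup stub and the IOC_PATTERNS table are assumptions for illustration, not references to any real platform or vendor API.

```python
import json
import re
from datetime import datetime, timezone

# Crude indicator patterns; they overlap (an IP also matches the domain regex)
# and exist only to show the shape of the workflow.
IOC_PATTERNS = {
    "ipv4": r"\b(?:\d{1,3}\.){3}\d{1,3}\b",
    "sha256": r"\b[a-fA-F0-9]{64}\b",
    "domain": r"\b[a-z0-9-]+(?:\.[a-z0-9-]+)+\b",
}


def extract_iocs(alert_text: str) -> dict:
    """Pull candidate indicators of compromise out of raw alert text."""
    return {kind: sorted(set(re.findall(pattern, alert_text)))
            for kind, pattern in IOC_PATTERNS.items()}


def threat_intel_lookup(indicator: str) -> str:
    """Stub: a real team would query its own threat-intelligence platform here."""
    return "unknown"


def triage(alert_text: str) -> dict:
    """Automate the rote steps of triage; the verdict stays with an analyst."""
    iocs = extract_iocs(alert_text)
    enrichment = {i: threat_intel_lookup(i)
                  for kind in iocs for i in iocs[kind]}
    return {
        "ts": datetime.now(timezone.utc).isoformat(),
        "iocs": iocs,
        "enrichment": enrichment,
        "verdict": "needs_analyst_review",  # human stays in the loop
    }


if __name__ == "__main__":
    sample = "Outbound beacon to 203.0.113.7 and update.example-cdn.com observed"
    print(json.dumps(triage(sample), indent=2))
```

The point mirrors the article's argument: automation handles the rote extraction and lookup, while the analyst keeps the final decision.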
