
The Rise of 'Vibe Hacking' Is the Next AI Nightmare
Jun 4, 2025 6:00 AM In the very near future, victory will belong to the savvy blackhat hacker who uses AI to generate code at scale.
In the near future, one hacker may be able to unleash 20 zero-day attacks on different systems across the world all at once. Polymorphic malware could rampage across a codebase, using a bespoke generative AI system to rewrite itself as it learns and adapts. Armies of script kiddies could use purpose-built LLMs to unleash a torrent of malicious code at the push of a button.
Case in point: as of this writing, an AI system is sitting at the top of several leaderboards on HackerOne—an enterprise bug bounty system. The AI is XBOW, a system aimed at whitehat pentesters that 'autonomously finds and exploits vulnerabilities in 75 percent of web benchmarks,' according to the company's website.
AI-assisted hackers are a major fear in the cybersecurity industry, even if their potential hasn't quite been realized yet. 'I compare it to being on an emergency landing on an aircraft where it's like 'brace, brace, brace' but we still have yet to impact anything,' Hayden Smith, the cofounder of security company Hunted Labs, tells WIRED. 'We're still waiting to have that mass event.'
Generative AI has made it easier for anyone to code. The LLMs improve every day, new models generate more efficient code, and companies like Microsoft say they're using AI agents to help write their codebases. Anyone can spit out a Python script using ChatGPT now, and vibe coding—asking an AI to write code for you, even if you don't have much of an idea how to do it yourself—is popular. But there's also vibe hacking.
'We're going to see vibe hacking. And people without previous knowledge or deep knowledge will be able to tell AI what it wants to create and be able to go ahead and get that problem solved,' Katie Moussouris, the founder and CEO of Luta Security, tells WIRED.
Vibe hacking frontends have existed since 2023. Back then, a purpose-built LLM for generating malicious code called WormGPT spread on Discord groups, Telegram servers, and darknet forums. When security professionals and the media discovered it, its creators pulled the plug.
WormGPT faded away, but other services that billed themselves as blackhat LLMs, like FraudGPT, replaced it. But WormGPT's successors had problems. As security firm Abnormal AI notes, many of these apps may have just been jailbroken versions of ChatGPT with some extra code to make them appear as if they were a stand-alone product.
Better, then, if you're a bad actor, to just go to the source. ChatGPT, Gemini, and Claude are easily jailbroken. Most LLMs have guardrails that prevent them from generating malicious code, but there are whole communities online dedicated to bypassing those guardrails. Anthropic even offers a bug bounty to people who discover new jailbreaks in Claude.
'It's very important to us that we develop our models safely,' an OpenAI spokesperson tells WIRED. 'We take steps to reduce the risk of malicious use, and we're continually improving safeguards to make our models more robust against exploits like jailbreaks. For example, you can read our research and approach to jailbreaks in the GPT-4.5 system card, or in the OpenAI o3 and o4-mini system card.'
Google did not respond to a request for comment.
In 2023, security researchers at Trend Micro got ChatGPT to generate malicious code by prompting it to take on the role of a security researcher and pentester. ChatGPT would then happily generate PowerShell scripts based on databases of malicious code.
'You can use it to create malware,' Moussouris says. 'The easiest way to get around those safeguards put in place by the makers of the AI models is to say that you're competing in a capture-the-flag exercise, and it will happily generate malicious code for you.'
Unsophisticated actors like script kiddies are an age-old problem in the world of cybersecurity, and AI may well amplify their impact. 'It lowers the barrier to entry to cybercrime,' Hayley Benedict, a cyber intelligence analyst at RANE, tells WIRED.
But, she says, the real threat may come from established hacking groups who will use AI to further enhance their already fearsome abilities.
'It's the hackers that already have the capabilities and already have these operations,' she says. 'It's being able to drastically scale up these cybercriminal operations, and they can create the malicious code a lot faster.'
Moussouris agrees. 'The acceleration is what is going to make it extremely difficult to control,' she says.
Hunted Labs' Smith also says that the real threat of AI-generated code lies in the hands of someone who already knows code inside and out and uses it to scale up an attack. 'When you're working with someone who has deep experience and you combine that with, 'Hey, I can do things a lot faster that otherwise would have taken me a couple days or three days, and now it takes me 30 minutes.' That's a really interesting and dynamic part of the situation,' he says.
According to Smith, an experienced hacker could design a system that defeats multiple security protections and learns as it goes. The malicious code would rewrite its payload as it learns on the fly. 'That would be completely insane and difficult to triage,' he says.
Smith imagines a world where 20 zero-day events all happen at the same time. 'That makes it a little bit more scary,' he says.
Moussouris says that the tools to make that kind of attack a reality exist now. 'They are good enough in the hands of a good enough operator,' she says, but AI is not quite good enough yet for an inexperienced hacker to operate hands-off.
'We're not quite there in terms of AI being able to fully take over the function of a human in offensive security,' she says.
The primal fear that chatbot code sparks is that anyone will be able to do it, but the reality is that a sophisticated actor with deep knowledge of existing code is much more frightening. XBOW may be the closest thing to an autonomous 'AI hacker' that exists in the wild, and it's the creation of a team of more than 20 skilled people whose previous work experience includes GitHub, Microsoft, and half a dozen security companies.
It also points to another truth. 'The best defense against a bad guy with AI is a good guy with AI,' Benedict says.
For Moussouris, the use of AI by both blackhats and whitehats is just the next evolution of a cybersecurity arms race she's watched unfold over 30 years. 'It went from: 'I'm going to perform this hack manually or create my own custom exploit,' to, 'I'm going to create a tool that anyone can run and perform some of these checks automatically,'' she says.
'AI is just another tool in the toolbox, and those who do know how to steer it appropriately now are going to be the ones that make those vibey frontends that anyone could use.'
