4 days ago
A Cat And Mouse Game: Addressing Vibe Coding's Security Challenges
Shahar Man is co-founder and CEO of Backslash Security.
Love it or hate it, AI is writing a lot of our code. In our previous article about AI-generated code's security pitfalls, we noted Anthropic CEO Dario Amodei's prediction that 'AI will write 90% of the code for software engineers within the next three to six months,' an assertion that seems to be bearing out, even within the tech giants.
Within the broader AI-code umbrella, vibe coding is becoming dominant. For the uninitiated: Many of us have used a tool like GitHub Copilot, which plugs into your integrated development environment (IDE). As you're writing code, it generates full blocks based on what you type. That capability has been around for a couple of years.
Vibe coding is even cooler. You can use a chat-based interface to ask for parts of the code you want built, or even a full application. For instance, you can use Loveable, a 'vibe coding' SaaS interface, and type in: 'Build an app called 'Unscramble' that helps people organize their thoughts before writing an article.' You can describe whatever you want and the interface will generate the app for you.
Lovable exists at one end of development, where you can build a full application from scratch. On the other end, you have something like CoPilot, which just supports coding. There's also a middle ground, where things get interesting: Tools like Cursor AI, a $20 plugin for your IDE that gives you a chat interface, are experiencing widespread adoption. You can ask Cursor things like 'implement this function' or 'get this package,' and it does it. It sits in the space between full-on app building and simple code suggestions.
Programs like Cursor are everywhere right now. And that's raising red flags for many enterprise security teams.
As recently as a few months ago, tech organizations were saying, 'We're not using these AI coding programs here.' This has proven an unrealistic stance to maintain; vibe coding is catching on like wildfire across the industry. Still, many organizational leaders will swear up and down that they 'don't allow' Cursor AI in their company—but frankly, they wouldn't know if a developer had paid the $20 for the plugin unless they've been monitoring closely. Most often, no one would even know it's being used; most security teams have no visibility whatsoever into their developers' IDEs. This is a reality that organizations need to be aware of.
Suppose you're an organizational leader reading this. In that case, your first thought may be, 'Oh no, I need to figure out who is using Cursor AI.' (Interestingly, one of the key features of newer AI security platforms is the ability to give you visibility across the organization: who's using these tools, to what extent and whether they're following the right security practices.) But instead of going on a crusade of trying to catch AI-assisted coders red-handed, it's more productive to assume these tools are being used and work within that framework to secure your organization.
The development environment—whether it's Visual Studio Code, IntelliJ or whatever you're using—is more than just a place to write code. It's becoming an entire ecosystem for developers to plug into additional tools and functionalities powered by AI. Even if your developers don't use one of the newfangled 'vibe coding' tools, they might be using other forms of AI assistance.
Enter Model Context Protocol servers (MCPs). In a nutshell, MCPs are a way to extend large language models (LLMs) with specific domain knowledge or capabilities. For example, if ChatGPT doesn't know something about the popular project management tool Jira, it might use a 'Jira MCP' to open a ticket in your system.
There are now tens of thousands of these MCPs; developers are adding them directly into their IDEs. Many have few to no security standards, many are unofficial extensions to official products—and some, no doubt, might be malicious. This introduces a whole new layer of exposure. The IDE is no longer just a code editor—it's a threat vector. In theory, someone could easily build a Jira MCP that opens tickets and silently extracts passwords or sends data elsewhere. It might look innocent, but it's not.
This raises a huge question for organizations: How do we whitelist the right MCPs and block the ones that could pose a risk? The first step, as with any security protocol, is awareness. Don't dismiss vibe coding, or its constituent threats, as a series of passing trends. It's here to stay, and the productivity boost it offers is massive. So, instead of trying to block it, figure out how to embrace it safely.
Beyond that awareness, security typically progresses in five stages:
1. Visibility: Understand what's going on. Who's using MCPs? Which ones? How often?
2. Governance: Create policies and rules. Which MCPs and models are approved? Which aren't? How can they be prompted to create secure code?
3. Automation: Automate those policies so they're seamlessly applied. Developers won't have to ask; the environment just 'knows' what's allowed and acts accordingly.
4. Guidance/Enablement: In the past, security teams might have injected alerts or recommendations into the IDE (like warnings about known vulnerabilities), but they still had no idea what was happening inside that environment. With tools like MCPs extending what IDEs can do, security teams can manage them more actively and vigilantly.
5. Enforcement: Once you have that governance in place—understanding what's being used, what's allowed and what's recommended—you can move toward actually enforcing your policies organization-wide.
We try to look at the security situation optimistically as well as pragmatically: Given the pace of AI and vibe coding advancement, we have an opportunity to get security in place in parallel with the trend itself. Since the code is born in AI, we should fix it with AI as it's born, securing it in real time. That's the real opportunity for the industry today.
We refer to this as 'vibe securing,' and it has three pillars:
• The first is visibility—understanding what IDEs are being used and where, which MCPs are in use, whether they are safe and which LLMs are employed. It's about controlling the IDE environments.
• The second—and most important—is securing the code generated through vibe coding. When developers use these tools "naively," recent research shows that 9 out of 10 times, the code contains vulnerabilities. It's critical to clean up those issues before the application is generated, and there are ways to do this automatically without waiting for LLMs to 'learn' security.
• The third is to empower more security-aware developers (or those who their organizations push to do the right things) by giving them contextual, real-time advice that preempts any mistakes they might end up making.
Is this approach sustainable long-term? Currently, we're at a T-junction. Some would argue that as vibe coding evolves, the security around it will evolve as well. Over time, the models might become more secure and the generated code could be safer—maybe even to the point where you don't need security vendors to fix it post-generation.
However, that ideal scenario never really materializes. Take open source, for example: Even in well-maintained projects where people do care about security, the outcome isn't always secure. Vulnerabilities can arise. More importantly, no one can afford to wait for this ideal scenario—once again, the software development horses are leaving the stable well before security can close the stable doors.
Some might say, 'Won't developers use AI to generate code, and then security teams will just use their own AI to secure it?' That's a bit of a joke. First, it's extremely wasteful—you're spinning up multiple AI engines just to cancel each other out. Second, there's no guarantee it'll work.
It's always better to secure from the start, to preempt rather than react and not to rely on some imaginary AI army to clean up afterward. AI security is a cat-and-mouse game; the key is which one your organization wants to be.
Forbes Technology Council is an invitation-only community for world-class CIOs, CTOs and technology executives. Do I qualify?