Latest news with #Veracode

Exclusive: Corridor raises $5.4M, hires Alex Stamos as security leader

Axios | 5 August 2025

Corridor, an AI security startup led by two former CISA employees, has raised $5.4 million and hired longtime security heavyweight Alex Stamos as its chief security officer.

Why it matters: Stamos, currently the CSO at SentinelOne and an adjunct professor at Stanford, is a prominent figure across both the cybersecurity industry and the broader tech ecosystem. His decision to join full-time signals the growing urgency of securing AI-generated code and marks a key endorsement for the startup, co-founded by Jack Cable and Ashwin Ramaswami, in a rapidly crowding field of AI-native security companies.

Driving the news: The $5.4 million seed round was led by AI-focused venture firm Conviction. Notable angel investors include Stamos, Bugcrowd founder Casey Ellis and Duo Security co-founder Jon Oberheide. Corridor already counts buzzy AI coding startup Cursor, fintech company Mercury and threat intelligence firm GreyNoise Intelligence as customers.

Zoom in: Corridor uses AI to automatically discover software vulnerabilities and triage bug bounty reports, including identifying context-heavy issues like authorization flaws that traditional tools often miss.

The big picture: AI has democratized who can write code, but those codebases are often riddled with security flaws that newbie coders can't detect. Nearly half of the programming tasks completed by AI models in a recent Veracode study resulted in code with known security vulnerabilities, the company reported last week in a test of more than 100 large language models. "If security teams are already struggling today, they're certainly going to struggle as engineers are using AI to write code 5-10 times faster," Cable told Axios.

Catch up quick: Stamos first met Cable and Ramaswami, both of whom are in their mid-20s, while they were students at Stanford. "I meet a lot of really smart students at Stanford, but very few of them are as dedicated to security as these two were," Stamos told Axios. Cable, Corridor's CEO, started bug hunting in high school and eventually ranked among the top 100 hackers on HackerOne. He later led the Secure by Design initiative at CISA, which pushed software vendors to bake in security from the start. Sixty-eight companies signed a pledge under that effort last year. Ramaswami, Corridor's CTO, previously worked alongside Cable at CISA and last year ran a high-profile campaign for Georgia's state Senate, making a name for himself in both tech and politics despite losing.

Between the lines: Stamos says AI is driving a wave of transformation unlike anything he's seen in his 25 years in the field, creating an enormous gap between how code is written and how it's secured. "These people have no idea how the software works," Stamos added. "And so it is completely impossible for them to understand then how it can be broken."

What to watch: Corridor is building tools that act as "an assistant across every stage of the product security lifecycle," Cable said. The team plans to use the seed round to hire more engineers; it currently has five employees.
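To make the "context-heavy authorization flaw" point concrete, here is a minimal, hypothetical Java sketch; the class, record, and method names are invented for illustration and are not taken from Corridor or the article. The insecure handler returns any invoice whose ID is requested, while the secure variant also checks that the record belongs to the requesting user. Syntactically the insecure version looks like ordinary code, which is why purely pattern-based tools tend to miss this class of bug.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical example of broken access control (an insecure direct object reference).
// Illustrates the weakness class only; nothing here is specific to Corridor's product.
public class InvoiceService {

    record Invoice(String id, String ownerUserId, String body) {}

    private final Map<String, Invoice> invoices = new HashMap<>();

    public InvoiceService() {
        invoices.put("inv-1", new Invoice("inv-1", "alice", "Alice's invoice"));
        invoices.put("inv-2", new Invoice("inv-2", "bob", "Bob's invoice"));
    }

    // INSECURE: any authenticated user can read any invoice just by guessing its ID.
    public Invoice getInvoiceInsecure(String requestingUserId, String invoiceId) {
        return invoices.get(invoiceId);
    }

    // SECURE: the handler also verifies that the invoice belongs to the requesting user.
    public Invoice getInvoiceSecure(String requestingUserId, String invoiceId) {
        Invoice invoice = invoices.get(invoiceId);
        if (invoice == null || !invoice.ownerUserId().equals(requestingUserId)) {
            throw new SecurityException("not authorized to view this invoice");
        }
        return invoice;
    }

    public static void main(String[] args) {
        InvoiceService service = new InvoiceService();
        // "bob" reading Alice's invoice succeeds in the insecure version...
        System.out.println(service.getInvoiceInsecure("bob", "inv-1").body());
        // ...and is blocked in the secure one.
        try {
            service.getInvoiceSecure("bob", "inv-1");
        } catch (SecurityException e) {
            System.out.println("blocked: " + e.getMessage());
        }
    }
}
```

The missing ownership check is invisible to a tool that only inspects syntax; spotting it requires understanding which user should be allowed to see which record, which is the kind of application context the article says traditional scanners lack.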

Read This Before You Trust Any AI-Written Code

Gizmodo | 31 July 2025

We are in the era of vibe coding, allowing artificial intelligence models to generate code based on a developer's prompt. Unfortunately, under the hood, the vibes are bad.

According to a recent report published by data security firm Veracode, about half of all AI-generated code contains security flaws. Veracode tasked over 100 different large language models with completing 80 separate coding tasks, spanning different coding languages and different types of applications. Per the report, each task had known potential vulnerabilities, meaning the models could complete each challenge in either a secure or an insecure way. The results were not exactly inspiring if security is your top priority: just 55% of completed tasks ultimately generated "secure" code.

Now, it'd be one thing if those vulnerabilities were little flaws that could easily be patched or mitigated. But they're often pretty major holes. The 45% of code that failed the security check produced a vulnerability from the Open Worldwide Application Security Project's top 10 security vulnerabilities: issues like broken access control, cryptographic failures, and data integrity failures. Basically, the output has big enough issues that you wouldn't want to just spin it up and push it live, unless you're looking to get hacked.

Perhaps the most interesting finding of the study, though, is not simply that AI models are regularly producing insecure code. It's that the models don't seem to be getting any better. While syntax has significantly improved over the last two years, with LLMs now producing compilable code nearly all the time, the security of that code has remained essentially flat over the same period. Even newer and larger models are failing to generate significantly more secure code.

The fact that the baseline of secure output for AI-generated code isn't improving is a problem, because the use of AI in programming is getting more popular and the attack surface is growing. Earlier this month, 404 Media reported on how a hacker managed to get Amazon's AI coding agent to delete files on the computers it was used on by injecting malicious code with hidden instructions into the tool's GitHub repository. Meanwhile, as AI agents become more common, so do agents capable of cracking the very same code. Recent research out of the University of California, Berkeley, found that AI models are getting very good at identifying exploitable bugs in code.

So AI models are consistently generating insecure code, and other AI models are getting really good at spotting those vulnerabilities and exploiting them. That's all probably fine.
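To picture what a "secure or insecure way" of finishing the same task looks like, here is a small Java sketch using plain JDBC. It is illustrative only: the method names and table are invented, and Veracode's actual test tasks are not published in the article. Both methods complete the same lookup, but the first splices user input into the SQL string (an injection flaw from the OWASP Top 10), while the second uses a parameterized query.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

// Two hypothetical "completions" of the same coding task.
public class UserLookup {

    // INSECURE completion: user input is concatenated into the SQL text,
    // so an email like "x' OR '1'='1" changes the meaning of the query.
    static boolean userExistsInsecure(Connection conn, String email) throws SQLException {
        String sql = "SELECT 1 FROM users WHERE email = '" + email + "'";
        try (Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(sql)) {
            return rs.next();
        }
    }

    // SECURE completion: the same task done with a parameterized query,
    // so the driver treats the email strictly as data.
    static boolean userExistsSecure(Connection conn, String email) throws SQLException {
        String sql = "SELECT 1 FROM users WHERE email = ?";
        try (PreparedStatement stmt = conn.prepareStatement(sql)) {
            stmt.setString(1, email);
            try (ResultSet rs = stmt.executeQuery()) {
                return rs.next();
            }
        }
    }
}
```

Both versions compile and "work" for well-behaved input, which is exactly why a functional test alone would not distinguish them; only the second survives hostile input.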

How A Clash Of Cultures Changed Software Security Forever

Forbes | 31 July 2025

Chris Wysopal is Founder and Chief Security Evangelist at Veracode.

In 1998, I found myself in an unexpected place: testifying before the U.S. Senate about computer security alongside my fellow L0pht members. We weren't executives or policymakers; we were hackers. But our message was clear: something had to change. Software was being shipped with critical vulnerabilities, and no one was being held accountable.

We got to the Senate floor because we made noise. We did full disclosure. We forced uncomfortable conversations. We weren't seeking notoriety; we were advocating for a safer digital world. Back then, responsible disclosure was ad hoc and adversarial. The tools we built and the research we published were often seen as threats rather than contributions. But we believed that exposing systemic flaws was the only way to compel progress. That mindset of transparency as a driver of accountability feels more relevant than ever.

Today's threat landscape is shaped by AI, automation and hyperconnectivity. Just as we once exposed buffer overflows and insecure protocols, today's researchers are surfacing flaws in machine learning models, hallucinated code and autonomous agents. The same principle applies: visibility must precede security. You can't fix what you can't see. Leaders need to prepare for vulnerability discovery at machine speed. Create pathways to disclose flaws uncovered by AI systems, whether in third-party code or your own models. Build red-teaming capabilities for your AI stack, and design systems that reward (not resist) the signals surfaced by independent researchers.

At first, L0pht operated outside the system because the system wouldn't listen. But over time, things changed. We sat down with Microsoft in the late 1990s to explain our intent. We weren't trying to embarrass anyone. We just believed users deserved to know when protocols were insecure. That conversation led to coordinated disclosure policies and, later, acknowledgment of researchers in vendor advisories. The lesson we learned, that collaboration beats confrontation, should guide leaders today.

Security isn't just a technical function; it's a human one. And culture determines whether people share what they know. CISOs should create internal equivalents of coordinated disclosure. Your engineers, product managers and legal teams must feel empowered to raise issues, even when they're inconvenient. Normalize the flow of uncomfortable truths. Adopt a blameless disclosure culture. And externally, build partnerships with the open-source community, independent researchers and other vendors that make collaboration frictionless and high-trust.

Our philosophy at L0pht was 'hack everything.' The goal was never just to break things, but to understand them. Security, to us, wasn't about checking boxes. It was about gaining a deeper grasp of how systems worked so we could make them safer. That approach shaped the work we did when we joined @stake in 2000 and, later, consulted with Microsoft to help secure products such as Internet Explorer 6. Our team introduced methodologies like threat modeling, fuzzing and runtime attack surface analysis that became foundational to Microsoft's Security Development Lifecycle.

Today, the pressure to move fast is orders of magnitude greater than it was back in our L0pht days. Leaders are constantly balancing innovation with compliance and risk mitigation, but the real opportunity lies in embedding security into the innovation process itself. Partner with engineering early in the development cycle. Build threat modeling into product design. View security not as a bottleneck but as a catalyst for better code and more resilient systems. The faster you move, the earlier security needs to be involved, because it's far more expensive and disruptive to fix things after the fact.

At its core, L0pht wasn't just a lab or a company. It was a culture. We shared tools, ideas and research openly because we believed in democratizing knowledge. That spirit helped seed today's bug bounty programs, open-source security tooling and responsible disclosure norms. As AI reshapes development, security and infrastructure, leaders need to cultivate a similar culture of curiosity and principled dissent. Hire for grit and creativity, not just credentials. Promote the quiet truth-tellers. Build psychological safety so people feel safe flagging issues even when it's politically risky. Security today isn't just about firewalls and encryption; it's about culture. And the most resilient organizations are the ones where people feel empowered to speak up, challenge assumptions and think like attackers, because they want to protect what matters.

It's easy to forget how radical it once was for a vendor to listen to a hacker. But that's the shift we helped drive in the early 2000s: from antagonism to collaboration, from underground to boardroom. Today, security researchers have a seat at the table, but the lessons of the past still apply. Vulnerabilities don't get fixed because we wish them away. They get fixed because someone insists that they can't be ignored. That insistence, combined with collaboration, transparency and a willingness to embrace uncomfortable truths, is what made the difference then. It's what still makes the difference now.

AI-Generated Code Poses Major Security Risks in Nearly Half of All Development Tasks, Veracode Research Reveals

Yahoo / Business Wire | 30 July 2025

Comprehensive Analysis of More Than 100 Large Language Models Exposes Security Gaps: Java Emerges as Highest-Risk Programming Language, While AI Misses 86% of Cross-Site Scripting Threats

BURLINGTON, Mass., July 30, 2025--(BUSINESS WIRE)--Veracode, a global leader in application risk management, today unveiled its 2025 GenAI Code Security Report, revealing critical security flaws in AI-generated code. The study analyzed 80 curated coding tasks across more than 100 large language models (LLMs), revealing that while AI produces functional code, it introduces security vulnerabilities in 45 percent of cases.

The research demonstrates a troubling pattern: when given a choice between a secure and an insecure way to write code, GenAI models chose the insecure option 45 percent of the time. Perhaps more concerning, Veracode's research also uncovered a critical trend: despite advances in LLMs' ability to generate syntactically correct code, security performance has not kept up, remaining unchanged over time.

"The rise of vibe coding, where developers rely on AI to generate code, typically without explicitly defining security requirements, represents a fundamental shift in how software is built," said Jens Wessling, Chief Technology Officer at Veracode. "The main concern with this trend is that they do not need to specify security constraints to get the code they want, effectively leaving secure coding decisions to LLMs. Our research reveals GenAI models make the wrong choices nearly half the time, and it's not improving."

AI is also enabling attackers to identify and exploit security vulnerabilities more quickly and effectively. Tools powered by AI can scan systems at scale, identify weaknesses, and even generate exploit code with minimal human input. This lowers the barrier to entry for less-skilled attackers and increases the speed and sophistication of attacks, posing a significant threat to traditional security defenses. Not only are vulnerabilities increasing, but the ability to exploit them is becoming easier.

LLMs Introduce Dangerous Levels of Common Security Vulnerabilities

To evaluate the security properties of LLM-generated code, Veracode designed a set of 80 code completion tasks with known potential for security vulnerabilities according to the MITRE Common Weakness Enumeration (CWE) system, a standard classification of software weaknesses that can turn into vulnerabilities. The tasks prompted more than 100 LLMs to auto-complete a block of code in a secure or insecure manner, which the research team then analyzed using Veracode Static Analysis. In 45 percent of all test cases, LLMs introduced vulnerabilities classified within the OWASP (Open Web Application Security Project) Top 10, the most critical web application security risks.

Veracode found Java to be the riskiest language for AI code generation, with a security failure rate over 70 percent. Other major languages, like Python, C#, and JavaScript, still presented significant risk, with failure rates between 38 percent and 45 percent. The research also revealed LLMs failed to secure code against cross-site scripting (CWE-80) and log injection (CWE-117) in 86 percent and 88 percent of cases, respectively.
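For readers unfamiliar with those CWE identifiers, the sketch below shows, in deliberately simplified Java, what the two weakness classes look like and one common way to avoid each. It is a generic illustration of the categories, not a sample from Veracode's test tasks, and the minimal HTML-encoding helper stands in for a maintained encoder library.

```java
import java.util.logging.Logger;

// Simplified illustrations of CWE-80 (cross-site scripting) and CWE-117 (log injection).
public class CweExamples {

    private static final Logger LOG = Logger.getLogger(CweExamples.class.getName());

    // CWE-80, insecure: untrusted input is echoed straight into an HTML page,
    // so a name like "<script>...</script>" executes in the victim's browser.
    static String greetingHtmlInsecure(String name) {
        return "<p>Hello, " + name + "!</p>";
    }

    // CWE-80, mitigated: HTML-significant characters are encoded before output.
    // (Real applications should use a maintained encoding library instead of this helper.)
    static String greetingHtmlSecure(String name) {
        String encoded = name.replace("&", "&amp;")
                             .replace("<", "&lt;")
                             .replace(">", "&gt;")
                             .replace("\"", "&quot;");
        return "<p>Hello, " + encoded + "!</p>";
    }

    // CWE-117, insecure: attacker-controlled text containing newlines can forge
    // extra, legitimate-looking log entries.
    static void logLoginInsecure(String username) {
        LOG.info("login failed for " + username);
    }

    // CWE-117, mitigated: strip line breaks from untrusted input before logging it.
    static void logLoginSecure(String username) {
        String sanitized = username.replaceAll("[\\r\\n]", "_");
        LOG.info("login failed for " + sanitized);
    }

    public static void main(String[] args) {
        System.out.println(greetingHtmlInsecure("<script>alert(1)</script>"));
        System.out.println(greetingHtmlSecure("<script>alert(1)</script>"));
        logLoginInsecure("eve\nINFO: admin login succeeded");
        logLoginSecure("eve\nINFO: admin login succeeded");
    }
}
```

In both cases the secure and insecure variants are equally "functional" for benign input, which is consistent with the report's point that functional correctness alone says nothing about whether a model chose the safe completion.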
"Despite the advances in AI-assisted development, it is clear security hasn't kept pace," Wessling said. "Our research shows models are getting better at coding accurately but are not improving at security. We also found larger models do not perform significantly better than smaller models, suggesting this is a systemic issue rather than an LLM scaling problem."

Managing Application Risks in the AI Era

While GenAI development practices like vibe coding accelerate productivity, they also amplify risks. Veracode emphasizes that organizations need a comprehensive risk management program that prevents vulnerabilities before they reach production by integrating code quality checks and automated fixes directly into the development workflow. As organizations increasingly leverage AI-powered development, Veracode recommends the following proactive measures:

  • Integrate AI-powered tools like Veracode Fix into developer workflows to remediate security risks in real time.
  • Leverage Static Analysis to detect flaws early and automatically, preventing vulnerable code from advancing through development pipelines.
  • Embed security in agentic workflows to automate policy compliance and ensure AI agents enforce secure coding standards.
  • Use Software Composition Analysis (SCA) to ensure AI-generated code does not introduce vulnerabilities from third-party dependencies and open-source components.
  • Adopt bespoke AI-driven remediation guidance to empower developers with precise fix instructions and train them to use the recommendations effectively.
  • Deploy a Package Firewall to automatically detect and block malicious packages, vulnerabilities, and policy violations.

"AI coding assistants and agentic workflows represent the future of software development, and they will continue to evolve at a rapid pace," Wessling concluded. "The challenge facing every organization is ensuring security evolves alongside these new capabilities. Security cannot be an afterthought if we want to prevent the accumulation of massive security debt."

The complete 2025 GenAI Code Security Report is available to download on the Veracode website.

About Veracode

Veracode is a global leader in Application Risk Management for the AI era. Powered by trillions of lines of code scans and a proprietary AI-assisted remediation engine, the Veracode platform is trusted by organizations worldwide to build and maintain secure software from code creation to cloud deployment. Thousands of the world's leading development and security teams use Veracode every second of every day to get accurate, actionable visibility of exploitable risk, achieve real-time vulnerability remediation, and reduce their security debt at scale. Veracode is a multi-award-winning company offering capabilities to secure the entire software development life cycle, including Veracode Fix, Static Analysis, Dynamic Analysis, Software Composition Analysis, Container Security, Application Security Posture Management, Malicious Package Detection, and Penetration Testing. Learn more on the Veracode blog, and on LinkedIn and X.

Copyright © 2025 Veracode, Inc. All rights reserved. Veracode is a registered trademark of Veracode, Inc. in the United States and may be registered in certain other jurisdictions. All other product names, brands or logos belong to their respective holders. All other trademarks cited herein are property of their respective owners.

Contacts
Press and Media: Katy Gwilliam, Head of Global Communications, Veracode, kgwilliam@

