AI-generated code raises security risks as governance lags
A new report from Checkmarx highlights the accelerating adoption of AI coding assistants and its significant implications for application security and governance.
The report, which surveyed more than 1,500 Chief Information Security Officers (CISOs), application security managers, and developers across North America, Europe, and Asia-Pacific, reveals that AI-generated code now makes up a substantial proportion of software development worldwide. According to the findings, over half of respondents already use AI coding assistants, and 34% report that more than 60% of their code is generated with such tools.
Despite the rapid adoption of generative AI in coding, the survey found that only 18% of organisations have formal policies governing the use of AI coding assistants, pointing to a significant gap between technological uptake and the governance frameworks needed to manage the resulting risks.
Vulnerable code and breach rates
The research also highlights that risky development practices, particularly under business pressure, are becoming increasingly normalised. The report states that 81% of organisations knowingly ship vulnerable code, and 98% of those surveyed experienced a security breach linked to vulnerable code in the past 12 months, a notable rise from the 91% who reported a breach the previous year.
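To make the figures concrete, the snippet below shows one of the most common classes of vulnerability shipped under deadline pressure: an SQL injection introduced by building a query with string interpolation, a pattern coding assistants have been known to reproduce. It is an illustrative sketch, not code from the report, and the schema and function names are hypothetical.

```python
import sqlite3

def find_user_vulnerable(conn: sqlite3.Connection, username: str):
    # VULNERABLE: user input is interpolated straight into the SQL text,
    # so a value such as "' OR '1'='1" rewrites the meaning of the query.
    query = f"SELECT id, username FROM users WHERE username = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # SAFE: a parameterised query keeps user input as data, never as SQL.
    return conn.execute(
        "SELECT id, username FROM users WHERE username = ?", (username,)
    ).fetchall()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, username TEXT)")
    conn.execute("INSERT INTO users (username) VALUES ('alice'), ('bob')")

    # The injected input dumps every row from the vulnerable function...
    print(find_user_vulnerable(conn, "' OR '1'='1"))  # [(1, 'alice'), (2, 'bob')]
    # ...while the parameterised version correctly returns nothing.
    print(find_user_safe(conn, "' OR '1'='1"))        # []
```

Application security tools of the kind discussed in the report are designed to flag exactly this pattern before it reaches production.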
Looking ahead, nearly a third (32%) of respondents expect breaches via APIs within the next 12 to 18 months, including through shadow APIs (endpoints running outside an organisation's documented inventory) or business logic attacks. Despite these heightened risks, the report found that fewer than half of respondents regularly deploy core security tools such as dynamic application security testing (DAST) or infrastructure-as-code scanning.
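For readers unfamiliar with the tooling, DAST probes a running application from the outside rather than inspecting its source code. The sketch below is a heavily simplified illustration of that idea, assuming a hypothetical staging URL and using the widely available requests library; real scanners crawl the application and test far more payload classes.

```python
import requests

# Hypothetical target; a real DAST tool would crawl the app to find endpoints.
TARGET = "https://staging.example.com/search"
MARKER = "dast-probe-<script>1</script>"

def check_reflected_input(url: str) -> None:
    # Send a distinctive payload and check whether the application echoes it
    # back unescaped, a simple heuristic for reflected cross-site scripting.
    resp = requests.get(url, params={"q": MARKER}, timeout=10)
    if MARKER in resp.text:
        print(f"[!] {url} reflects unsanitised input (possible XSS)")
    else:
        print(f"[ok] {url} does not reflect the probe verbatim")

if __name__ == "__main__":
    check_reflected_input(TARGET)
```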
DevSecOps, although widely discussed in the industry, is not yet universally adopted: the survey revealed that only around half of organisations use essential DevSecOps tools, with the figure in North America standing at just 51%.
AI impacts developer roles and security practices
"The velocity of AI-assisted development means security can no longer be a bolt-on practice. It has to be embedded from code to cloud," said Eran Kinsbruner, Vice President of Portfolio Marketing at Checkmarx. "Our research shows that developers are already letting AI write much of their code, yet most organizations lack governance around these tools. Combine that with the fact that 81% knowingly ship vulnerable code and you have a perfect storm. It's only a matter of time before a crisis is at hand."
The report argues that the use of AI coding assistants is not only expediting software creation but also eroding traditional developer ownership and broadening organisations' attack surfaces.
Checkmarx's report proposes six strategic imperatives aimed at addressing these security challenges: shifting from awareness to action, embedding security from code to cloud, establishing guidance for AI use, operationalising security tools, preparing for agentic AI in security, and developing a culture that empowers developers.
Kinsbruner added: "To stay ahead, organizations must operationalize security tooling that is focused on prevention. They need to establish policies for AI usage and invest in agentic AI that can automatically analyze and fix issues real-time. AI generated code will continue to proliferate; secure software will be the competitive differentiator in the coming years."
Regional perspectives
The report points out regional differences in risk exposure and practices. Chris Ledingham, Director for Northern Europe, commented: "Our research found that nearly one third, 32%, of European respondents say their organization often deploys code with known vulnerabilities, compared with 24% of those in North America. This suggests the need for a stronger focus across our region on embedding security into development. With AI now writing much of the code base, security leaders face heightened accountability. Boards and regulators will rightly expect CISOs to implement robust governance for AI generated code and to ensure vulnerable software isn't being pushed to production."
Security tooling
The report's publication coincides with Checkmarx's introduction of its Developer Assist agent, which integrates with AI-native development environments and assistants such as Windsurf by Cognition, Cursor, and GitHub Copilot. The tool is designed to give developers real-time, context-sensitive security guidance so that vulnerabilities are prevented at the coding stage.
The full report, "Future of Application Security in the Era of AI," details further findings on how organisations are managing the evolving risks of AI-enabled software development.