
AI-generated code raises security risks as governance lags

Techday NZ

21 hours ago



A new report by Checkmarx highlights a growing trend in the use of AI coding assistants, with significant implications for application security and governance. The report, which surveyed over 1,500 Chief Information Security Officers (CISOs), application security managers and developers across North America, Europe, and Asia-Pacific, reveals that AI-generated code now constitutes a substantial proportion of software development across organisations worldwide. According to the findings, over half of respondents already use AI coding assistants, and 34% report that more than 60% of their code is generated using such tools. Despite the rapid adoption of generative AI in coding, the survey found that only 18% of organisations have formal policies in place governing the use of AI coding assistants. This points to a significant gap between technological uptake and the establishment of necessary governance frameworks to manage the resulting risks.

Vulnerable code and breach rates

The research also highlights that risky development practices, particularly under business pressure, are becoming increasingly normalised. The report states that 81% of organisations knowingly ship vulnerable code. Furthermore, 98% of organisations surveyed experienced a security breach linked to vulnerable code in the past 12 months, a notable rise from the 91% reporting breaches the previous year. Looking ahead, nearly a third (32%) of respondents expect breaches via APIs, including through shadow APIs or business logic attacks, within the next 12 to 18 months. Despite these heightened risks, the report found that fewer than half of respondents regularly deploy core security tools such as dynamic application security testing (DAST) or infrastructure-as-code scanning. DevSecOps, although widely discussed in the industry, is not yet universally adopted.
The survey revealed that only half of the organisations use essential DevSecOps tools, and the figure in North America stands at just 51%.

AI impacts developer roles and security practices

"The velocity of AI-assisted development means security can no longer be a bolt-on practice. It has to be embedded from code to cloud," said Eran Kinsbruner, Vice President of Portfolio Marketing. "Our research shows that developers are already letting AI write much of their code, yet most organizations lack governance around these tools. Combine that with the fact that 81% knowingly ship vulnerable code and you have a perfect storm. It's only a matter of time before a crisis is at hand."

The report argues that the use of AI coding assistants is not only expediting software creation but also eroding traditional developer ownership and broadening organisations' attack surfaces. Checkmarx's report proposes six strategic imperatives aimed at addressing these security challenges: shifting from awareness to action, embedding security from code to cloud, establishing guidance for AI use, operationalising security tools, preparing for agentic AI in security, and developing a culture that empowers developers.

Kinsbruner added: "To stay ahead, organizations must operationalize security tooling that is focused on prevention. They need to establish policies for AI usage and invest in agentic AI that can automatically analyze and fix issues in real time. AI-generated code will continue to proliferate; secure software will be the competitive differentiator in the coming years."

Regional perspectives

The report points out regional differences in risk exposure and practices. Chris Ledingham, Director, Northern Europe, commented: "Our research found that nearly one third (32%) of European respondents say their organization often deploys code with known vulnerabilities, compared with 24% of those in North America.
This suggests the need for a stronger focus across our region on embedding security into development. With AI now writing much of the code base, security leaders face heightened accountability. Boards and regulators will rightly expect CISOs to implement robust governance for AI-generated code and to ensure vulnerable software isn't being pushed to production."

Security tooling

The report's publication coincides with Checkmarx's introduction of its Developer Assist agent, which integrates with AI-native integrated development environments (IDEs) such as Windsurf by Cognition, Cursor, and GitHub Copilot. The tool is intended to deliver real-time, context-sensitive security guidance to developers to prevent vulnerabilities at the coding stage.

The full report, "Future of Application Security in the Era of AI," covers in further detail the findings on how organisations are managing the evolving risks posed by AI-enabled software development.
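The report itself does not include code samples, but a generic illustration may help make "shipping vulnerable code" concrete. The sketch below shows the classic SQL-injection pattern that static and dynamic application security testing tools are designed to flag, alongside the parameterized form they recommend; the function names and in-memory database are purely illustrative.

```python
import sqlite3

def find_user_unsafe(conn, username):
    # Vulnerable: attacker-controlled input is spliced directly into the SQL string.
    query = f"SELECT id FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn, username):
    # Safe: the driver binds the value as data, never as SQL syntax.
    return conn.execute("SELECT id FROM users WHERE name = ?", (username,)).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice'), (2, 'bob')")

# A classic injection payload that turns the WHERE clause into a tautology.
payload = "' OR '1'='1"
print(find_user_unsafe(conn, payload))  # returns every row: [(1,), (2,)]
print(find_user_safe(conn, payload))    # returns no rows: []
```

Code-stage scanners flag the string-formatted query in `find_user_unsafe` precisely because the vulnerable and safe versions behave identically on benign input, which is why such bugs survive review and ship to production.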

AI-Coding Becomes a Risky Norm as Use of AI-Coding Assistants Takes Off and More Than 80% of Organizations Ship Vulnerable Code

Business Wire

a day ago



Checkmarx, the leader in agentic AI-powered application security, today released the results of its annual survey, "Future of Application Security in the Era of AI," offering a candid assessment of how AI-accelerated development is reshaping the risk landscape and how to prepare for the year ahead. The study surveyed more than 1,500 CISOs, AppSec managers and developers across North America, Europe and Asia-Pacific to understand how organizations are adapting to a world where software is increasingly written by machines.

The findings paint a stark picture: AI-generated code is becoming mainstream, but governance is lagging. Half of respondents already use AI coding assistants, and 34% admit that more than 60% of their code is AI-generated, yet only 18% have policies governing this use. The growing adoption of AI coding assistants is eroding developer ownership and expanding the attack surface.

The research also shows that business pressure is normalizing risky practices. Eighty-one percent of organizations knowingly ship vulnerable code, and 98% experienced a breach stemming from vulnerable code in the past year, a sharp rise from 91% in 2024. Within the next 12 to 18 months, nearly a third (32%) of respondents expect Application Programming Interface (API) breaches via shadow APIs or business logic attacks. Despite these realities, fewer than half of respondents report deploying foundational security tools such as dynamic application security testing (DAST) or infrastructure-as-code scanning. While DevSecOps is widely discussed industry-wide, only half of organizations surveyed actively use core tools, and just 51% of North American organizations report adopting DevSecOps.
"The velocity of AI-assisted development means security can no longer be a bolt-on practice. It has to be embedded from code to cloud," said Eran Kinsbruner, vice president of portfolio marketing. "Our research shows that developers are already letting AI write much of their code, yet most organizations lack governance around these tools. Combine that with the fact that 81% knowingly ship vulnerable code and you have a perfect storm. It's only a matter of time before a crisis is at hand."

The report outlines six strategic imperatives for closing the application security readiness gap: move from awareness to action, embed "code-to-cloud" security, govern AI use in development, operationalize security tools, prepare for agentic AI in AppSec, and cultivate a culture of developer empowerment.

Kinsbruner added, "To stay ahead, organizations must operationalize security tooling that is focused on prevention. They need to establish policies for AI usage and invest in agentic AI that can automatically analyze and fix issues in real time. AI-generated code will continue to proliferate; secure software will be the competitive differentiator in the coming years."

The release of this report follows Checkmarx's announcement of general availability of its Developer Assist agent, with extensions to top AI-native integrated development environments (IDEs) including Windsurf by Cognition, Cursor, and GitHub Copilot. This new agent, the first in a family of agentic-AI tools to enhance security for developers, AppSec leaders and CISOs alike, delivers real-time, context-aware issue identification and guidance to developers as they code, enabling autonomous prevention. Download the full "Future of Application Security in the Era of AI" report at the Checkmarx website to learn how organizations can navigate the AI-accelerated risk landscape and build secure-by-default development practices.
About Checkmarx

Checkmarx is the leader in agentic AI-powered, cloud-native application security, empowering the world's largest development organizations with real-time scanning and closed-loop remediation that boost developer productivity on security tasks by up to 50%. Built on the powerful Checkmarx One platform, which scans over six trillion lines of code each year, Checkmarx is designed for large-scale, hybrid human- and AI-assisted development teams. Checkmarx. Always Ready to Run. Follow Checkmarx on LinkedIn, YouTube, and X.
