
Latest news with #CodeSecurity

AI-Generated Code Poses Major Security Risks in Nearly Half of All Development Tasks, Veracode Research Reveals

Yahoo

30-07-2025



Comprehensive Analysis of More Than 100 Large Language Models Exposes Security Gaps: Java Emerges as Highest-Risk Programming Language, While AI Misses 86% of Cross-Site Scripting Threats

BURLINGTON, Mass., July 30, 2025--(BUSINESS WIRE)--Veracode, a global leader in application risk management, today unveiled its 2025 GenAI Code Security Report, revealing critical security flaws in AI-generated code. The study analyzed 80 curated coding tasks across more than 100 large language models (LLMs) and found that while AI produces functional code, it introduces security vulnerabilities in 45 percent of cases. The research demonstrates a troubling pattern: when given a choice between a secure and an insecure way to write code, GenAI models chose the insecure option 45 percent of the time. Perhaps more concerning, the research also uncovered a critical trend: despite advances in LLMs' ability to generate syntactically correct code, security performance has not kept up, remaining unchanged over time.

"The rise of vibe coding, where developers rely on AI to generate code, typically without explicitly defining security requirements, represents a fundamental shift in how software is built," said Jens Wessling, Chief Technology Officer at Veracode. "The main concern with this trend is that they do not need to specify security constraints to get the code they want, effectively leaving secure coding decisions to LLMs. Our research reveals GenAI models make the wrong choices nearly half the time, and it's not improving."

AI is also enabling attackers to identify and exploit security vulnerabilities more quickly and effectively. AI-powered tools can scan systems at scale, identify weaknesses, and even generate exploit code with minimal human input. This lowers the barrier to entry for less-skilled attackers and increases the speed and sophistication of attacks, posing a significant threat to traditional security defenses.
Not only are vulnerabilities increasing, but the ability to exploit them is becoming easier.

LLMs Introduce Dangerous Levels of Common Security Vulnerabilities

To evaluate the security properties of LLM-generated code, Veracode designed a set of 80 code completion tasks with known potential for security vulnerabilities according to the MITRE Common Weakness Enumeration (CWE) system, a standard classification of software weaknesses that can turn into vulnerabilities. The tasks prompted more than 100 LLMs to auto-complete a block of code in a secure or insecure manner, and the research team then analyzed the results using Veracode Static Analysis. In 45 percent of all test cases, LLMs introduced vulnerabilities classified within the OWASP (Open Web Application Security Project) Top 10, the most critical web application security risks.

Veracode found Java to be the riskiest language for AI code generation, with a security failure rate of over 70 percent. Other major languages, such as Python, C#, and JavaScript, still presented significant risk, with failure rates between 38 percent and 45 percent. The research also revealed that LLMs failed to secure code against cross-site scripting (CWE-80) and log injection (CWE-117) in 86 percent and 88 percent of cases, respectively.

"Despite the advances in AI-assisted development, it is clear security hasn't kept pace," Wessling said. "Our research shows models are getting better at coding accurately but are not improving at security. We also found larger models do not perform significantly better than smaller models, suggesting this is a systemic issue rather than an LLM scaling problem."

Managing Application Risks in the AI Era

While GenAI development practices like vibe coding accelerate productivity, they also amplify risks.
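The two weakness classes the study singles out are easy to illustrate. The following Python sketch is not taken from the report's task set; it simply contrasts the kind of insecure completion the research describes with a secure alternative for log injection (CWE-117) and cross-site scripting (CWE-80):

```python
import html
import logging

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("app")

# Insecure completion (CWE-117): user input flows straight into the log.
# Input like "alice\nINFO login ok for admin" forges a second log entry.
def log_login_insecure(username: str) -> None:
    log.info("login attempt for %s", username)

# Secure completion: neutralize CR/LF so one request yields one log line.
def sanitize_for_log(value: str) -> str:
    return value.replace("\r", "\\r").replace("\n", "\\n")

def log_login_secure(username: str) -> None:
    log.info("login attempt for %s", sanitize_for_log(username))

# The XSS case (CWE-80) is analogous: escape user input before it is
# placed into HTML, instead of concatenating it in raw.
def greeting_insecure(name: str) -> str:
    return "<p>Hello, " + name + "</p>"          # script tags pass through

def greeting_secure(name: str) -> str:
    return "<p>Hello, " + html.escape(name) + "</p>"
```

In both cases the secure and insecure versions are equally "functional," which is the report's point: nothing forces a model, or a developer, toward the safe variant unless security is an explicit requirement.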
Veracode emphasizes that organizations need a comprehensive risk management program that prevents vulnerabilities before they reach production, by integrating code quality checks and automated fixes directly into the development workflow. As organizations increasingly leverage AI-powered development, Veracode recommends taking the following proactive measures to ensure security:

  • Integrate AI-powered tools like Veracode Fix into developer workflows to remediate security risks in real time.
  • Leverage Static Analysis to detect flaws early and automatically, preventing vulnerable code from advancing through development pipelines.
  • Embed security in agentic workflows to automate policy compliance and ensure AI agents enforce secure coding standards.
  • Use Software Composition Analysis (SCA) to ensure AI-generated code does not introduce vulnerabilities from third-party dependencies and open-source components.
  • Adopt bespoke AI-driven remediation guidance to empower developers with precise fix instructions and train them to use the recommendations effectively.
  • Deploy a Package Firewall to automatically detect and block malicious packages, vulnerabilities, and policy violations.

"AI coding assistants and agentic workflows represent the future of software development, and they will continue to evolve at a rapid pace," Wessling concluded. "The challenge facing every organization is ensuring security evolves alongside these new capabilities. Security cannot be an afterthought if we want to prevent the accumulation of massive security debt."

The complete 2025 GenAI Code Security Report is available to download on the Veracode website.

About Veracode

Veracode is a global leader in Application Risk Management for the AI era. Powered by trillions of lines of code scans and a proprietary AI-assisted remediation engine, the Veracode platform is trusted by organizations worldwide to build and maintain secure software from code creation to cloud deployment.
Thousands of the world's leading development and security teams use Veracode every second of every day to get accurate, actionable visibility of exploitable risk, achieve real-time vulnerability remediation, and reduce their security debt at scale. Veracode is a multi-award-winning company offering capabilities to secure the entire software development life cycle, including Veracode Fix, Static Analysis, Dynamic Analysis, Software Composition Analysis, Container Security, Application Security Posture Management, Malicious Package Detection, and Penetration Testing. Learn more on the Veracode blog, and on LinkedIn and X.

Copyright © 2025 Veracode, Inc. All rights reserved. Veracode is a registered trademark of Veracode, Inc. in the United States and may be registered in certain other jurisdictions. All other product names, brands or logos belong to their respective holders. All other trademarks cited herein are property of their respective owners.

Contacts
Press and Media: Katy Gwilliam, Head of Global Communications, Veracode, kgwilliam@

Datadog Broadens AI Security Features To Counter Critical Threats

Scoop

11-06-2025



Press Release – Datadog

Launch of Code Security and new security capabilities strengthen posture across the AI stack, from data and AI models to applications.

AUCKLAND – JUNE 11, 2025 – Datadog, Inc. (NASDAQ: DDOG), the monitoring and security platform for cloud applications, today announced new capabilities to detect and remediate critical security risks across customers' AI environments, from development to production, as the company further invests to secure its customers' cloud and AI applications.

AI has created a new security frontier in which organisations need to rethink existing threat models as AI workloads foster new attack surfaces. Every microservice can now spin up autonomous agents that can mint secrets, ship code and call external APIs without any human intervention. This means one mistake could trigger a cascading breach across the entire tech stack. The latest innovations to Datadog's Security Platform, presented at DASH, aim to deliver a comprehensive solution to secure agentic AI workloads.

'AI has exponentially increased the ever-expanding backlog of security risks and vulnerabilities organisations deal with. This is because AI-native apps are not deterministic; they're more of a black box and have an increased surface area that leaves them open to vulnerabilities like prompt or code injection,' said Prashant Prahlad, VP of Products, Security at Datadog. 'The latest additions to Datadog's Security Platform provide preventative and responsive measures, powered by continuous runtime visibility, to strengthen the security posture of AI workloads, from development to production.'

Securing AI Development

Developers increasingly rely on third-party code repositories, which expose them to poisoned code and hidden vulnerabilities, including those that stem from AI or LLM models, that are difficult to detect with traditional static analysis tools.
To address this problem, Datadog Code Security, now Generally Available, empowers developer and security teams to detect and prioritise vulnerabilities in their custom code and open-source libraries, and uses AI to drive remediation of complex issues in both AI and traditional applications, from development to production. It also prioritises risks based on runtime threat activity and business impact, empowering teams to focus on what matters most. Deep integrations with developer tools, such as IDEs and GitHub, allow developers to remediate vulnerabilities without disrupting development pipelines.

Hardening Security Posture of AI Applications

AI-native applications act autonomously in non-deterministic ways, which makes them inherently vulnerable to new types of attacks that attempt to alter their behaviour, such as prompt injection. To mitigate these threats, organisations need stronger security controls, such as separation of privileges, authorisation bounds, and data classification, across their AI applications and the underlying infrastructure.

Datadog LLM Observability, now Generally Available, monitors the integrity of AI models and performs toxicity checks that look for harmful behaviour across prompts and responses within an organisation's AI applications. In addition, with Datadog Cloud Security, organisations are able to meet AI security standards such as the NIST AI framework out of the box. Cloud Security detects and remediates risks such as misconfigurations, unpatched vulnerabilities, and unauthorised access to data, apps, and infrastructure. And with Sensitive Data Scanner (SDS), organisations can prevent sensitive data, such as personally identifiable information (PII), from leaking into LLM training or inference data sets, with support for AWS S3 and RDS instances now available in Preview.
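Conceptually, this kind of sensitive-data scanning comes down to pattern matching and redaction before a record can reach a training or inference data set. A minimal Python sketch of the idea follows; it is illustrative only, with invented patterns and names, and is not Datadog's SDS implementation:

```python
import re

# Two example PII patterns; a real scanner ships a much larger,
# curated rule library and handles structured sources like S3 objects.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact_pii(text: str) -> str:
    """Replace each PII match with a labeled placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[REDACTED:{label}]", text)
    return text

record = "user jane.doe@example.com reported an issue, SSN 123-45-6789"
print(redact_pii(record))
```

Running the redaction step in the ingestion path, rather than after storage, is what keeps PII from ever landing in an LLM data set.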
Securing AI at Runtime

The evolving complexity of AI applications is making it even harder for security analysts to triage alerts, separate threats from noise and respond on time. AI apps are particularly vulnerable to unbounded consumption attacks that lead to system degradation or substantial economic losses.

The Bits AI Security Analyst, a new AI agent integrated directly into Datadog Cloud SIEM, autonomously triages security signals, starting with those generated by AWS CloudTrail, and performs in-depth investigations of potential threats. It provides context-rich, actionable recommendations to help teams mitigate risks more quickly and accurately. It also helps organisations save time and costs by providing preliminary investigations and guiding Security Operations Centres to focus on the threats that truly matter.

Finally, Datadog's Workload Protection helps customers continuously monitor the interaction between LLMs and their host environment. With new LLM Isolation capabilities, available in Preview, it detects and blocks the exploitation of vulnerabilities, and enforces guardrails to keep production AI models secure.

To learn more about Datadog's latest AI Security capabilities, please visit the Datadog website. Code Security, new tools in Cloud Security, Sensitive Data Scanner, Cloud SIEM, Workload and App Protection, as well as new security capabilities in LLM Observability, were announced during the keynote at DASH, Datadog's annual conference; a replay of the keynote is available online. During DASH, Datadog also announced launches in AI Observability, Applied AI, Log Management and released its Internal Developer Portal.
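The triage idea behind an analyst agent can be sketched independently of any product: score each signal by severity and context so the likeliest real threats surface first. The fields and weights in this Python toy example are invented for illustration and are not Datadog's logic:

```python
from dataclasses import dataclass

@dataclass
class Signal:
    source: str         # e.g. "aws.cloudtrail"
    severity: int       # 1 (low) .. 5 (critical)
    anomalous: bool     # deviates from the account's baseline behaviour
    touches_prod: bool  # involves a production resource

def triage_score(s: Signal) -> int:
    """Higher score = investigate sooner."""
    score = s.severity
    if s.anomalous:
        score += 3      # baseline deviation is a strong indicator
    if s.touches_prod:
        score += 2      # production impact raises urgency
    return score

signals = [
    Signal("aws.cloudtrail", 2, False, False),
    Signal("aws.cloudtrail", 3, True, True),
]
ranked = sorted(signals, key=triage_score, reverse=True)
print([triage_score(s) for s in ranked])  # [8, 2]
```

An agentic triage system layers investigation and natural-language summaries on top, but the core value is the same: ordering the queue so a Security Operations Centre spends its time on the signals that matter.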

Datadog Expands AI Security Capabilities to Enable Comprehensive Protection from Critical AI Risks

Associated Press

10-06-2025



Launch of Code Security and new security capabilities strengthen posture across the AI stack, from data and AI models to applications.

New York, New York--(Newsfile Corp. - June 10, 2025) - Datadog, Inc. (NASDAQ: DDOG), the monitoring and security platform for cloud applications, today announced new capabilities to detect and remediate critical security risks across customers' AI environments, from development to production, as the company further invests to secure its customers' cloud and AI applications.

AI has created a new security frontier in which organizations need to rethink existing threat models as AI workloads foster new attack surfaces. Every microservice can now spin up autonomous agents that can mint secrets, ship code and call external APIs without any human intervention. This means one mistake could trigger a cascading breach across the entire tech stack. The latest innovations to Datadog's Security Platform, presented at DASH, aim to deliver a comprehensive solution to secure agentic AI workloads.

'AI has exponentially increased the ever-expanding backlog of security risks and vulnerabilities organizations deal with. This is because AI-native apps are not deterministic; they're more of a black box and have an increased surface area that leaves them open to vulnerabilities like prompt or code injection,' said Prashant Prahlad, VP of Products, Security at Datadog. 'The latest additions to Datadog's Security Platform provide preventative and responsive measures, powered by continuous runtime visibility, to strengthen the security posture of AI workloads, from development to production.'

Securing AI Development

Developers increasingly rely on third-party code repositories, which expose them to poisoned code and hidden vulnerabilities, including those that stem from AI or LLM models, that are difficult to detect with traditional static analysis tools.
To address this problem, Datadog Code Security, now Generally Available, empowers developer and security teams to detect and prioritize vulnerabilities in their custom code and open-source libraries, and uses AI to drive remediation of complex issues in both AI and traditional applications, from development to production. It also prioritizes risks based on runtime threat activity and business impact, empowering teams to focus on what matters most. Deep integrations with developer tools, such as IDEs and GitHub, allow developers to remediate vulnerabilities without disrupting development pipelines.

Hardening Security Posture of AI Applications

AI-native applications act autonomously in non-deterministic ways, which makes them inherently vulnerable to new types of attacks that attempt to alter their behavior, such as prompt injection. To mitigate these threats, organizations need stronger security controls, such as separation of privileges, authorization bounds, and data classification, across their AI applications and the underlying infrastructure.

Datadog LLM Observability, now Generally Available, monitors the integrity of AI models and performs toxicity checks that look for harmful behavior across prompts and responses within an organization's AI applications. In addition, with Datadog Cloud Security, organizations are able to meet AI security standards such as the NIST AI framework out of the box. Cloud Security detects and remediates risks such as misconfigurations, unpatched vulnerabilities, and unauthorized access to data, apps, and infrastructure. And with Sensitive Data Scanner (SDS), organizations can prevent sensitive data, such as personally identifiable information (PII), from leaking into LLM training or inference data sets, with support for AWS S3 and RDS instances now available in Preview.
Securing AI at Runtime

The evolving complexity of AI applications is making it even harder for security analysts to triage alerts, separate threats from noise and respond on time. AI apps are particularly vulnerable to unbounded consumption attacks that lead to system degradation or substantial economic losses.

The Bits AI Security Analyst, a new AI agent integrated directly into Datadog Cloud SIEM, autonomously triages security signals, starting with those generated by AWS CloudTrail, and performs in-depth investigations of potential threats. It provides context-rich, actionable recommendations to help teams mitigate risks more quickly and accurately. It also helps organizations save time and costs by providing preliminary investigations and guiding Security Operations Centers to focus on the threats that truly matter.

Finally, Datadog's Workload Protection helps customers continuously monitor the interaction between LLMs and their host environment. With new LLM Isolation capabilities, available in Preview, it detects and blocks the exploitation of vulnerabilities, and enforces guardrails to keep production AI models secure.

To learn more about Datadog's latest AI Security capabilities, please visit the Datadog website. Code Security, new tools in Cloud Security, Sensitive Data Scanner, Cloud SIEM, Workload and App Protection, as well as new security capabilities in LLM Observability, were announced during the keynote at DASH, Datadog's annual conference; a replay of the keynote is available online. During DASH, Datadog also announced launches in AI Observability, Applied AI, Log Management and released its Internal Developer Portal.

About Datadog

Datadog is the observability and security platform for cloud applications.
Our SaaS platform integrates and automates infrastructure monitoring, application performance monitoring, log management, user experience monitoring, cloud security and many other capabilities to provide unified, real-time observability and security for our customers' entire technology stack. Datadog is used by organizations of all sizes and across a wide range of industries to enable digital transformation and cloud migration, drive collaboration among development, operations, security and business teams, accelerate time to market for applications, reduce time to problem resolution, secure applications and infrastructure, understand user behavior and track key business metrics.

Forward-Looking Statements

This press release may include certain 'forward-looking statements' within the meaning of Section 27A of the Securities Act of 1933, as amended, or the Securities Act, and Section 21E of the Securities Exchange Act of 1934, as amended, including statements on the benefits of new products and features. These forward-looking statements reflect our current views about our plans, intentions, expectations, strategies and prospects, which are based on the information currently available to us and on assumptions we have made. Actual results may differ materially from those described in the forward-looking statements and are subject to a variety of assumptions, uncertainties, risks and factors that are beyond our control, including those risks detailed under the caption 'Risk Factors' and elsewhere in our Securities and Exchange Commission filings and reports, including the Annual Report on Form 10-K filed with the Securities and Exchange Commission on May 6, 2025, as well as future filings and reports by us. Except as required by law, we undertake no duty or obligation to update any forward-looking statements contained in this release as a result of new information, future events, changes in expectations or otherwise.
Contact: Dan Haggerty, [email protected]
