
Only 18% of critical vulnerabilities truly worth prioritising, according to Datadog report
Datadog today released its new report, the State of DevSecOps 2025, which found that only a fraction of critical vulnerabilities are truly worth prioritising.
To better gauge the true severity of a vulnerability, Datadog developed a prioritisation algorithm that factors runtime context into the vulnerability's Common Vulnerability Scoring System (CVSS) base score. Runtime context captures attributes that CVSS does not take into account, such as whether the vulnerable code runs in a production environment or whether the affected application is exposed to the internet. This reduces noise and surfaces the issues that are most urgent. After runtime context was applied, Datadog found that only 18% of vulnerabilities with a critical CVSS score (fewer than one in five) were still considered critical.
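Datadog has not published the algorithm itself. As a rough sketch of the idea, a context-aware score might start from the CVSS base score and downgrade findings that lack real-world exploit preconditions; all field names and thresholds below are illustrative assumptions, not Datadog's implementation:

```python
# Hypothetical sketch of runtime-context-aware severity adjustment.
# Field names and downgrade weights are assumptions for illustration.
from dataclasses import dataclass

@dataclass
class Finding:
    cve_id: str
    cvss_base: float        # CVSS base score, 0.0-10.0
    in_production: bool     # does the vulnerable code run in production?
    internet_exposed: bool  # is the affected service publicly reachable?
    code_executed: bool     # is the vulnerable library actually loaded?

def adjusted_severity(f: Finding) -> str:
    score = f.cvss_base
    # Downgrade findings missing real-world exploit preconditions.
    if not f.in_production:
        score -= 2.0
    if not f.internet_exposed:
        score -= 1.5
    if not f.code_executed:
        score -= 2.5
    score = max(score, 0.0)
    if score >= 9.0:
        return "critical"
    if score >= 7.0:
        return "high"
    return "medium or lower"

# A CVSS-critical finding that never runs in production drops a tier:
f = Finding("CVE-2025-0001", cvss_base=9.8, in_production=False,
            internet_exposed=True, code_executed=True)
print(adjusted_severity(f))  # -> "high"
```

Under a rule set like this, only findings that retain all their exploit preconditions keep a critical rating, consistent with the report's observation that fewer than one in five CVSS-critical vulnerabilities survive re-scoring.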
'The State of DevSecOps 2025 report found that security engineers are wasting a lot of time on vulnerabilities that aren't necessarily all that severe,' said Andrew Krug, Head of Security Advocacy at Datadog. 'The massive amount of noise security teams have to deal with is a major issue because it distracts from prioritising the really critical vulnerabilities. If defenders are able to spend less time triaging issues, they can reduce their organisations' attack surface all the faster. Focusing on easily exploitable vulnerabilities that are running in production environments for publicly exposed applications will yield the greatest real-world improvements in security posture.'
Another key finding from the report was that vulnerabilities are particularly prevalent in Java services: 44% of Java applications contain a known-exploited vulnerability. Across the other ecosystems covered in the report (Go, Python, .NET, PHP, Ruby and JavaScript), the average share of applications with a known-exploited vulnerability was only 2%.
In addition to being more likely to contain high-impact vulnerabilities, Java applications are also patched more slowly than those from other programming ecosystems. The report found that applications in the Java-based Apache Maven ecosystem took an average of 62 days to apply library fixes, compared with 46 days for the .NET ecosystem and 19 days for applications built using JavaScript-based npm packages.
Other key findings from the report include:
Attackers continue to target the software supply chain: Datadog's report identified thousands of malicious PyPI and npm libraries. Some of these packages were malicious by design and mimicked a legitimate package (for instance, passports-js mimicking the legitimate passport library), a technique known as typosquatting. Others were active takeovers of popular, legitimate dependencies (such as Ultralytics, Solana web3.js, and lottie-player). These techniques are used by both state-sponsored actors and cybercriminals. (A toy typosquat check is sketched in the first example after this list.)
Credential management is improving, but slowly: One of the most common causes of data breaches is long-lived credentials. Last year, 63% of organisations used long-lived credentials at least once to authenticate GitHub Actions pipelines; this year that figure dropped to 58%, a positive sign that organisations are slowly improving their credential management practices. (A toy workflow scan is sketched in the second example after this list.)
Outdated libraries are a challenge for all developers: Across all programming languages, dependencies run months behind their latest major update. Services that are deployed less frequently are also more likely to use out-of-date libraries: dependencies in services deployed less than once a month are 47% more outdated than those in services deployed daily. This matters because outdated libraries increase the likelihood that a dependency contains unpatched, exploitable vulnerabilities. (A rough way to measure the lag is sketched in the third example after this list.)
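On the typosquatting finding: the report does not describe Datadog's detection method, but the core idea is easy to illustrate: flag package names that are suspiciously close to, but not exactly, a well-known package. The popular-package list and similarity threshold below are toy assumptions:

```python
# Toy typosquat heuristic: flag names very similar to, but not exactly,
# a well-known package. Not Datadog's detection method; the popular-
# package list and threshold are illustrative only.
from difflib import SequenceMatcher

POPULAR = {"passport", "express", "lodash", "requests", "numpy"}

def possible_typosquat(name: str, threshold: float = 0.75) -> str | None:
    for legit in POPULAR:
        if name == legit:
            return None  # the real package, not a squat
        if SequenceMatcher(None, name, legit).ratio() >= threshold:
            return legit  # suspiciously close to a popular name
    return None

print(possible_typosquat("passports-js"))  # -> "passport"
```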
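On the credential finding: a first step many teams take is locating where static secrets are still wired into CI. This sketch greps GitHub Actions workflow files for common long-lived credential names; the pattern list is an illustrative assumption, and the hardened alternative is short-lived OIDC federation:

```python
# Toy scanner for long-lived credential references in GitHub Actions
# workflows. The secret names are common conventions, not an
# exhaustive or authoritative list.
import pathlib
import re

LONG_LIVED_PATTERNS = [
    r"AWS_SECRET_ACCESS_KEY",   # static AWS access keys
    r"GCP_SA_KEY",              # exported GCP service-account JSON
    r"AZURE_CLIENT_SECRET",     # static Azure app registration secret
]

def scan_workflows(repo_root: str = ".") -> list[tuple[str, str]]:
    hits = []
    workflows = pathlib.Path(repo_root, ".github", "workflows")
    for wf in workflows.glob("*.y*ml"):  # matches .yml and .yaml
        text = wf.read_text(errors="ignore")
        for pattern in LONG_LIVED_PATTERNS:
            if re.search(pattern, text):
                hits.append((wf.name, pattern))
    return hits

for name, pattern in scan_workflows():
    print(f"{name}: references long-lived credential '{pattern}'")
```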
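On the outdated-libraries finding: one way to quantify "months behind" is the gap between the release dates of the pinned and latest versions. This sketch measures that lag for a Python dependency via the public PyPI JSON API (endpoint and fields as documented; error handling omitted for brevity):

```python
# Rough "dependency lag": days between the release of the version in
# use and the latest release on PyPI, via the public JSON API.
import json
import urllib.request
from datetime import datetime

def first_upload(info: dict, version: str) -> datetime:
    ts = info["releases"][version][0]["upload_time_iso_8601"]
    return datetime.fromisoformat(ts.replace("Z", "+00:00"))

def lag_days(package: str, pinned: str) -> float:
    url = f"https://pypi.org/pypi/{package}/json"
    with urllib.request.urlopen(url) as resp:
        info = json.load(resp)
    latest = info["info"]["version"]
    delta = first_upload(info, latest) - first_upload(info, pinned)
    return delta.total_seconds() / 86400

print(f"requests 2.25.0 is {lag_days('requests', '2.25.0'):.0f} days behind")
```

Running a check like this across a service's lockfile gives the per-dependency staleness that the report aggregates.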
For the report, Datadog analysed tens of thousands of applications and container images across thousands of cloud environments to assess the types of risks defenders need to be aware of and the practices they can adopt to improve their security posture.

Related Articles


Techday NZ | 22-05-2025
Picus launches tool for real-time validation of exploitable risks
Picus Security has introduced a new capability designed to help security teams determine which vulnerabilities in their environments are actually exploitable. The new feature, called Picus Exposure Validation, uses real-time attack simulations to provide evidence-based assessments of vulnerability risk within a specific organisation's environment. The approach targets a familiar problem: scanners identify large numbers of vulnerabilities, but not all of them require immediate attention or remediation.

With more than 40,000 new Common Vulnerabilities and Exposures (CVEs) disclosed in 2024, 61% of them ranked high or critical, security teams often struggle to respond effectively, as traditional vulnerability management methods can lead to inefficient allocation of resources. Picus Security says the new capability helps security teams distinguish vulnerabilities that can actually be exploited in their unique systems from those that can safely be deprioritised.

Traditional vulnerability management is typically driven by severity metrics such as the Common Vulnerability Scoring System (CVSS) and the Exploit Prediction Scoring System (EPSS), which provide generalised risk indicators but may not account for an individual organisation's existing security controls and asset criticality. Picus Exposure Validation aims to fill this gap with the Picus Exposure Score, an evidence-based, context-aware metric intended to reflect actual risk, according to the company. The system continuously tests security controls using real-world attack techniques to determine whether known vulnerabilities can be exploited given the organisation's current defences. Findings are automatically updated and presented in transparent reports, enabling quicker and more confident decision-making in response to new security threats.

Volkan Ertürk, Co-Founder and Chief Technology Officer at Picus Security, commented: "The challenge today isn't finding vulnerabilities, it's knowing which ones matter in your unique environment. CVSS, EPSS and KEV offer theoretical risk signals. Picus Exposure Validation delivers proof by testing threats against your production defenses in real time. It replaces assumptions with evidence so security teams can focus on vulnerabilities that are actually exploitable."

Key features highlighted by the company include the ability for security teams to prioritise remediation work more accurately, safely deprioritise less urgent vulnerabilities, and reduce manual workloads through automated validation. The solution is said to include tailored recommendations to quickly improve the effectiveness of security controls, offering an alternative when immediate patching is not feasible.

A global industrial firm reported that, after deploying Picus Exposure Validation, it was able to reduce its list of critical patches by 85%. Based solely on CVSS ratings, 63% of the vulnerabilities in the organisation's systems were initially classified as critical; after applying Picus Exposure Validation, only 9% of those proved truly high risk and in need of prioritisation. This reduction reportedly saved the organisation thousands of hours of patching activity and allowed the security team to focus resources more efficiently.

The company positions Picus Exposure Validation as a new methodology that combines vulnerability data with automated attack simulation to create an organisation-specific analysis of exploitability.
This approach, according to Picus, gives security teams a more focused view of where to deploy mitigation and remediation efforts, enabling security gaps to be closed more effectively. The Picus Exposure Validation feature is now available to organisations seeking enhanced vulnerability validation for their own environments.
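Picus has not published the Exposure Score formula. As a hedged illustration of the general idea (combining a theoretical severity signal with simulation evidence), the following sketch weights a CVE's score by how often simulated attacks actually bypassed the organisation's controls; every field and weight here is an assumption:

```python
# Illustrative evidence-weighted exposure score. Not Picus's formula;
# all fields and weights are assumptions for the sketch.
from dataclasses import dataclass

@dataclass
class ValidationEvidence:
    cvss: float    # 0-10 theoretical severity
    epss: float    # 0-1 exploit-prediction probability
    attempts: int  # simulated attack techniques run
    blocked: int   # attempts stopped by existing controls

def exposure_score(e: ValidationEvidence) -> float:
    if e.attempts == 0:
        # No evidence yet: fall back to theoretical signals.
        return e.cvss * e.epss
    bypass_rate = 1.0 - e.blocked / e.attempts
    # Evidence dominates: a fully blocked CVE scores near zero,
    # however severe its CVSS rating.
    return 10.0 * bypass_rate * (0.6 + 0.4 * e.epss * e.cvss / 10.0)

# CVSS-critical, but 19 of 20 simulated attacks were blocked:
e = ValidationEvidence(cvss=9.8, epss=0.7, attempts=20, blocked=19)
print(round(exposure_score(e), 2))  # -> 0.44, safely deprioritised
```

A scheme along these lines would reproduce the pattern in the customer anecdote above, where most CVSS-critical findings collapse once control effectiveness is taken into account.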


Techday NZ | 22-05-2025
Google enhances agent toolkit & unveils updates for secure AI
Google has announced a range of updates aimed at enhancing its intelligent agent development tools, management platforms, and agent-to-agent protocols.

The company has released the production-ready version 1.0.0 of its Python Agent Development Kit (ADK). This milestone, according to Google, provides stability and flexibility for building sophisticated software agents that can be deployed in live environments. The Python ADK has already been used by several organisations, including Renault Group, Box and Revionics, which offered feedback during its earlier phases.

Alongside the Python toolset, Google is extending the ADK to the Java ecosystem with the initial release of Java ADK version 0.1.0. Java developers can now integrate the ADK into their Maven projects by including a dedicated dependency, giving Java users the same flexibility and capabilities already available to Python developers for agent development.

Google states that "empowering developers with a platform that offers flexibility, trust, and comprehensive capabilities is key to realising this potential," and says the updates are "designed to help you build and manage your intelligent agents with unprecedented ease and power."

The updates also include the launch of the Agent Engine UI, an interface for managing the lifecycle of intelligent agents within the Vertex AI Agent Engine. The UI offers a dashboard integrated into the Google Cloud console, providing developers with tools to inspect deployed agents, review metrics such as requests and CPU usage, trace and debug sessions, and monitor agent behaviour. This is expected to simplify maintaining and optimising intelligent agents in production environments.

In the area of secure agent communication, Google has rolled out version 0.2 of its Agent2Agent (A2A) protocol specification. The update adds support for stateless interactions, enabling more lightweight communication when session management is unnecessary, and formalises authentication procedures based on an OpenAPI-like schema, aiming to make security requirements clearer and improve the overall reliability of agent-to-agent communications. The company is also introducing an official Python SDK for the A2A protocol, with a collection of tools designed to make it easier to build and implement communication functionality between Python-based agents.

Google highlights significant industry momentum for the A2A protocol, citing new partners and projects in its growing ecosystem. Auth0 is launching sample open source agents to demonstrate the use of A2A and Auth0's GenAI authentication for secure, multi-agent communications. Box AI Agents can now use A2A to collaborate with external agents on managing unstructured content like scans and images, extracting key details such as dates and contractual terms, and completing complex processes without leaving the content environment. Microsoft has announced support for the protocol in Azure AI Foundry and introduced the capability for Microsoft Copilot Studio to invoke any A2A agent. Demonstrations have also highlighted the use of A2A for workplace productivity, enabling multiple agents to be invoked with Microsoft Entra Agent ID and Microsoft Graph. SAP is integrating A2A protocol support into its AI assistant, Joule.
This addition will allow Joule to orchestrate agents within its ecosystem and invoke agents built using the Google ADK, giving users access to agents across different systems without context switching in the user interface. Zoom has committed to supporting the A2A protocol and Agentspace integration, with the objective of advancing multi-agent collaboration throughout its open platform.

Google commented, "We are incredibly excited about these updates and believe they will empower you to build even more sophisticated, reliable, and impactful intelligent agent solutions. These advancements in our ADK, Agent Engine, and A2A protocol are designed to provide you with a comprehensive and flexible platform to bring your most ambitious agent-driven projects to life."

The updates span production infrastructure, user interfaces, communication protocols, and industry partnerships, reflecting broad efforts by Google to support enterprise and developer adoption of intelligent agent systems.
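For readers wanting a concrete starting point, the shape of an ADK agent is small. The sketch below follows the quickstart pattern from the ADK documentation (installed via pip install google-adk); the tool function, names and model id are illustrative:

```python
# Minimal agent in the style of the ADK quickstart. Assumes the
# google-adk package; the tool and model id are illustrative.
from google.adk.agents import Agent

def get_order_status(order_id: str) -> dict:
    """Toy tool: return a hardcoded order status for the sketch."""
    return {"order_id": order_id, "status": "shipped"}

root_agent = Agent(
    name="support_agent",
    model="gemini-2.0-flash",  # any supported Gemini model id
    description="Answers questions about order status.",
    instruction="Use get_order_status to look up orders before answering.",
    tools=[get_order_status],
)
```

The same agent definition can then be run locally with ADK's developer tooling or deployed to Vertex AI Agent Engine, the lifecycle that the new Agent Engine UI is designed to manage.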


Techday NZ | 22-05-2025
Google unveils Gemini AI upgrade for universal assistant vision
Google has detailed developments in its Gemini artificial intelligence (AI) platform and outlined plans to expand the capabilities of multimodal foundation models, with a focus on integrating these systems into products for everyday use.

Over the past decade, Google has concentrated on foundational AI research, including the development of the Transformer architecture that underpins large language models. The company has also advanced AI agent systems through projects such as AlphaGo and AlphaZero, which demonstrated learning and planning in complex games. These methods have been applied to fields including quantum computing, mathematics, life sciences and algorithmic discovery. Google stated, "We've applied these techniques to make breakthroughs in quantum computing, mathematics, life sciences and algorithmic discovery. And we continue to double down on the breadth and depth of our fundamental research, working to invent the next big breakthroughs necessary for artificial general intelligence (AGI)."

The company is working to extend its Gemini 2.5 Pro model into what it calls a "world model" capable of contextual understanding, planning and simulation. According to Google, "This is why we're working to extend our best multimodal foundation model, Gemini 2.5 Pro, to become a 'world model' that can make plans and imagine new experiences by understanding and simulating aspects of the world, just as the brain does."

Progress in creating these models draws on previous work training AI agents for complex games such as Go and StarCraft, as well as tools like Genie 2, which can generate interactive 3D simulated environments from a single image prompt. Google stated, "We've been taking strides in this direction for a while, from our pioneering work training agents to master complex games like Go and StarCraft, to building Genie 2, which is capable of generating 3D simulated environments that you can interact with, from a single image prompt."

Gemini models are already demonstrating emerging "world model" capabilities, including applying knowledge and reasoning to simulate environments, as well as robotics training. Google noted, "Already, we can see evidence of these capabilities emerging in Gemini's ability to use world knowledge and reasoning to represent and simulate natural environments, Veo's deep understanding of intuitive physics, and the way Gemini Robotics teaches robots to grasp, follow instructions and adjust on the fly."

Developing Gemini into a "world model" is described by Google as a critical step towards a universal AI assistant. The company explained, "Making Gemini a world model is a critical step in developing a new, more general and more useful kind of AI — a universal AI assistant. This is an AI that's intelligent, understands the context you are in, and that can plan and take action on your behalf, across any device."

Google is aiming to transform the Gemini app into an assistant that can manage everyday administrative tasks and provide personalised recommendations. The company said, "Our ultimate vision is to transform the Gemini app into a universal AI assistant that will perform everyday tasks for us, take care of our mundane admin and surface delightful new recommendations — making us more productive and enriching our lives." This effort begins with capabilities such as video understanding, screen sharing and memory, first explored through the Project Astra research prototype.
In the past year, features such as improved voice output, enhanced memory, and computer control have been integrated into Gemini Live. Google commented, "Over the past year, we've been integrating capabilities like these into Gemini Live for more people to experience today. We continue to relentlessly improve and explore new innovations at the frontier. For example, we upgraded voice output to be more natural with native audio, we've improved memory and added computer control."

Google is collecting feedback from trusted testers on these capabilities, with plans to make them available through Gemini Live, new experiences in Search, the Live API for developers and additional devices such as smart glasses.

The company emphasised its commitment to safety and responsibility, stating, "Through every step of this process, safety and responsibility are central to our work. We recently conducted a large research project, exploring the ethical issues surrounding advanced AI assistants, and this work continues to inform our research, development and deployment."

In addition, Google is investigating how agentic AI capabilities can assist users in multitasking through Project Mariner, which is designed to support human-agent interaction, primarily in web browsers. Project Mariner agents can now complete up to ten different tasks concurrently, such as information searches, bookings, online shopping and research. "Project Mariner now includes a system of agents that can complete up to ten different tasks at a time. These agents can help you look up information, make bookings, buy things, do research and more — all at the same time," according to Google.

The updated Project Mariner is available to Google AI Ultra subscribers in the United States, and its computer use capabilities will be introduced into the Gemini API, with plans for broader integration across Google products later in the year. Google concluded, "With this, and all our groundbreaking work, we're building AI that's more personal, proactive and powerful, enriching our lives, advancing the pace of scientific progress and ushering in a new golden age of discovery and wonder."