
Miggo Introduces VulnDB to Enhance Context-Aware Vulnerability Prioritization
Security teams are overwhelmed by an unprecedented volume of vulnerability disclosures, a problem compounded by the rapid growth of AI within applications, and most lack the critical context required for precise prioritization and mitigation. In 2023, NIST recorded a staggering 33,137 new CVEs, the highest annual total recorded to that point, and 2024 brought a further 32% surge that strains the already backlogged National Vulnerability Database (NVD). This relentless influx makes context urgent: most organizations simply lack the insight or tools to quickly assess their true exposure and distinguish exploitable threats from background noise, leading to wasted resources, severe alert fatigue, and heightened risk.
Miggo's Predictive VulnDB addresses this challenge and fundamentally changes how teams assess vulnerabilities and prioritize their response. For each CVE, the database details the vulnerable function that must be executed for exploitation, the precise exploitation conditions and techniques, and a comprehensive root-cause analysis. Delivered within seconds of disclosure, this deep context gives security and development teams clarity not only on what to prioritize but also on how to respond with precision.
'Everyone's drowning in CVEs, but no one's telling you which ones can actually be exploited through your app,' said Itai Goldman, Co-Founder and CTO at Miggo. 'At Miggo, we don't just count CVEs — we dissect them. It's like finally getting the recipe instead of just seeing the sauce. VulnDB helps teams know not only what's vulnerable but if and why it matters, so they can take smarter action faster.'
What Makes VulnDB Different
Unlike traditional vulnerability databases that stop at metadata, VulnDB provides:
Function-Level Vulnerability Tracing: Miggo analyzes each CVE to pinpoint the exact vulnerable function within the dependency that introduces the risk, not just the package or module. Traditional databases might say, "There's an error in this library (somewhere)." Miggo's function-level analysis says, "The error is precisely here." This enables organizations to connect that vulnerability to their code and the actual runtime execution context within their own environment, allowing for true risk-based prioritization (a rough illustrative sketch follows this list).
Root Cause & Exploitability Insight: VulnDB provides a clear, technical explanation of how each CVE works, including what causes it and under what conditions it becomes exploitable. This makes it easier for both security and non-security professionals to understand the severity and potential impact.
Autonomous Exploit Simulation & Dynamic WAF Protections: Before producing root cause analysis, Miggo simulates real-world exploit attempts through autonomous exploit generation. These insights drive the creation of dynamic WAF rules, which evolve alongside emerging exploitation patterns. Available exclusively to Miggo customers, these rules provide adaptive protection based on live threat intelligence.
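To make the distinction above concrete, here is a minimal sketch, in Python, of how function-level advisory data of this kind could be cross-referenced with the functions an application is actually observed executing. It is an illustration only, not Miggo's schema or API: the Advisory fields, package and function names, and matching logic are hypothetical assumptions.

```python
# Minimal sketch (hypothetical; not Miggo's API or schema): deciding whether a
# function-level advisory is actually reachable in a given application.
from dataclasses import dataclass

@dataclass
class Advisory:
    cve_id: str
    package: str
    vulnerable_function: str  # fully qualified symbol, e.g. "examplelib.parser.parse_untrusted"
    root_cause: str           # short technical explanation of the flaw
    conditions: str           # when the flaw becomes exploitable

def triage(advisories, installed_packages, observed_calls):
    """Classify advisories by whether the vulnerable function was seen executing.

    installed_packages: package names from the dependency manifest.
    observed_calls: fully qualified function names seen in runtime traces or a call graph.
    """
    findings = []
    for adv in advisories:
        if adv.package not in installed_packages:
            continue  # dependency not present at all; nothing to prioritize
        if adv.vulnerable_function in observed_calls:
            findings.append((adv, "reachable: vulnerable function executes in this app"))
        else:
            findings.append((adv, "package present, but vulnerable function not observed"))
    return findings

if __name__ == "__main__":
    advisories = [
        Advisory(
            cve_id="CVE-2024-00000",  # placeholder identifier for illustration
            package="examplelib",
            vulnerable_function="examplelib.parser.parse_untrusted",
            root_cause="unsafe deserialization of attacker-controlled input",
            conditions="untrusted data reaches parse_untrusted()",
        )
    ]
    installed = {"examplelib", "requests"}
    calls = {"examplelib.parser.parse_untrusted", "app.views.index"}
    for adv, verdict in triage(advisories, installed, calls):
        print(adv.cve_id, "->", verdict)
```

A package-level database can only support the first check (is the library installed?); it is the function-level detail, paired with runtime context, that enables the second, reachability-based verdict that drives real prioritization.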
Miggo's Predictive VulnDB is available as a free resource to the security community.
'Security isn't about knowing everything. It's about knowing what matters,' said Liad Eliyahu, Head of Research at Miggo. 'With our Predictive VulnDB, we're delivering actionable intelligence, not just data. This is the kind of signal-over-noise approach defenders need to stay ahead.'
About Miggo Security
Miggo Security is redefining application security for the cloud-native and AI era with a runtime-aware platform that provides deep security observability, context-driven prioritization, and automated protection without slowing down development. Miggo's patent-pending DeepTracing technology connects the dots between vulnerabilities, APIs, and real-time behavior to help organizations secure what matters most.
Related Articles


Politico
3 days ago
How Trump's AI orders could throw states for a loop
EXAM ROOM

A day after President Donald Trump unveiled plans to accelerate the adoption of artificial intelligence in health care, one expert warns that some aspects of the initiative could complicate state laws aimed at preventing discrimination in health care. Dan Silverboard, a health care attorney at Holland & Knight, says the White House AI Action Plan's requirement that the National Institute of Standards and Technology remove references to diversity, equity and inclusion could create significant challenges for state regulations. As the nation's primary technology standards body, NIST suggests standards and guidance on AI development and implementation. Within the health care sector, there's considerable concern that AI could make decisions that discriminate against certain patients. In 2022, NIST addressed the concerns by releasing recommended practices for managing bias and discrimination in AI — which Silverboard says may soon disappear. To understand the implications of the upcoming changes to NIST, Ruth sat down with Silverboard to discuss the potential impact on patient care and state regulations. This interview has been edited for length and clarity.

How do states use the NIST AI framework?

The NIST framework is basically a compliance plan for addressing risk posed by AI, including discrimination. The National Association of Insurance Commissioners came out with a model bulletin that will require insurance companies to have programs to mitigate risk caused by AI, things like unfair claims practices, unfair business practices and also algorithmic discrimination. And that bulletin has been adopted, I think, in 24 states, red and blue. If you have the NIST framework in place that would satisfy this requirement.

And if the NIST framework no longer specifies how to mitigate discrimination risk?

It bungles that. You also have specific state laws which prohibit insurance companies from using AI in such a way that results in algorithmic discrimination; Colorado and California, for example. The NIST changes complicate the enforcement of these laws.

How else might removing DEI from NIST's AI framework impact how companies and developers are designing their technology?

Multinational companies have to comply with more than just U.S. law. There is EU law out there. So, how the EU requirements might conflict with the requirements that come at a federal level is anybody's guess, but a multinational company would have no choice but to comply with other standards in place on the international level.

MORNING MONEY: CAPITAL RISK — POLITICO's flagship financial newsletter has a new Friday edition built for the economic era we're living in: one shaped by political volatility, disruption and a wave of policy decisions with sector-wide consequences. Each week, Morning Money: Capital Risk brings sharp reporting and analysis on how political risk is moving markets and how investors are adapting. Want to know how health care regulation, tariffs or court rulings could ripple through the economy? Start here.

WELCOME TO FUTURE PULSE

This is where we explore the ideas and innovators shaping health care. Silverboard isn't the only one raising concerns about the AI Action Plan. Brenda Leong, director of the AI division at legal firm ZwillGen, says the call to remove references to 'misinformation' from NIST's risk frameworks is misguided. 'AI systems' tendency to generate factually inaccurate, misleading, or confidently wrong outputs — hallucinations — is a well-documented challenge,' she said. 'The plan shifts away from acknowledging this fundamental technical and safety hazard.'

Share any thoughts, news, tips and feedback with Carmen Paun at cpaun@ Ruth Reader at rreader@ or Erin Schumaker at eschumaker@ Want to share a tip securely? Message us on Signal: CarmenP.82, RuthReader.02 or ErinSchumaker.01.

WORLD VIEW

Hackers love Europe's hospitals, our colleague in Europe, Giedrė Peseckytė, reports. The European Union's health care sector was targeted in 309 cybersecurity incidents in 2023 — more than in any other critical sector. The cost of a major incident typically reached €300,000, or approximately $350,000. For cybercriminals, targeting health data 'is a perfect business plan,' according to Christos Xenakis, professor at the department of digital systems at the University of Piraeus, Greece. 'It's easy to steal data, and what you steal, you can sell it at a high price.' Ransomware attacks — where hackers lock data and demand a ransom — dominate the sector, an EU Agency for Cybersecurity report showed. 'They achieve two targets: One is to get the data and sell [it], and the other is to encrypt the whole system, disrupt the whole system and ask for money,' Xenakis said. Stolen data can be sold on the dark web to criminals who use it to commit identity theft, insurance fraud or blackmail. To restore disrupted systems, criminals can demand millions of euros — hackers, for instance, wanted $4.5 million for the return of the stolen data after a cyberattack on Hospital Clínic in Barcelona. The hospital refused to pay.

Why it matters: Beyond the financial impact, cyberattacks pose a threat to patients' lives. The stakes became clear in a recent case in the U.K., where a patient's death was linked — among other contributing factors — to a delayed blood test result caused by a cyberattack that disrupted pathology services last summer.

A critical investment: Despite the risks, only 27 percent of health care organizations in the EU have a dedicated ransomware defense program, and 40 percent don't offer any security awareness training for non-IT staff, a separate EU Agency for Cybersecurity report found. Xenakis believes the health care sector sees cybersecurity as a 'luxury,' not an essential. Health care staff are unaware of the risks, he believes, resulting in poor 'cyber hygiene.' Findings from the Finnish Innovation Fund Sitra, an independent public foundation supervised by the Finnish Parliament, back this up. While many health care organizations have cybersecurity policies in place, they're often not 'clearly communicated or consistently understood by their staff.' High personnel turnover — not just among medics but also among cybersecurity officers — further 'exacerbates training gaps and the ability to enforce cybersecurity policies.'

Europe mounts a response: In response to increasing cyberattacks on health care systems, the European Commission unveiled an 'action plan' for cybersecurity in hospitals and the health care sector in January. The plan proposes establishing a European Cybersecurity Support Center for the health care sector within the EU cybersecurity agency and a specific rapid-response service. The plan also introduces 'cybersecurity vouchers,' which will enable EU countries to provide financial support to smaller health care providers for enhancing their cyber resilience. 'It's good,' said Markus Kalliola, Sitra's program director. But it 'could be stronger.' Kalliola is one of the authors of Sitra's evaluation of the Commission's plan, which points to murky EU governance, a lack of clear targets or budgets and a missed opportunity to establish a single functioning market for cybersecurity solutions.

What's next: Whether Europe's security will feature in the Commission's final hospital cybersecurity plan remains to be seen; the EU executive has just concluded a consultation and promised to put forward a refined plan by the end of the year.

SMALL BYTES

The Food and Drug Administration is adding more AI talent to its roster. The agency has brought on radiologist Rick Abramson to help support the agency's AI efforts, according to two current FDA staffers granted anonymity to discuss sensitive personnel matters. Abramson previously served as Vanderbilt University's first vice chair for innovation in its Department of Radiology. He also briefly served as an adviser to the Office of Management and Budget under former President Bill Clinton. The Department of Health and Human Services did not return a request for comment as of press time.


Forbes
3 days ago
The Case For Generally Accepted Principles For Cybersecurity
Ed Gaudet is the CEO and Founder of Censinet, a healthcare risk management platform, and member of the Health Sector Coordinating Council.

Cybersecurity has become as essential to a company's resilience as its financial integrity. Yet, while financial reporting in the U.S. is underpinned by the rigorous standards of Generally Accepted Accounting Principles (GAAP), cybersecurity remains a patchwork of frameworks such as the NIST Cybersecurity Framework (CSF) and sector-specific regulations. This fragmentation leaves organizations and their stakeholders struggling to assess and communicate their cybersecurity posture with confidence. A new, unified approach—perhaps called the "Generally Accepted Principles for Cybersecurity (GAP-C)"—is needed.

The Structure And Core Principles Of GAP-C

In financial reporting, trust begins with structure. Thanks to GAAP, any stakeholder can open a company's 10-K or 10-Q and understand its financial posture. The income statement, balance sheet and cash flow statement are all built on standardized definitions, governed by established rules and audited within a consistent, transparent framework. No such structure exists for cybersecurity. Today, two companies facing identical cyber risks can report vastly different levels of exposure, with no reliable way to compare the impact of cybersecurity risk. As such, GAP-C can provide a comprehensive, cross-industry cybersecurity standard modeled after GAAP, designed to bring consistency, transparency and trust to digital risk management. The goal would be to make cybersecurity a measurable, reportable, accountable and auditable discipline, just like financial reporting. GAP-C should be organized to mirror the structure of GAAP, with a clear, numbered set of principles and detailed guidance for each. It begins with these twelve core principles:

1. Effective cybersecurity governance starts with a clear policy framework, defined risk oversight, cross-functional coordination and accountable leadership. These elements create the foundation for a security-driven culture—one that signals, from the top down, that protecting systems and information is as essential to the business as accurate financial reporting.

2. Cyber risk, once the domain of regulators alone, is now a measurable factor in how insurers, credit rating agencies and lenders evaluate an organization. These stakeholders are actively assessing every organization's cybersecurity risk profile. Regular, disciplined cyber risk assessments ensure that controls and investments reflect the potential business impact of identified threats. GAP-C integrates cybersecurity into enterprise risk management, recognizing cyber threats as a key driver of financial, operational and reputational risk.

3. Under GAP-C, organizations would be required to know and safeguard information assets through their entire lifecycle, implementing appropriate controls to protect the confidentiality, integrity and availability of those assets. By systematically managing and protecting information assets, organizations can reduce the likelihood and impact of security incidents, safeguard sensitive information and maintain the trust of customers, partners and regulators.

4. Identity and access management (IAM) is a cornerstone of cybersecurity. GAP-C should require organizations to implement robust IAM controls, including strong authentication, authorization and accountability mechanisms. This includes managing user identities, assigning appropriate access rights and regularly reviewing and revoking access when no longer needed.

5. Organizations must implement continuous monitoring and threat detection capabilities to identify and respond to security incidents in a timely manner. Proactive threat detection is essential for the early identification of cyber threats. GAP-C should mandate the deployment of monitoring tools and processes to detect suspicious activity, anomalies and potential security incidents.

6. Incident response and recovery plans are required to minimize the impact of security incidents and restore normal operations as quickly as possible. Incident response is a critical component of overall business continuity and resilience. GAP-C should require organizations to develop, test and maintain incident response plans, including defined roles, communication protocols and recovery procedures.

7. With the rapid adoption of AI inside almost all organizations, third-party risk is a rapidly growing concern across the digital supply chain. GAP-C must require organizations to evaluate the security posture of third parties, establish contractual security requirements and monitor compliance. This includes assessing the risks posed by AI, SaaS applications, cloud providers, smart devices, services, affiliates and other external partners.

8. Compliance is a key driver of cybersecurity investment. GAP-C must, therefore, emphasize the need for organizations to maintain awareness of and comply with relevant legal and regulatory requirements, including data protection laws, breach notification rules and industry-specific standards.

9. Metrics are essential for demonstrating the value of cybersecurity investments and identifying areas for improvement. By requiring organizations to define, collect and analyze security metrics, reporting them to management and the board, GAP-C can make these metrics an integral part of regular governance processes.

10. AI has two vectors of introduction: third parties and internal projects. As organizations adopt or build AI and other innovations (e.g., quantum computing), GAP-C should require organizations to evaluate the broader business implications and risks of these emerging technologies—such as potential bias, ethical use, data protections, offshore data sharing, model training practices, etc.—and implement appropriate controls to mitigate associated risks.

11. Cybersecurity is not a one-time effort but an ongoing process. GAP-C must mandate regular reviews of security controls, risk assessments and incident response plans, incorporating lessons learned and emerging best practices into the cybersecurity program.

12. Transparency builds trust. GAP-C should establish standardized reporting norms and disclosure requirements, which are critical for boards, auditors, investors and regulators who need to assess and compare organizational resilience across industries. GAP-C should also outline auditability and assurance requirements, enabling independent verification of cybersecurity controls and practices.

The Transformative Potential Of GAP-C

Industry leaders must standardize how organizations govern and report on cybersecurity risk. A framework like GAP-C would enable boards and executives to make informed, risk-aligned decisions while establishing a common language that brings clarity and consistency to cybersecurity across companies, investors, regulators and the public. Cybersecurity is inseparable from enterprise value.
GAP-C can offer a needed step forward, standardizing how cybersecurity trust is established and helping to build a stronger, more resilient future.

Politico
4 days ago
Trump's new AI action plan
OPERATING ROOM

President Donald Trump's announcement Wednesday about his plans for artificial intelligence includes a push to grow AI adoption in health care and across the federal government by testing it in regulatory free zones. The White House AI Action Plan says the government should set up regulatory sandboxes, or regulation-free environments, where AI can be tested in real-world scenarios with heavy oversight. Kev Coleman, a fellow at the Trump-aligned Paragon Health Institute, suggested such an approach last year. The strategy could allow developers to demonstrate their products' utility, he told Ruth at the time, while also giving policymakers insight that could help shape future policy.

Outside the government: AI Centers of Excellence around the country will enable 'researchers, startups, and established enterprises' to test AI tools with the understanding that they will have to publicly share the data and results of their experiments. The Food and Drug Administration will oversee testing of tools related to health care with support from the National Institute of Standards and Technology. The action plan also charges NIST with convening a broad range of health care industry stakeholders — academics, company executives, nonprofits, and industry groups — to develop national standards for AI systems, including measurements for understanding how much AI increases productivity. It calls on NIST, the National Science Foundation and federal agencies to develop methods for evaluating the performance and reliability of AI systems using regulatory sandboxes.

Inside the agencies: The action plan establishes a Chief Artificial Intelligence Officer Council to coordinate interagency collaboration on AI. This group would work with the White House's Office of Personnel Management to create a talent exchange program that would allow federal employees to be quickly detailed to other agencies in need of expertise. It would also develop an AI procurement toolbox, managed by the General Services Administration in coordination with the White House Office of Management and Budget, that would allow any federal agency to adopt a model already in use within the federal government and customize it for its own purposes. The new council is also supposed to set up a technology and capability transfer program, such that agencies can more easily share knowledge and tools. Finally, the plan requires agencies to ensure employees who could benefit from AI tools have access to them. And it asks that agencies facilitate uses of AI that could improve delivery of services to the public.

The big picture: Health systems want to be sure AI tools are safe before deploying them, but there is no established framework for doing that. Several industry groups are trying to get consensus on the issue. Trump has largely pursued a deregulatory approach to advancing AI, but his new plan acknowledges the industry's desire for guardrails.

WELCOME TO FUTURE PULSE

This is where we explore the ideas and innovators shaping health care. According to Science, researchers developing a new type of dental floss to protect against the flu ran into a challenge while testing their needleless vaccine: trying to floss a mouse.

Share any thoughts, news, tips and feedback with Carmen Paun at cpaun@ Ruth Reader at rreader@ or Erin Schumaker at eschumaker@ Want to share a tip securely? Message us on Signal: CarmenP.82, RuthReader.02 or ErinSchumaker.01.
AROUND THE AGENCIES

The National Institutes of Health is capping the number of grant applications researchers can submit each year. The agency posted a notice last week about the new restrictions, which limit principal investigators to six new, renewal, resubmission or revision applications each calendar year. The stated reason behind the change: the risk of researchers overwhelming reviewers with artificial intelligence-generated applications. The NIH said it had identified instances of principal investigators who submitted large numbers of applications that might have leaned heavily on AI. In one instance, an investigator submitted more than 40 different applications in one submission round. 'While AI may be a helpful tool in reducing the burden of preparing applications, the rapid submission of large numbers of research applications from a single Principal Investigator may unfairly strain NIH's application review processes,' the notice says. Since NIH policy requires that grant applications be the original work of the applicants, the agency won't consider applications 'substantially developed' by AI or with sections that are AI-generated.

Reality check: The percentage of investigators submitting an average of more than six applications has been low, according to NIH. Carrie Wolinetz, a lobbyist at Lewis-Burke Associates and former senior adviser to NIH director Francis Collins, told Erin that she thinks the impact of the cap will vary by institution. 'I don't think it's a bad idea as a matter of policy. If funding is robust, it could increase the quality of applications,' Wolinetz said. 'I am a little skeptical that limiting applications somehow disincentivizes the use of AI,' she said, adding, 'Although I also don't think limiting the use of AI for application writing is a bad idea.'

Do as I say: The White House acknowledged in May that a Make America Healthy Again report spearheaded by HHS Secretary Robert F. Kennedy Jr. contained 'formatting issues' and pledged to correct them. The acknowledgment came after the news outlet NOTUS reported that the MAHA report cited sources that didn't exist, a hallmark of AI use.

What's next: The policy goes into effect on Sept. 25.