logo
#

Latest news with #InvariantLabs

Snyk acquires Invariant Labs to boost AI-native app security
Snyk acquires Invariant Labs to boost AI-native app security

Techday NZ

time26-06-2025

  • Business
  • Techday NZ

Snyk acquires Invariant Labs to boost AI-native app security

Snyk has announced the acquisition of Invariant Labs, a move set to expand its AI security capabilities and address the increasing security demands of AI-native and agentic applications. Invariant Labs, known for its work in shaping security standards for agentic AI, will now become part of Snyk, integrating its research and technologies with Snyk's recently launched AI Trust Platform. The acquisition marks Snyk's twelfth to date and brings with it a new research and development function, Snyk Labs, to advance security for emerging AI risks. AI security integration Peter McKay, Chief Executive Officer at Snyk, commented on the impact of the acquisition: "This acquisition is an important integration into Snyk's recently launched AI Trust Platform that adds the ability to secure applications from emergent threats. Snyk can now offer customers a single platform to address both current application and agentic AI vulnerabilities." According to Snyk, the technologies and approaches developed by Invariant Labs will be absorbed into Snyk Labs, concentrating efforts on research regarding AI security, especially in relation to large language models (LLMs), autonomous agents, and multi-component protocol (MCP) systems. Snyk Labs will serve as the company's new research arm, delivering capabilities through its AI Trust Platform by focusing on threats such as tool poisoning and MCP rug pulls. With the rapid growth of AI-native software in enterprise settings, security teams are increasingly confronted with new and unfamiliar threats. Snyk's acquisition of Invariant Labs aims to provide consolidated tools and intelligence, equipping customers to manage risks associated with agent-based systems in real-time production environments. Responding to evolving risks Snyk emphasised that the integration will allow security professionals to secure not only established applications, but also the emerging generation of AI-native and agentic software that is seeing widespread adoption. This dual focus is intended to support companies dealing with risks such as unauthorised data exfiltration, agent actions beyond the intended scope, and MCP vulnerabilities. At the forefront of research on new AI risks, Invariant Labs has played a key role in identifying and naming novel attack types, including terms like "tool poisoning" and "MCP rug pulls," which are already being observed in live deployments. "With Invariant Labs, we're accelerating our ability to identify, prioritize, and neutralize the next generation of Agentic AI threats before they reach production," said Manoj Nair, Chief Innovation Officer at Snyk. "This acquisition also underscores Snyk's proactive commitment to supporting security teams navigating the urgent and unfamiliar risks of AI-native software, which is rapidly becoming the new software development default." Technology and research Invariant Labs is known for developing Guardrails, a transparent security layer for LLMs and AI agents. Guardrails enables developers to implement security controls, observe system behaviours in context, and enforce policies based on a combination of static and runtime data, human review, and incident logs. These features are designed to help developers scan for vulnerabilities and monitor agent compliance with security standards. Marc Fischer, PhD, Chief Executive Officer and co-founder of Invariant Labs, commented on the direction of the merged teams: "We've spent years researching and building the frameworks necessary to secure the AI-native future. We must understand that agent-based AI systems are a powerful new class of software, especially autonomous ones, and demand greater oversight and stronger security guarantees than traditional approaches. We're excited to join the Snyk team, as this mindset is deeply aligned with their mission." The collaboration is expected to further embed Invariant Labs' research-driven approach into Snyk's product offerings, supporting organisations with real-time defences against current and emerging AI threats. As AI adoption continues to rise, this acquisition highlights steps being taken within the cybersecurity sector to address vulnerabilities inherent to autonomous, agent-based, and AI-native systems already in use across industry.

Using AI Both Helps And Hinders Cybersecurity
Using AI Both Helps And Hinders Cybersecurity

Forbes

time28-05-2025

  • Business
  • Forbes

Using AI Both Helps And Hinders Cybersecurity

The use of generative AI in a cybersecurity context is providing examples of how it can both help and hinder security. In some cases, it seems to do both at once. Earlier this year, Microsoft unveiled a suite of Security Copilot-branded products that aim to help security teams respond to incidents. Microsoft used generative AI to augment incident management processes to add more context, helping security operators to better understand what happened, where, when, and to whom. It's a genuine improvement, though more incremental than revolutionary, not that there's anything wrong with that. As I noted at the time, augmentation of an existing process is fine for what it is, but it lacks ambition. There are plenty of existing tools for automating well-understood processes. The variability inherent to generative AI and large language models wasn't being used to best advantage. Given Microsoft's close alignment with generative AI companies, and its substantial resources, it seems only fair to expect more. Yet when that variability is embraced with too much enthusiasm, we get the opposite of improved security. Invariant Labs recently demonstrated how GitHub's MCP server can be used to expose private data using fairly straightforward prompt poisoning attack. It is barely a surprise that poorly sanitized input from uncontrolled sources might prove risky. The GitHub MCP example demonstrates that much use of generative AI is either entrenching existing poor practice or, in some cases, taking a backward step and re-introducing whole classes of sub-optimal security practice. By way of contrast, Crogl's knowledge engine takes full advantage of what machine learning, retrieval augmented generation (RAG) and large language models (LLMs) are good at. It goes beyond merely annotating existing processes and discovers what the existing process is by analyzing past incident response tickets. By connecting other security systems into the engine, Crogl is able to uncover what a highly-automated incident response should look like. Unlike Microsoft Security Copilot, Crogl is able to use the variability of generative AI to come up with a probably-good response plan for new incident types. The machine learning pattern recognition is able to detect the rough 'shape' of a potential attack and do what a human operator would do: check various systems to look for suspicious activity that indicates a likely compromise. This is just one example of differences in approach, but the key is that the technology itself is merely an enabler. While Microsoft, and GitHub, both push the Copilot brand and AI technology generally as a major selling point, Crogl uses the technology to deliver benefits to the customer. LLMs and machine learning are merely the conduit through which benefits to the customer are delivered. Microsoft's approach, and GitHub's to an extent, has focused on automating existing practices, some of which should probably not exist in the first place. Automating them entrenches poor practice and makes it difficult to remove. Crogl shows that automation can be used to uncover better ways of doing things and help put those in place instead. This is what cybersecurity desperately needs more of. It is frustrating that so much focus is placed on the technology of LLMs. The novelty of the tech can only do so much to overcome the limitations of what a product can deliver. As the market matures, we expect that companies that understand when using generative AI makes sense and, crucially, when it does not, will enjoy much greater success than those who remain fascinated by their new toy. Customers need outcomes, not just products. Hopefully that is where the focus will shift after the current AI excitement fades.

DOWNLOAD THE APP

Get Started Now: Download the App

Ready to dive into a world of global content with local flavor? Download Daily8 app today from your preferred app store and start exploring.
app-storeplay-store