
Latest news with #SGNL

Operationalizing AI: A CISO's Guide To Adopting MCP With Confidence

Forbes

10 hours ago


Operationalizing AI: A CISO's Guide To Adopting MCP With Confidence

CTO at SGNL. Inventor of CAEP. Okta Identity 25 Listee. Standards guy at OpenID. Believes access control is critical to cybersecurity.

The technology world is abuzz with the development of the model context protocol (MCP) because it unlocks powerful interactions between large language models (LLMs) and existing enterprise services. But the perils of unauthorized data access can dampen enterprises' enthusiasm for adopting MCP. The result is that the promised productivity benefits of AI become harder to achieve while, at the same time, the unauthorized use of internal data by employees through personal AI accounts grows. Here's what your organization can do to adopt AI and MCP with confidence and provide a secure alternative to unauthorized AI usage.

A Quick Recap

Large language models are a popular AI technology. A specialized class of LLMs is 'generative pre-trained transformers' (GPTs). Fundamentally, a GPT behaves like autocomplete in a word processor: By looking at the preceding text, it predicts the text that follows. The preceding text is called the context, of which the immediate prompt you type is a part. The latest GPTs are better at step-by-step reasoning, especially when prompted to 'think step by step,' allowing them to break complex questions into logical steps before answering.

Some GPT-enabled services (like ChatGPT with browsing or plugins) integrate external tools that fetch web results and supply them as context for the model to reason over. For other data that users want to bring into the context, they retrieve it themselves and provide it to the GPT. This is called 'retrieval augmented generation' (RAG), which enterprises often automate through external systems integrated with the LLM. Instead of this custom style of retrieval, the model context protocol gives the LLM (i.e., the model) a standardized way to discover the tools and resources that MCP servers make available to it and to communicate with them to form the context. Hence the name: model context protocol.

Pitfalls In Enterprise Use Of MCP

Everyone is predictably excited about MCP because it unleashes a powerful way to enrich the capabilities of GPTs. If a GPT determines the several steps required to answer a question, it can reach out to the relevant MCP servers that can provide the context for each of those steps and generate the answer. The trouble is, how can an enterprise ensure that the data the LLM requests and fetches is, in fact, data the user is permitted to retrieve? If the MCP server can modify data, how can the enterprise ensure the user has the permissions to make those modifications? While this seems like a simple authorization question, it gets a bit more involved:

• MCP servers cannot run with more access than the requesting user because each user's permissions may be different. So each MCP query must run with the requesting user's privileges.

• Since user privileges are dynamic (someone working on a specific customer's case today may not need access to that customer's data tomorrow), MCP servers need to know what a user has access to at the time of the query.

• Enterprises often run in a permissive environment, granting users broad access based on their job function (or, often, their previous job functions too). This access frequently includes sensitive customer or internal data. Human users are judicious about how they use such data in their output. Because MCP puts this same access in the hands of LLMs, that level of judgment probably will not be exercised by the LLM when deciding whether a piece of information should be used.

• Thus, MCP defeats the de facto 'security through obscurity' operating model. Users won't try multiple ways of obtaining information they are not supposed to have, whereas LLMs will try solving a problem in many different ways before giving up. So if the data is accessible to the MCP server, it will find its way into answers, revealing information it should not.

Securing MCP Usage

Implementing the following strategies can help secure MCP for organizational use:

To use MCP effectively, enterprises must adopt a 'zero standing privilege' (ZSP) access control strategy. Unlike in the conventional model, with zero standing privilege a user only has access, at any given time, to the data they need to complete the specific task they are currently working on. This lays the foundation for ensuring that MCP servers do not accidentally supply data that should not be available when producing an answer. ZSP automatically implies a dynamic access control strategy because it is impossible to manually update users' permissions to match what they need at any given moment. And one more thing: ZSP is great for defending against cyber breaches, too, because attackers who assume employee identities are unable to access much data or cause much damage.

LLMs acting on behalf of a user should not be able to discover tools within MCP servers that the requesting user is not entitled to use at the time of execution. This can be done by ensuring that the 'list tools' call made by the MCP client is authorized using the user's identity, so that the MCP server can hide tools that are not available to that user.

MCP servers must execute with the requesting user's privileges, because if they carry their own elevated privileges, it becomes hard for downstream services to determine what data should or should not be provided. Having the entire chain execute as the user also makes it easier to audit data usage across all systems. (A minimal sketch of these two controls follows this article.)

Conclusion

MCP is a promising technology, and harnessing it with the right security guardrails can unleash employee productivity while clamping down on unauthorized AI usage. Adopting a zero standing privilege strategy with appropriate controls over MCP servers can help organizations deploy MCP with confidence.

Forbes Technology Council is an invitation-only community for world-class CIOs, CTOs and technology executives. Do I qualify?
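The two controls recommended above, authorizing the 'list tools' call with the user's identity and running every tool call under that user's privileges, can be illustrated with a short sketch. The Python below is a hypothetical wrapper, not SGNL's product or any official MCP SDK API; the names (GuardedMCPServer, PolicyClient, ToolSpec) are invented for this example, and the policy engine is a deny-by-default stub.

```python
# Minimal sketch, assuming a hypothetical policy engine and server wrapper:
# both tool discovery and tool invocation are authorized with the requesting
# user's identity, so nothing is exposed beyond what policy allows right now.

from dataclasses import dataclass
from typing import Callable

@dataclass
class ToolSpec:
    name: str
    description: str
    handler: Callable[[dict, str], dict]  # (arguments, user_token) -> result

class PolicyClient:
    """Stand-in for a dynamic, zero-standing-privilege policy engine."""
    def is_allowed(self, user_token: str, action: str, resource: str) -> bool:
        # A real engine would evaluate identity, current task, device posture,
        # etc. at the time of the query; this stub denies by default.
        return False

class GuardedMCPServer:
    def __init__(self, tools: list[ToolSpec], policy: PolicyClient):
        self._tools = {t.name: t for t in tools}
        self._policy = policy

    def list_tools(self, user_token: str) -> list[dict]:
        # The 'list tools' call is authorized per user: tools the user cannot
        # use right now are simply not advertised to the LLM.
        return [
            {"name": t.name, "description": t.description}
            for t in self._tools.values()
            if self._policy.is_allowed(user_token, "discover", t.name)
        ]

    def call_tool(self, user_token: str, name: str, arguments: dict) -> dict:
        # Every call is re-checked and executed with the user's own token,
        # never with a standing service credential.
        if name not in self._tools or not self._policy.is_allowed(
            user_token, "invoke", name
        ):
            raise PermissionError(f"Tool '{name}' is not available to this user")
        return self._tools[name].handler(arguments, user_token)
```

In practice, the policy check would consult a zero-standing-privilege engine at the moment of each call, so the set of advertised tools can shrink or grow as the user's current task changes.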

SGNL Launches MCP Gateway to Enable Secure AI Adoption for Enterprise Workforces

Business Wire

03-06-2025


SGNL Launches MCP Gateway to Enable Secure AI Adoption for Enterprise Workforces

PALO ALTO, Calif.--(BUSINESS WIRE)--AI agents are proliferating across enterprises faster than security teams can govern them—creating massive blind spots and risk. SGNL today announced that its Model Context Protocol (MCP) Gateway is live with private availability for customers. The release puts identity-first security policies in the path of every AI interaction, automatically blocking unauthorized actions while maintaining business velocity.

MCP is revolutionizing how AI agents interact with internal and external systems—enabling them to perform tasks, interact with data, and trigger workflows across the enterprise. But without robust access controls, these agents can operate unchecked, risking over-permissioned access and unintended data exposure. Because of this, enterprises have been hesitant to approve AI tools for their workforce. SGNL's MCP Gateway changes that. It brings centralized, dynamic authorization to every MCP server in the enterprise—governing access not just based on what the agent wants to do, but on who it represents, where the request is coming from, and why it's being made.

'SGNL's MCP Gateway delivers more than just a technical breakthrough,' said Stephen Ward, co-founder of Brightmind Partners, former Home Depot CISO, and ex-Secret Service cybersecurity leader. 'It's a strategic game-changer that gives enterprises the levers to align AI automation with business policy in real time, bridging the critical gap between innovation and control.'

Eliminating blind access in the age of autonomous IT

AI agents are entering enterprise workflows faster than security teams can respond. From summarizing sensitive data to triggering downstream actions, they don't inherently understand risk, yet they operate at machine speed across dynamic contexts where traditional boundaries no longer apply. This creates a fundamental mismatch. Legacy role-based access control was designed for predictable human behavior, not autonomous systems making thousands of decisions per minute. Enterprises can't simply "IAM harder" with existing tooling, because static RBAC becomes exponentially more dangerous when applied to agents that never sleep, never second-guess themselves, and correlate data in ways humans cannot. The result is blind access at scale, where broadly privileged roles and brittle permission matrices compound risk with every agent interaction.

The SGNL MCP Gateway addresses this head-on with:

• Real-time policy enforcement between MCP clients and servers

• Continuous evaluation of identity, device compliance, and request context

• Default-deny architecture with an enterprise-wide MCP server registry that grants access only to approved services when explicitly justified

• Centralized MCP server registry and visibility into every AI agent interaction

'The Gateway isn't just a feature—it's foundational,' said Scott Kriz, CEO and co-founder of SGNL. 'With it, we're giving customers the ability to harness AI's full potential without compromising on security and control. Our customers can now confidently adopt agent-based workflows knowing that access decisions are dynamic, contextual, and enforceable at every step.'

A real-world example: stopping data loss before it happens

In a common use case, an account executive attempts to use an AI agent to summarize Salesforce data from a non-compliant laptop. Without SGNL, the agent would retrieve and expose potentially sensitive customer data. With SGNL's MCP Gateway in place, contextual policy enforcement blocks the request—ensuring that only secure, compliant actions are permitted. This is just one of countless scenarios where real-time governance makes the difference between acceleration and exposure. (A sketch of this kind of default-deny check follows this announcement.)

See SGNL's MCP Gateway in action

Request a demo to see how SGNL's MCP Gateway governs AI agent access for enterprise workforces.

About SGNL

SGNL's modern Privileged Identity Management is redefining identity-first security for the enterprise. By decoupling credentials from identity and enabling real-time, context-aware access decisions, SGNL empowers organizations to reduce risk, streamline operations, and scale securely. Whether it's humans or AI agents, SGNL keeps your critical systems and sensitive data secure. That's why Fortune 500 companies are turning to SGNL to simplify their identity access programs and secure critical systems.
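To make the default-deny idea concrete, here is a hedged sketch of the kind of per-request check a gateway sitting between MCP clients and servers could apply. It is illustrative only, not SGNL's MCP Gateway or its API; the request fields, the APPROVED_SERVERS registry, and the authorize function are assumptions made up for this example.

```python
# Illustrative sketch only (not SGNL's MCP Gateway): a default-deny check that
# a gateway between MCP clients and servers might apply to each request.
# The field names and registry below are invented for this example.

from dataclasses import dataclass

APPROVED_SERVERS = {"crm-mcp", "hr-mcp"}  # hypothetical enterprise MCP registry

@dataclass
class AgentRequest:
    user_id: str            # who the agent is acting for
    device_compliant: bool  # posture signal from device management
    server: str             # which MCP server the agent wants to reach
    tool: str               # the tool/action being invoked
    justification: str      # why the request is being made

def authorize(req: AgentRequest, user_can_use) -> bool:
    """Default deny: every condition must hold, otherwise the call is blocked."""
    if req.server not in APPROVED_SERVERS:
        return False                     # unknown MCP server is never forwarded
    if not req.device_compliant:
        return False                     # e.g., the non-compliant laptop case
    if not req.justification:
        return False                     # access must be explicitly justified
    return user_can_use(req.user_id, req.server, req.tool)  # identity-based policy

# Example: the account-executive scenario described in the announcement.
allowed = authorize(
    AgentRequest("ae@example.com", device_compliant=False,
                 server="crm-mcp", tool="summarize_accounts",
                 justification="quarterly review"),
    user_can_use=lambda user, server, tool: True,
)
assert allowed is False  # the request from a non-compliant device is denied
```

The design choice worth noting is that the check runs on every request rather than once per session, so a change in device posture or justification changes the outcome immediately.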

With MCP, AI Agents Now Have Power. SGNL Makes Sure They Use It Responsibly.

Yahoo

29-03-2025


With MCP, AI Agents Now Have Power. SGNL Makes Sure They Use It Responsibly.

MCP unlocks a new generation of AI-powered automation — but also a new class of access risk. SGNL ensures enterprises stay in control, even as agents take the wheel.

PALO ALTO, Calif., March 28, 2025--(BUSINESS WIRE)--A new wave of AI-powered automation is hitting the enterprise. Agents powered by large language models (LLMs) are now capable of performing real tasks across internal systems — from updating records to analyzing data and taking action — all triggered by a simple prompt. But with that power comes risk: without proper controls, these agents can access far more than they should.

Today, SGNL, the modern privileged identity management (PIM) platform, announced support for Model Context Protocol (MCP), a fast-emerging standard originally proposed by Anthropic and now also adopted by OpenAI that allows AI agents to integrate with real-world tools. With SGNL in place, enterprises can adopt these capabilities without opening the door to uncontrolled access, data exposure, or compliance violations.

"MCP is enabling a powerful new interface for AI," said Erik Gustavson, co-founder and Chief Product Officer at SGNL. "But like every interface shift before it, from cloud to mobile to APIs, it demands a new layer of security. SGNL provides that layer with identity-aware, policy-driven decisions made in real time."

Power without guardrails is a problem

While MCP unlocks a new class of productivity, it also removes many of the traditional boundaries that govern access. Once authenticated, an AI agent typically has broad access to systems for the duration of its session — without the ability to distinguish between what's sensitive, confidential, or inappropriate to share. A seemingly simple prompt like "What's my projected headcount next year?" could surface data tied to layoffs or internal reorgs. Multiply that by dozens of agents operating across systems, and enterprises face an exponential increase in the complexity (and risk) of access management.

"The problem isn't necessarily bad intent. It's blind access," said Marc Jordan, VP of Product Management at SGNL. "These agents don't inherently understand risk or sensitivity. Without real-time controls, it's only a matter of time before sensitive content is inadvertently exposed."

SGNL: built for the agentic era

Legacy role-based access control (RBAC) wasn't designed for autonomous systems. It assumes static roles, predictable patterns, and human decision-making, none of which apply in an agentic environment. With AI agents operating across tools, tasks, and teams, RBAC becomes either too permissive or too restrictive — and always too brittle.

SGNL brings real-time, contextual authorization to MCP-based environments, applying the same dynamic policies used to govern human access to AI agents. Its policy-as-a-proxy architecture sits between agents and enterprise systems, making a fresh decision for every request based on:

• Who the requester is (user identity)

• What they're trying to access

• Why they need it (based on context)

• Whether policy allows it in that moment

This means enterprises don't have to rely on brittle, over-privileged session tokens, legacy access controls, or hardcoded logic. SGNL integrates seamlessly, denying access by default — and granting it only when it's needed. The platform's approach is already proven at scale, protecting critical systems and data for Fortune 50 and Fortune 500 companies. Now, it extends that same protection to autonomous agents acting on their behalf.
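As a rough illustration of the "fresh decision for every request" idea, the sketch below models the four questions listed above as a structured request and evaluates them at call time before anything is forwarded upstream. It is a minimal sketch under assumed names, not SGNL's actual policy-as-a-proxy implementation; AccessRequest, decide, and proxy_call are all hypothetical.

```python
# Hedged illustration (not SGNL's product or API): the shape of a
# "policy-as-a-proxy" layer that sits between an AI agent and an enterprise
# system, making a fresh decision for every request instead of trusting a
# long-lived session token.

from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class AccessRequest:
    who: str       # user identity the agent is acting for
    what: str      # resource or action being requested
    why: str       # task context / justification
    context: dict  # signals such as device posture, location, time

def decide(req: AccessRequest) -> bool:
    """Placeholder policy: deny by default, allow only narrow, justified access."""
    if not req.why:
        return False
    if not req.context.get("device_compliant", False):
        return False
    # e.g., allow CRM reads only for users currently assigned to the account
    return req.what.startswith("crm:read:") and req.context.get("assigned_to_account", False)

def proxy_call(req: AccessRequest, forward):
    """Evaluate policy at request time, then forward or block the upstream call."""
    decision = decide(req)
    audit = {"time": datetime.now(timezone.utc).isoformat(),
             "request": req, "allowed": decision}
    print(audit)                 # every decision leaves an audit trail
    if not decision:
        raise PermissionError(f"Blocked: {req.who} -> {req.what}")
    return forward(req)          # only now does the request reach the system

# Usage: the same prompt succeeds or fails depending on live context.
result = proxy_call(
    AccessRequest("ae@example.com", "crm:read:acme", "prep for Acme renewal call",
                  {"device_compliant": True, "assigned_to_account": True}),
    forward=lambda r: {"records": "..."},
)
```

Because the decision is recomputed per request, revoking an assignment or flagging a device as non-compliant takes effect on the very next call, with no session tokens to hunt down.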
"We built SGNL for moments like this — when technology leaps forward but security isn't keeping up," added Gustavson. "AI agents should help move the business faster, not become a liability." See SGNL secure AI agents in action To see how SGNL protects against human and agent overreach, session sprawl, and silent data exposure in real time, schedule a demo today. About SGNL SGNL's modern Privileged Identity Management is redefining identity security for the enterprise with its cutting-edge identity data fabric. By decoupling credentials from identity and enabling real-time, context-aware access decisions, SGNL empowers organizations to reduce risk, streamline operations, and scale securely. That's why Fortune 500 companies are turning to SGNL to simplify their identity access programs and secure critical systems. Founded in 2021, SGNL is backed by top security technology investors, including Microsoft's M12 Venture Fund, Cisco Investments, Brightmind Partners, Costanoa Ventures, and others. Learn more at View source version on Contacts For media inquiries: press@ Sign in to access your portfolio
