
Operationalizing AI: A CISO's Guide To Adopting MCP With Confidence
CTO at SGNL. Inventor of CAEP. Okta Identity 25 Listee. Standards guy at OpenID. Believes access control is critical to cybersecurity.
The technology world is abuzz with the development of the model context protocol (MCP) because it unlocks powerful interactions between large language models (LLMs) and existing enterprise services. Yet the perils of unauthorized data access can dampen enterprises' enthusiasm for adopting MCP. The result is that the promised productivity benefits of AI become harder to achieve, while the unauthorized use of internal data by employees on personal AI accounts grows.
Here's what your organization can do to adopt AI and MCP with confidence and provide a secure alternative to unauthorized AI usage.
A Quick Recap
Large language models are a popular AI technology. A specialized class of LLMs is 'generative pre-trained transformers' (GPTs). Fundamentally, a GPT's behavior is like autocomplete in a word processor: By looking at the preceding text, it can predict the text that follows. The preceding text is called the context, of which the prompt you type is a part. The latest versions of GPTs are better at step-by-step reasoning, especially when prompted to 'think step by step,' allowing them to break complex questions down into logical steps before answering.
Some GPT-enabled services (like ChatGPT with browsing or plugins) integrate external tools that fetch web results and supply them as context for the model to reason over. For other data that users want to bring into the context, they retrieve it themselves and provide it to the GPT. This is called 'retrieval-augmented generation' (RAG), which in enterprises is often automated by external systems integrated with the LLM. Instead of such custom retrieval, the model context protocol provides a standardized way for the LLM (i.e., the model) to discover the tools and resources offered by available MCP servers and communicate with them to form the context. Hence, the model context protocol.
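To make the flow concrete, here is a minimal, purely illustrative sketch of that discover-then-retrieve pattern. It is not the real MCP wire protocol (which is JSON-RPC based); the class and tool names (`ToyMCPServer`, `search_docs`) are hypothetical stand-ins.

```python
# Toy stand-in for MCP-style tool discovery and context assembly.
# NOT the actual MCP protocol; names here are hypothetical.

class ToyMCPServer:
    """A hypothetical server exposing named tools to a model."""

    def __init__(self, tools):
        self._tools = tools  # mapping: tool name -> callable

    def list_tools(self):
        # The model first asks what tools exist...
        return sorted(self._tools)

    def call_tool(self, name, **kwargs):
        # ...then invokes one to fetch context.
        return self._tools[name](**kwargs)


def answer_with_context(question, server):
    """Sketch of RAG via a tool: fetch context, then 'generate'."""
    docs = server.call_tool("search_docs", query=question)
    context = "\n".join(docs)
    # A real LLM call would go here; we just echo the assembled prompt.
    return f"Context:\n{context}\n\nQuestion: {question}"


server = ToyMCPServer({"search_docs": lambda query: [f"doc about {query}"]})
print(server.list_tools())
print(answer_with_context("MCP", server))
```

The point of the sketch: the model, not a human-built pipeline, decides which tools to call and when, which is exactly what makes the authorization questions below pressing.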
Pitfalls In Enterprise Use Of MCP
Everyone is predictably excited about MCP because it unleashes a powerful way to enrich the capabilities of GPTs. If a GPT determines the several steps required to answer a question, it can reach out to the relevant MCP servers that can provide the context for each one of those steps in order to generate the answer.
The trouble is, how can an enterprise ensure that the data being requested and fetched by the LLM is, in fact, data the user is permitted to retrieve? And if the MCP server can modify data, how can the enterprise ensure the user has permission to make those modifications?
While this seems like a simple authorization question, it gets a bit more involved:
• MCP servers cannot run with more access than the requesting user because each user's permissions may differ. Each MCP query must therefore run with the requesting user's privileges.
• Since user privileges are dynamic (someone working on a specific customer's case today may not have a need to access that customer's data tomorrow), it follows that MCP servers need to understand what a user has access to at the time of the query.
• Enterprises often run in a permissive environment, granting users broad access based on their job function (or, often, their previous job functions too). This access frequently includes sensitive customer or internal data. Human users are judicious about how such data appears in their output. Because MCP puts this same access in the hands of LLMs, an LLM probably will not exercise the same level of judgment in deciding whether a piece of information should be used.
• Thus, MCP defeats the de facto 'security through obscurity' operating model. Users won't try multiple ways of obtaining information they are not supposed to have, whereas LLMs will try solving a problem many different ways before giving up. So, if the data is accessible to the MCP server, it will find its way into answers, revealing information that should not be revealed.
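The bullets above boil down to one rule: every tool call must be authorized against the requesting user's permissions at call time, never against the server's own. A minimal sketch, with a hypothetical in-memory permission store (`PERMISSIONS`, scopes like `crm:read` are illustrative):

```python
# Hypothetical sketch: each MCP tool call is checked against the
# *requesting user's* current permissions, not the server's.

PERMISSIONS = {          # hypothetical store; dynamic in a real system
    "alice": {"crm:read"},
    "bob": set(),
}


class Forbidden(Exception):
    pass


def call_tool_as_user(user, required_scope, fetch):
    """Run `fetch` only if `user` currently holds `required_scope`."""
    if required_scope not in PERMISSIONS.get(user, set()):
        raise Forbidden(f"{user} lacks {required_scope}")
    return fetch()


def crm_record():
    return {"customer": "Acme", "notes": "sensitive"}


print(call_tool_as_user("alice", "crm:read", crm_record))
try:
    call_tool_as_user("bob", "crm:read", crm_record)
except Forbidden as e:
    print(e)
```

Because the check happens per call, a permission revoked this morning is enforced by this afternoon's query, which matters precisely because an LLM will retry through every accessible path.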
Securing MCP Usage
Implementing the following strategies can help secure MCP for organizational use:
In order for enterprises to effectively use MCP, they must adopt a 'zero standing privilege' (ZSP) access control strategy. Unlike in the conventional model, with zero standing privilege, at any given time the user only has access to the data they need to complete the specific task they are currently working on.
This lays the foundation for ensuring that MCP servers do not accidentally provide data that should not be available in producing an answer.
ZSP automatically implies a dynamic access control strategy because it is impossible for anyone to manually update users' permissions to what they need at any given moment.
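A minimal sketch of what that dynamic grant model looks like, assuming a hypothetical just-in-time grant store where access is issued for a specific task and expires with it (all names and scopes here are illustrative):

```python
# Hypothetical ZSP sketch: access is granted just-in-time for a
# specific task and expires automatically; there is no standing
# entitlement left behind to check or to steal.

import time

GRANTS = []  # list of (user, scope, expires_at) - hypothetical store


def grant_for_task(user, scope, ttl_seconds):
    """Issue a short-lived grant tied to the task at hand."""
    GRANTS.append((user, scope, time.time() + ttl_seconds))


def is_allowed(user, scope):
    """A query is allowed only while a live grant exists."""
    now = time.time()
    return any(u == user and s == scope and exp > now
               for u, s, exp in GRANTS)


grant_for_task("alice", "case:1234:read", ttl_seconds=3600)
print(is_allowed("alice", "case:1234:read"))  # live grant for this task
print(is_allowed("alice", "case:9999:read"))  # no standing access
```

The design choice to make expiry the default is what distinguishes ZSP from conventional role-based access: nobody has to remember to revoke anything.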
And one more thing: ZSP is great for defending against cyber breaches, too, because attackers who assume employee identities cannot access much data or cause much damage.
LLMs acting on behalf of a user should not be able to discover tools within MCP servers that the requesting user should not have access to at the time of execution. This can be done by ensuring that the 'list tools' call made by the MCP client is authorized using the user's identity so that the MCP server can appropriately hide tools that are not to be used by that user.
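One way to sketch that identity-aware 'list tools' call, assuming a hypothetical per-tool scope mapping (the tool names and scopes are illustrative, not part of the MCP specification):

```python
# Hypothetical sketch: the server filters its tool catalogue per user,
# so tools the user may not invoke are never even discoverable.

TOOL_SCOPES = {            # hypothetical mapping: tool -> required scope
    "read_crm": "crm:read",
    "delete_crm": "crm:admin",
}
USER_SCOPES = {"alice": {"crm:read"}}  # hypothetical identity store


def list_tools_for(user):
    """Authorize the 'list tools' call itself with the user's identity."""
    scopes = USER_SCOPES.get(user, set())
    return sorted(t for t, req in TOOL_SCOPES.items() if req in scopes)


print(list_tools_for("alice"))    # delete_crm stays hidden
print(list_tools_for("mallory"))  # unknown user sees nothing
```

Hiding the tool, rather than merely rejecting the call, also keeps the tool's existence out of the LLM's context, so the model never plans a step around a capability it cannot use.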
MCP servers must execute with the requesting user's privileges: if they carry their own elevated privileges, downstream services cannot determine what data should or should not be provided. Having the entire chain execute as the user also makes it easier to audit data usage across all systems.
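The end-to-end idea can be sketched as follows: the user's identity rides along the whole call chain, so the downstream service both authorizes and audits as that user, never as a privileged service account. (The function names and audit structure here are hypothetical.)

```python
# Hypothetical sketch: the MCP server holds no privilege of its own;
# it forwards the end user's identity to the downstream service,
# which authorizes and audits as that user.

AUDIT_LOG = []  # hypothetical audit trail: (user, resource) entries


def downstream_fetch(user, resource):
    """Downstream service: authorize and audit as the end user."""
    AUDIT_LOG.append((user, resource))
    if user != "alice":  # stand-in for a real authorization check
        raise PermissionError(f"{user} denied for {resource}")
    return f"data:{resource}"


def mcp_tool_call(user, resource):
    # No service-account credentials here; the user identity passes through.
    return downstream_fetch(user, resource)


print(mcp_tool_call("alice", "invoice-42"))
print(AUDIT_LOG)
```

Because the audit entry names the actual user rather than a shared service account, data-usage investigations can trace every answer an LLM produced back to the person who asked.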
Conclusion
MCP is a promising technology, and harnessing it with the right security guardrails can unleash employee productivity while clamping down on unauthorized AI usage. Adopting a zero standing privilege strategy with appropriate controls over MCP servers can help organizations deploy MCP with confidence.
Forbes Technology Council is an invitation-only community for world-class CIOs, CTOs and technology executives.
