
Why You Need To Know About The Model Context Protocol (MCP)
Swedish multinational clothing design retail company Hennes & Mauritz, H&M, logo seen displayed on a smartphone with an Artificial intelligence (AI) chip (Photo Illustration by Budrul Chukrut/SOPA Images/LightRocket via Getty Images)
The Model Context Protocol (MCP) is an open standard that enables developers to build secure, two-way connections between their data sources and AI-powered tools. The architecture is straightforward: developers can either expose their capabilities through MCP servers or build AI applications (MCP clients) that connect to these servers. It will accelerate the evolution of agentic commerce (a-commerce).
MCP was originally developed by Anthropic but is now also supported by OpenAI. In March, OpenAI CEO Sam Altman said that OpenAI would add support for MCP across its products, including the ChatGPT desktop app. Other companies, including Block and Apollo, have added MCP support to their platforms. The protocol itself allows AI models to bring in data from a variety of sources, so that developers can build two-way connections between those data sources and AI-powered applications such as chatbots.
(For the technically minded: developers expose capabilities through MCP servers, and agents can then use MCP clients to connect to those servers on command. Agents query the servers to see what tools are available, and the server provides metadata so that the agent knows how to use those tools. When the agent decides to use a tool, it sends a tool call request in a standardized JSON format.)
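To make that concrete, here is a minimal sketch, in Python with the messages shown as plain dictionaries, of those two steps: the client asking a server what tools it offers, and then invoking one. The JSON-RPC framing follows the MCP specification, but the tool name and arguments are purely illustrative.

    import json

    # Step 1: the agent's MCP client asks the server which tools it exposes.
    list_tools_request = {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "tools/list",
    }

    # The server replies with metadata (name, description, input schema) for each
    # tool, which is how the agent learns what a tool does and how to call it.

    # Step 2: the agent decides to use a tool and sends a standardized tool call.
    # "get_product_info" and its arguments are illustrative, not a real server's tool.
    call_tool_request = {
        "jsonrpc": "2.0",
        "id": 2,
        "method": "tools/call",
        "params": {
            "name": "get_product_info",
            "arguments": {"sku": "12345-ABC"},
        },
    }

    print(json.dumps(call_tool_request, indent=2))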
Why is this important? Because it provides a standardized way for tools and agents to communicate and exchange context about users, tasks, data and goals, and it offers:
Interoperability: MCP allows different AI models, assistants, and external applications to share context, making it easier to integrate multiple AI-powered tools and services;
Coordination: MCP helps orchestrate tasks between various AI agents and external apps, ensuring they work together smoothly without duplicating work or requiring repeated user input;
An Ecosystem: A standard like MCP enables third-party developers to build plug-ins or tools that can easily "speak the same language" as AI assistants, accelerating ecosystem growth.
Just as an example, take a look at the Google Maps MCP server. This currently offers seven capabilities: converting an address to coordinates (and vice versa), searching for places, getting detailed information about a place, working out the distances between places (along with travel duration), getting elevation data and, of course, getting directions.
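As a sketch of how an agent learns about those capabilities, one entry in the server's tool listing might look roughly like this. The name, description and schema below are assumptions for illustration; an agent relies on whatever metadata the real server actually publishes.

    # Rough sketch of one tool's metadata as returned by a maps MCP server's
    # tool listing. The name, description and schema are illustrative assumptions.
    geocode_tool_metadata = {
        "name": "maps_geocode",
        "description": "Convert a street address into latitude/longitude coordinates",
        "inputSchema": {
            "type": "object",
            "properties": {"address": {"type": "string"}},
            "required": ["address"],
        },
    }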
Who cares about MCP? Well, many organisations (including retailers, banks and others) want to develop their own AI capabilities so that their agents can interact with their customers' agents. Look at retail as an example. Hari Vasudev, CTO of Walmart's US business, says Walmart will be building agents of its own to interact with consumers' agents, providing recommendations or additional product information, while the consumer agents could give the retailer's agents information about preferences and so on.
Banks, retailers and others want customers' agents to engage with their own agents, rather than using web pages or APIs to get the services they want. Frank Young summarises this dynamic well, suggesting that organisations provide APIs to support simple flows (eg, subscriptions) using current infrastructure, but implement MCP servers for agentic commerce's frontier (negotiation, fraud response, optimisation) to capture these complex, high-value scenarios.
I find this vision of agentic commerce really exciting, but in order to realise the benefits it is important that we have the necessary infrastructure to make it safe, secure and cost-effective. MCP does not define a standard mechanism for servers and clients to mutually authenticate (is that Walmart's agent? is that Dave Birch's agent?), nor does it set out how to delegate authentication to downstream APIs (so that my agent can, for example, use open banking on my behalf). One way to fix this would be for the MCP server to validate agent credentials against some form of registry, a rudimentary KYC for AI, so that only trusted agents get in. This could be a precursor to a more sophisticated Know-Your-Agent (KYA) infrastructure.
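A minimal sketch of what that registry check might look like is below. The credential format, field names and registry contents are all hypothetical, and a real KYA scheme would rely on cryptographic verification of signed credentials rather than a simple lookup.

    # A rudimentary "KYC for AI" gate, as suggested above: before opening a session,
    # the MCP server looks the calling agent up in a registry of trusted agents.
    # The registry contents, credential fields and identifiers are all hypothetical.

    TRUSTED_AGENT_REGISTRY = {
        "agent:dave-birch-personal": "example-wallet-issuer",  # agent id -> expected issuer
    }

    def is_trusted_agent(credential: dict) -> bool:
        """Permit the session only if the agent is registered and its issuer matches."""
        return TRUSTED_AGENT_REGISTRY.get(credential.get("agent_id")) == credential.get("issuer")

    # The server would run this check before listing or exposing any tools.
    print(is_trusted_agent({"agent_id": "agent:dave-birch-personal",
                            "issuer": "example-wallet-issuer"}))  # True: session allowed
    print(is_trusted_agent({"agent_id": "agent:unknown",
                            "issuer": "nobody"}))                 # False: session refused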
As MCP servers are managed by independent developers and contributors, there is no centralised platform to audit, enforce or validate security standards. This decentralised model increases the likelihood of inconsistencies in security practices, making it difficult to ensure that all MCP servers adhere to secure development principles. Moreover, the absence of a unified package management system for MCP servers complicates installation and maintenance, raising the risk of deploying outdated or misconfigured versions. The use of unofficial installation tools across different MCP clients introduces further variability in server deployment, making it harder to maintain consistent security practices across the board.
MCP also lacks a standardised framework for authenticating counterparties and authorising access: it has no mechanism to verify identities or regulate access, without which it becomes difficult to enforce granular permissions. Since MCP lacks a permissions model and relies on OAuth, a session with a tool is either fully accessible or completely restricted, which, as Andreessen Horowitz points out, will add complexity as more agents and tools are introduced. Something more will therefore be needed, and one candidate is what is known as a policy decision point (PDP). This is a component that evaluates access control policies: given inputs such as the identity of the actor, the action, the resource and the context, it decides whether to permit or deny the operation.
Mike Schwartz, founder of cybersecurity startup Gluu, asserts that while PDPs were once heavyweight infrastructure running on servers or mainframes, PDPs using the Cedar open-source policy language are small and fast enough to run embedded in a mobile application, and should evolve into an essential component of the agentic AI stack. In 2024 AWS announced the Cedar policy syntax after extensive scientific research on the topic of automated reasoning. Importantly, Cedar is deterministic: given the same input, you will always get the same answer. Determinism in security is required to build trust, which comes from doing the same thing over and over. An embeddable Cedar-based PDP, as Mike says, checks all the boxes for agentic AI.
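To see the shape of such a component, here is a toy, deterministic decision function taking the inputs described above: actor, action, resource and context. It is only a sketch; a real deployment would evaluate Cedar policies rather than hard-coded rules, and the identifiers and spending cap here are invented for illustration.

    # A toy policy decision point (PDP): deterministic permit/deny given the
    # actor, action, resource and context. Not Cedar itself, just the shape of
    # the decision; the agent id, merchant id and cap below are illustrative.

    def decide(principal: str, action: str, resource: str, context: dict) -> str:
        """Given the same inputs, this always returns the same PERMIT or DENY answer."""
        # Example rule: a consumer's agent may place orders with this merchant,
        # but only up to a spending cap carried in the request context.
        if (
            principal == "agent:dave-birch-personal"
            and action == "place_order"
            and resource == "merchant:example-retailer"
            and context.get("amount", 0) <= 400
        ):
            return "PERMIT"
        return "DENY"

    print(decide("agent:dave-birch-personal", "place_order",
                 "merchant:example-retailer", {"amount": 250}))   # PERMIT
    print(decide("agent:dave-birch-personal", "place_order",
                 "merchant:example-retailer", {"amount": 4000}))  # DENY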
This is not just another kind of e-commerce. As Jamie Smith points out, when you tell your agent 'Find me a hotel in Paris under $400 with a view of the Eiffel Tower', it doesn't just go off to Google and search. It packages the request up with your verified credentials (from your digital wallet), payment preferences and loyalty schemes, along with constraints such as the price cap and date ranges. This is the 'structured context payload' that goes to the various travel sites that have the capability to respond to, and interact with, your agent.
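What might that payload look like? A rough sketch is below; the field names and structure are assumptions for illustration, since there is no single published schema for such a payload yet.

    # Illustrative "structured context payload" assembled by the consumer's agent
    # before it approaches the travel sites' agents. Field names are assumptions.
    context_payload = {
        "request": "hotel_booking",
        "destination": "Paris",
        "constraints": {
            "max_price_per_night_usd": 400,
            "dates": {"check_in": "2025-09-12", "check_out": "2025-09-15"},
            "preferences": ["view_of_eiffel_tower"],
        },
        "credentials": {
            "identity": "verifiable-credential-from-digital-wallet",  # placeholder
            "loyalty_programmes": ["example-hotel-rewards"],
        },
        "payment": {"preferred_method": "card-on-file", "currency": "USD"},
    }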
Unlike e-commerce, which was built on an internet that never had a security layer (so no digital money and no digital identity), a-commerce will be built on an infrastructure that delivers real security to market participants. Putting this secure infrastructure in place is a fantastic opportunity for fintechs and other startups who want to provide digital money and digital identity as core components. As the identification, authentication and authorisation mechanisms around MCP are standardised, there is no reason not to expect the rapid acceleration of a-commerce across the mass market.