
Why You Need To Know About The Model Context Protocol (MCP)
Swedish multinational clothing retailer Hennes & Mauritz (H&M) logo displayed on a smartphone with an artificial intelligence (AI) chip (Photo Illustration by Budrul Chukrut/SOPA Images/LightRocket via Getty Images)
The Model Context Protocol (MCP) is an open standard that enables developers to build secure, two-way connections between their data sources and AI-powered tools. The architecture is straightforward: developers can either expose their capabilities through MCP servers or build AI applications (MCP clients) that connect to these servers. It will accelerate the evolution of agentic commerce (a-commerce).
MCP was originally developed by Anthropic but is now also supported by OpenAI. In March, OpenAI CEO Sam Altman said that OpenAI would add support for MCP across its products, including the ChatGPT desktop app. Other companies, including Block and Apollo, have added MCP support to their platforms. The protocol allows AI models to bring in data from a variety of sources, so that developers can build two-way connections between data sources and AI-powered applications such as chatbots.
(For the technically minded: developers expose capabilities through MCP servers, and agents then use MCP clients to connect to those servers on demand. An agent queries a server to discover which tools are available, and the server returns metadata telling the agent how to use them. When the agent decides to use a tool, it sends a tool call request in a standardized JSON format.)
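To make that exchange concrete, here is a minimal sketch of the messages involved, written as Python dictionaries. The tools/list and tools/call methods and the JSON-RPC 2.0 envelope come from the MCP specification; the tool itself (get_order_status) is invented purely for illustration.

```python
import json

# 1. The agent asks the MCP server which tools it offers.
list_request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

# 2. The server replies with metadata describing each tool: a name,
#    a description and a JSON Schema for its inputs. This is how the
#    agent learns how to call it. (Illustrative response, abbreviated.)
list_response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [
            {
                "name": "get_order_status",  # hypothetical tool
                "description": "Look up the status of a customer order",
                "inputSchema": {
                    "type": "object",
                    "properties": {"order_id": {"type": "string"}},
                    "required": ["order_id"],
                },
            }
        ]
    },
}

# 3. When the agent decides to use the tool, it sends a tool call
#    request in the standardized JSON format.
call_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "get_order_status",
        "arguments": {"order_id": "A-12345"},
    },
}

print(json.dumps(call_request, indent=2))
```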
Why is this important? Because it provides a standardized way for tools and agents to communicate and exchange context about users, tasks, data and goals, offering:
Interoperability: MCP allows different AI models, assistants, and external applications to share context, making it easier to integrate multiple AI-powered tools and services;
Coordination: MCP helps orchestrate tasks between various AI agents and external apps, ensuring they work together smoothly without duplicating work or requiring repeated user input;
An Ecosystem: A standard like MCP enables third-party developers to build plug-ins or tools that can easily "speak the same language" as AI assistants, accelerating ecosystem growth.
Just as an example, take a look at the Google Maps MCP server. This currently offers seven capabilities: converting an address to coordinates (and vice versa), searching for places, getting detailed information about a place, working out distances between places (along with travel duration), getting elevation data and, of course, getting directions.
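As a sketch of how an agent-side client would drive that server, here is a minimal example using the official MCP Python SDK. It assumes Node.js is available to run the published @modelcontextprotocol/server-google-maps package and that a GOOGLE_MAPS_API_KEY environment variable is set; the maps_geocode tool name reflects the server's published tool list at the time of writing, so verify it against the current documentation.

```python
import asyncio
import os

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

# Launch the reference Google Maps MCP server as a subprocess.
server = StdioServerParameters(
    command="npx",
    args=["-y", "@modelcontextprotocol/server-google-maps"],
    env={"GOOGLE_MAPS_API_KEY": os.environ["GOOGLE_MAPS_API_KEY"]},
)

async def main():
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # Discover the server's capabilities, as an agent would.
            tools = await session.list_tools()
            for tool in tools.tools:
                print(f"{tool.name}: {tool.description}")

            # Call one of them: convert an address to coordinates.
            result = await session.call_tool(
                "maps_geocode",  # published tool name; verify against the server
                {"address": "1600 Amphitheatre Parkway, Mountain View, CA"},
            )
            print(result.content)

asyncio.run(main())
```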
Who cares about MCP? Well, many organisations (including retailers, banks and others) want to develop their own AI capabilities so that their agents can interact with their customers' agents. Look at retail as an example. Hari Vasudev, CTO of Walmart's US business, says the retailer will build agents of its own to interact with consumers' agents, providing recommendations or additional product information, while the consumers' agents could give the retailer's agents information about preferences and so on.
Banks, retailers and others want customers' agents to engage with their agents rather than use web pages or APIs to get the services they want. Frank Young summarises this dynamic well, suggesting that organisations provide APIs to support simple flows (e.g., subscriptions) using current infrastructure, but implement MCP servers for agentic commerce's frontier (negotiation, fraud response, optimisation) to capture these complex, high-value scenarios.
I find this vision of agentic commerce really exciting, but in order to realise the benefits we need the infrastructure to make it safe, secure and cost-effective. MCP does not define a standard mechanism for servers and clients to mutually authenticate (is that Walmart's agent? is that Dave Birch's agent?), nor does it set out how to delegate authentication with APIs (so that my agent can use open banking, for example). One way to fix this would be for the MCP server to validate agent credentials against some form of registry, a rudimentary KYC for AI, so that only trusted agents get in. This could be a precursor to a more sophisticated Know-Your-Agent (KYA) infrastructure.
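To illustrate the registry idea, here is a minimal, entirely hypothetical sketch in Python: the registry endpoint, the credential format and the function names are all invented, since no such standard exists today.

```python
import requests

# Hypothetical trusted-agent registry exposed over HTTPS (invented endpoint).
AGENT_REGISTRY_URL = "https://registry.example.com/agents"

def verify_agent(agent_id: str, credential: str) -> bool:
    """Rudimentary 'KYC for AI': check a presented agent credential
    against a trusted registry before allowing any tool calls.
    Purely illustrative; no such registry standard exists yet."""
    resp = requests.get(
        f"{AGENT_REGISTRY_URL}/{agent_id}",
        headers={"Authorization": f"Bearer {credential}"},
        timeout=5,
    )
    if resp.status_code != 200:
        return False
    # Only admit agents the registry currently marks as trusted.
    return resp.json().get("status") == "trusted"

def handle_tool_call(agent_id: str, credential: str, request: dict) -> dict:
    """Gate every incoming MCP tool call on the registry check."""
    if not verify_agent(agent_id, credential):
        return {"error": "unknown or untrusted agent"}  # refuse the session
    return {"result": "tool call would be dispatched here"}
```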
As MCP servers are managed by independent developers and contributors, there is no centralised platform to audit, enforce or validate security standards. This decentralised model increases the likelihood of inconsistencies in security practices, making it difficult to ensure that all MCP servers adhere to secure development principles. Moreover, the absence of a unified package management system for MCP servers complicates installation and maintenance, raising the risk of deploying outdated or misconfigured versions. The use of unofficial installation tools across different MCP clients introduces further variability in server deployment, making consistent security standards harder to maintain.
MCP also lacks a standardised framework for counterparty authentication and authorisation, and has no mechanism to verify identities or regulate access, without which it is difficult to enforce granular permissions. Since MCP relies on OAuth rather than a fine-grained permissions model, a session with a tool is either fully accessible or completely restricted; as Andreessen Horowitz points out, this will add complexity as more agents and tools are introduced. Something more will therefore be needed, and one candidate is what is known as a policy decision point (PDP). This is a component that evaluates access control policies: given inputs such as the identity of the actor, the action, the resource and the context, it decides whether to permit or deny the operation.
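In code, a PDP reduces to a function from (principal, action, resource, context) to a permit-or-deny decision. The toy sketch below illustrates the concept only; it is not Cedar or any real product, and the rules are invented.

```python
from dataclasses import dataclass, field

@dataclass
class AccessRequest:
    principal: str          # who is asking, e.g. "agent:dave-birch"
    action: str             # what they want to do, e.g. "payments/initiate"
    resource: str           # what they want to act on, e.g. "account:1234"
    context: dict = field(default_factory=dict)  # amount, time of day, etc.

def decide(req: AccessRequest) -> str:
    """A toy policy decision point: evaluates hard-coded rules and returns
    'permit' or 'deny'. Deterministic: the same request always yields the
    same answer."""
    # Rule 1: only registered agents may act at all.
    if not req.principal.startswith("agent:"):
        return "deny"
    # Rule 2: payments above a hard cap are denied.
    if req.action == "payments/initiate" and req.context.get("amount", 0) > 400:
        return "deny"
    # Everything else is permitted (a production PDP would default to deny).
    return "permit"

print(decide(AccessRequest("agent:dave-birch", "payments/initiate",
                           "account:1234", {"amount": 250})))  # permit
```

A production PDP would load policies from a policy store and default to deny, but the shape of the decision, the same four inputs mapped to the same answer every time, is what makes it auditable.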
Mike Schwartz, founder of cybersecurity startup Gluu, asserts that while PDPs were once heavyweight infrastructure running on servers or mainframes, PDPs using the Cedar open-source policy language are small and fast enough to run embedded in a mobile application, and should evolve into an essential component of the agentic AI stack. In 2024, AWS announced the Cedar policy syntax after extensive scientific research on automated reasoning. Importantly, Cedar is deterministic: given the same input, you will always get the same answer. Determinism is essential for trust in security, which is built by doing the same thing the same way every time. An embeddable Cedar-based PDP, as Mike says, checks all the boxes for agentic AI.
This is not just another kind of e-commerce. As Jamie Smith points out, when you tell your agent 'Find me a hotel in Paris under $400 with a view of the Eiffel Tower', it doesn't just go off to Google and search. It packages the request up with your verified credentials (from your digital wallet) and payment preferences, together with constraints such as the price cap, date ranges and loyalty programmes. This is the 'structured context payload' that goes to the various travel sites that have the capability to respond to, and interact with, your agent.
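There is no standard schema for a structured context payload yet, so the sketch below is purely illustrative: every field name is invented, but it conveys the shape of what an agent might send.

```python
# A hypothetical structured context payload for the hotel request above.
# All field names are invented for illustration; no standard exists yet.
context_payload = {
    "task": "book_hotel",
    "constraints": {
        "city": "Paris",
        "max_price_per_night": {"amount": 400, "currency": "USD"},
        "dates": {"check_in": "2025-09-12", "check_out": "2025-09-15"},
        "preferences": ["view of the Eiffel Tower"],
    },
    "credentials": {
        # Verifiable credentials presented from the user's digital wallet.
        "identity": "vc:example:passport-over-18",
        "payment": "vc:example:card-on-file",
    },
    "loyalty_programmes": ["hotelchain-gold"],
}
```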
Unlike e-commerce, which was built on an internet that never had a security layer (so no digital money and no digital identity), a-commerce will be built on an infrastructure that delivers real security to market participants. Putting this secure infrastructure in place is a fantastic opportunity for fintechs and other startups that want to provide digital money and digital identity as core components. As the identification, authentication and authorisation mechanisms around MCP are standardised, there is no reason not to expect the rapid acceleration of a-commerce across the mass market.