
Why You Need To Know About The Model Context Protocol (MCP)
The Model Context Protocol (MCP) is an open standard that enables developers to build secure, two-way connections between their data sources and AI-powered tools. The architecture is straightforward: developers can either expose their capabilities through MCP servers or build AI applications (MCP clients) that connect to these servers. It will accelerate the evolution of agentic commerce (a-commerce).
MCP was originally developed by Anthropic but is now also supported by OpenAI. In March, OpenAI CEO Sam Altman said that OpenAI would add support for MCP across its products, including the desktop app for ChatGPT. Other companies, including Block and Apollo, have added MCP support to their platforms. The protocol allows AI models to bring in data from a variety of sources, so that developers can build two-way connections between data sources and AI-powered applications, such as chatbots.
(For the technically minded: Developers expose capabilities through MCP servers and agents can then use MCP clients to connect to those servers on command. Agents query the servers to see what tools are available. The server provides metadata so that the agent knows how to use the tools. When the agent decides to use a tool, it sends a tool call request in a standardized JSON format.)
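The exchange described above can be sketched as plain JSON-RPC 2.0 messages, which is the wire format the MCP specification uses. The method names ("tools/list" and "tools/call") come from the specification; the tool name and arguments below are purely illustrative, not from any real server:

```python
import json

# First message: the agent asks the server which tools are available.
list_tools_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
}

# Second message: the agent invokes one of those tools by name, passing
# arguments that match the schema the server advertised. The tool name
# "get_directions" and its arguments are hypothetical.
call_tool_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "get_directions",
        "arguments": {"origin": "London", "destination": "Paris"},
    },
}

print(json.dumps(call_tool_request, indent=2))
```

Because the envelope is standardised, any MCP client can talk to any MCP server without bespoke integration code.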
Why is this important? It is because it provides a standardized way for tools and agents to communicate and exchange context about users, tasks, data, and goals and offers:
Interoperability: MCP allows different AI models, assistants, and external applications to share context, making it easier to integrate multiple AI-powered tools and services;
Coordination: MCP helps orchestrate tasks between various AI agents and external apps, ensuring they work together smoothly without duplicating work or requiring repeated user input;
An Ecosystem: A standard like MCP enables third-party developers to build plug-ins or tools that can easily "speak the same language" as AI assistants, accelerating ecosystem growth.
Just as an example, take a look at the Google Maps MCP server. This currently offers seven capabilities: converting an address to coordinates (and vice versa), searching for places, getting detailed information about a place, working out the distances between places (along with travel duration), getting elevation data and, of course, getting directions.
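To see how an agent learns to use such a server, here is a sketch of the kind of metadata a "tools/list" call might return for a directions capability. The tool name and schema here are illustrative stand-ins, not the server's actual published interface:

```python
# A hypothetical tools/list response describing one capability. Real MCP
# servers describe each tool with a name, a human-readable description,
# and a JSON Schema for its inputs.
tools_list_response = {
    "tools": [
        {
            "name": "maps_directions",  # illustrative name
            "description": "Get directions between two places.",
            "inputSchema": {
                "type": "object",
                "properties": {
                    "origin": {"type": "string"},
                    "destination": {"type": "string"},
                    "mode": {"type": "string", "enum": ["driving", "walking", "transit"]},
                },
                "required": ["origin", "destination"],
            },
        }
    ]
}

# The agent reads the schema to learn which arguments are mandatory.
required = tools_list_response["tools"][0]["inputSchema"]["required"]
print(required)
```

This metadata is what lets an agent decide, at runtime and without prior integration work, whether a tool fits the task at hand and how to call it.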
Who cares about MCP? Well, many organisations (including retailers, banks and others) want to develop their own AI capabilities so that their agents can interact with their customers' agents. Look at retail as an example. Hari Vasudev, CTO of Walmart's US business, says they will be building agents of their own to interact with the consumers' agents to provide recommendations or additional product information, while the consumer agents could provide the retailer agents with information about preferences and so on.
Banks, retailers and others want customers' agents to engage with their agents, rather than using web pages or APIs to get the services that they want. Frank Young summarises this dynamic well, suggesting that organisations should provide APIs to support simple flows (eg, subscriptions) using current infrastructure, but implement MCP servers for agentic commerce's frontier (negotiation, fraud response, optimisation) to capture these complex, high-value scenarios.
I find this vision of agentic commerce really exciting but in order to realise the benefits, it is important that we have the necessary infrastructure to make it safe, secure and cost-effective. MCP does not define a standard mechanism for servers and clients to mutually authenticate (is that Walmart's agent? is that Dave Birch's agent?) and nor does it set out how to delegate authentication with APIs (so that my agent can use open banking). One way to fix this would be for the MCP server to validate agent credentials against some form of registry, a rudimentary KYC for AI so that only trusted agents get in. This could be a precursor to a more sophisticated Know-Your-Agent (KYA) infrastructure.
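The "rudimentary KYC for AI" idea can be made concrete with a toy sketch: before serving a request, the MCP server looks the calling agent up in a registry of trusted agents. The registry structure, credential fields and agent identifiers below are all hypothetical:

```python
# Hypothetical registry of agents that have passed some vetting process.
# In practice this would be a signed, externally operated directory rather
# than an in-memory dictionary.
TRUSTED_AGENT_REGISTRY = {
    "walmart-retail-agent": {"issuer": "walmart.example", "status": "verified"},
    "dave-birch-agent": {"issuer": "wallet.example", "status": "verified"},
    "shady-agent": {"issuer": "unknown", "status": "revoked"},
}

def admit_agent(agent_id: str) -> bool:
    """Admit an agent only if it is registered and currently verified."""
    entry = TRUSTED_AGENT_REGISTRY.get(agent_id)
    return entry is not None and entry["status"] == "verified"

print(admit_agent("dave-birch-agent"))  # registered and verified
print(admit_agent("shady-agent"))       # registered but revoked
print(admit_agent("nobody"))            # not in the registry at all
```

A fuller Know-Your-Agent scheme would replace the string lookup with cryptographically verifiable credentials, but the gatekeeping logic sits in the same place: at the server's front door.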
As MCP servers are managed by independent developers and contributors, there is no centralised platform to audit, enforce, or validate security standards. This decentralised model increases the likelihood of inconsistencies in security practices, making it difficult to ensure that all MCP servers adhere to secure development principles. Moreover, the absence of a unified package management system for MCP servers complicates the installation and maintenance process, increasing the likelihood of deploying outdated or misconfigured versions. The use of unofficial installation tools across different MCP clients further introduces variability in server deployment, making it harder to maintain consistent security standards.
MCP also lacks a standardised framework for authenticating counterparties and authorising access, and it has no mechanism to verify identities or regulate access, without which it is difficult to enforce granular permissions. Since MCP lacks a permissions model and relies on OAuth, a session with a tool is either fully accessible or completely restricted and, as Andreessen Horowitz points out, this will add complexity as more agents and tools are introduced. Something more will therefore be needed, and one candidate is what is known as a policy decision point (PDP). This is a component that evaluates access control policies: given inputs such as the identity of the actor, the action, the resource and the context, it decides whether to permit or deny the operation.
Mike Schwartz, founder of cybersecurity startup Gluu, asserts that while PDPs were once heavyweight infrastructure running on servers or mainframes, PDPs using the Cedar open-source policy language are small and fast enough to run embedded in a mobile application, and should evolve into an essential component of the agentic AI stack. In 2023, AWS open-sourced the Cedar policy language after extensive scientific research on automated reasoning. Importantly, Cedar is deterministic: given the same input, you will always get the same answer. Determinism is essential for trust in security, which comes from doing the same thing, the same way, every time. An embeddable Cedar-based PDP, as Mike says, checks all the boxes for agentic AI.
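A real PDP evaluates policies written in a language such as Cedar; the toy sketch below hard-codes one illustrative rule in plain Python simply to show the shape of the decision: the four inputs go in, a deterministic permit-or-deny comes out. The actors, actions and spending-cap rule are all invented for illustration:

```python
def decide(principal: str, action: str, resource: str, context: dict) -> str:
    """A toy policy decision point: deterministic permit/deny.

    Illustrative policy: any agent may query prices, but an agent may only
    place an order if the amount is within the cap its principal delegated
    to it (carried in the request context).
    """
    if action == "query_price":
        return "permit"
    if action == "place_order":
        amount = context.get("amount", float("inf"))  # missing amount -> deny
        cap = context.get("cap", 0)                   # missing cap -> deny
        if amount <= cap:
            return "permit"
    return "deny"  # default-deny, as a security component should

print(decide("dave-birch-agent", "place_order", "hotel-booking",
             {"amount": 350, "cap": 400}))
print(decide("dave-birch-agent", "place_order", "hotel-booking",
             {"amount": 950, "cap": 400}))
```

The point of the sketch is the granularity: instead of OAuth's all-or-nothing session, the PDP can permit one tool call and deny the next, based on context.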
This is not just another kind of e-commerce. As Jamie Smith points out, when you tell your agent 'Find me a hotel in Paris under $400 with a view of the Eiffel Tower', it doesn't just go off to Google and search. It packages the request up with your verified credentials (from your digital wallet), payment preferences, loyalty schemes and so on, together with constraints such as the price cap and date ranges. This is the 'structured context payload' that goes to the various travel sites that have the capability to respond to, and interact with, your agent.
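For the Paris hotel example, such a structured context payload might look like the sketch below. Every field name is illustrative; in a real system the credentials would be verifiable attestations from the user's digital wallet rather than plain strings:

```python
# A hypothetical structured context payload: intent, hard constraints,
# credentials, and preferences bundled into one machine-readable request.
context_payload = {
    "intent": "book_hotel",
    "constraints": {
        "city": "Paris",
        "max_price_usd": 400,
        "view": "Eiffel Tower",
        "dates": {"check_in": "2025-09-01", "check_out": "2025-09-04"},
    },
    "credentials": ["verified_identity", "payment_mandate"],  # from the wallet
    "loyalty_programs": ["example-hotel-rewards"],
}

# A receiving travel agent can filter offers against the constraints
# directly, with no scraping or form-filling involved.
print(context_payload["constraints"]["max_price_usd"])
```

Because the constraints travel with the request, the responding agents can negotiate within them instead of making the user repeat herself at every site.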
Unlike e-commerce, built on an internet that never had a security layer (so no digital money and no digital identity), a-commerce will be built on an infrastructure that delivers real security to market participants. Putting this secure infrastructure in place is a fantastic opportunity for fintechs and other startups that want to provide digital money and digital identity as core components. As the identification, authentication and authorisation mechanisms around MCP are standardised, there is no reason not to expect the rapid acceleration of a-commerce across the mass market.