
Latest news with #ModelContextProtocol

SGNL Launches MCP Gateway to Enable Secure AI Adoption for Enterprise Workforces

Business Wire

6 hours ago



PALO ALTO, Calif.--(BUSINESS WIRE)--AI agents are proliferating across enterprises faster than security teams can govern them, creating massive blind spots and risk. SGNL today announced that its Model Context Protocol (MCP) Gateway is live with private availability to customers. The release puts identity-first security policies in the path of every AI interaction, automatically blocking unauthorized actions while maintaining business velocity.

MCP is revolutionizing how AI agents interact with internal and external systems, enabling them to perform tasks, interact with data, and trigger workflows across the enterprise. But without robust access controls, these agents can operate unchecked, risking over-permissioned access and unintended data exposure. Because of this, enterprises have been hesitant to approve AI tools for their workforce. SGNL's MCP Gateway changes that. It brings centralized, dynamic authorization to every MCP server in the enterprise, governing access based not just on what the agent wants to do, but on who it represents, where the request is coming from, and why it is being made.

'SGNL's MCP Gateway delivers more than just a technical breakthrough,' said Stephen Ward, co-founder of Brightmind Partners, former Home Depot CISO, and ex-Secret Service cybersecurity leader. 'It's a strategic game-changer that gives enterprises the levers to align AI automation with business policy in real time, bridging the critical gap between innovation and control.'

Eliminating blind access in the age of autonomous IT

AI agents are entering enterprise workflows faster than security teams can respond. From summarizing sensitive data to triggering downstream actions, they don't inherently understand risk, yet they operate at machine speed across dynamic contexts where traditional boundaries no longer apply.
This creates a fundamental mismatch. Legacy role-based access control was designed for predictable human behavior, not autonomous systems making thousands of decisions per minute. Enterprises can't simply "IAM harder" with existing tooling, because static RBAC becomes exponentially more dangerous when applied to agents that never sleep, never second-guess themselves, and correlate data in ways humans cannot. The result is blind access at scale, where broadly privileged roles and brittle permission matrices compound risk with every agent interaction.

The SGNL MCP Gateway addresses this head-on with:

  • Real-time policy enforcement between MCP clients and servers
  • Continuous evaluation of identity, device compliance, and request context
  • A default-deny architecture that grants access only to approved services, and only when explicitly justified
  • A centralized, enterprise-wide MCP server registry with visibility into every AI agent interaction

'The Gateway isn't just a feature—it's foundational,' said Scott Kriz, CEO and co-founder of SGNL. 'With it, we're giving customers the ability to harness AI's full potential without compromising on security and control. Our customers can now confidently adopt agent-based workflows knowing that access decisions are dynamic, contextual, and enforceable at every step.'

A real-world example: stopping data loss before it happens

In a common use case, an account executive attempts to use an AI agent to summarize Salesforce data from a non-compliant laptop. Without SGNL, the agent would retrieve and expose potentially sensitive customer data. With SGNL's MCP Gateway in place, contextual policy enforcement blocks the request, ensuring that only secure, compliant actions are permitted. This is just one of countless scenarios where real-time governance makes the difference between acceleration and exposure.

See SGNL's MCP Gateway in action

Request a demo to see how SGNL's MCP Gateway governs AI agent access for enterprise workforces.
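The default-deny, context-aware checks described above can be sketched in a few lines. This is an illustrative assumption of how such a gateway might evaluate a request, not SGNL's actual API: the names `APPROVED_SERVERS`, `authorize`, and the request fields are all invented for the example.

```python
# Hypothetical sketch of default-deny, context-aware authorization for
# MCP tool calls. Every name below is illustrative, not SGNL's real API.

APPROVED_SERVERS = {"salesforce-mcp", "jira-mcp"}  # enterprise MCP registry

def authorize(request: dict) -> bool:
    """Default-deny: every check must pass explicitly or access is blocked."""
    if request.get("server") not in APPROVED_SERVERS:
        return False  # unregistered MCP server
    if not request.get("device_compliant", False):
        return False  # e.g. a non-compliant laptop
    if not request.get("justification"):
        return False  # access must be explicitly justified
    return request.get("role") in request.get("allowed_roles", ())

# The Salesforce scenario from the article: same user, same tool,
# but the first request comes from a non-compliant device.
blocked = authorize({
    "server": "salesforce-mcp",
    "role": "account_executive",
    "allowed_roles": ["account_executive"],
    "device_compliant": False,
    "justification": "Summarize Q3 pipeline",
})
allowed = authorize({
    "server": "salesforce-mcp",
    "role": "account_executive",
    "allowed_roles": ["account_executive"],
    "device_compliant": True,
    "justification": "Summarize Q3 pipeline",
})
```

The key design point is that every branch falls through to denial: a request is allowed only when server registration, device posture, justification, and role all check out.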
About SGNL

SGNL's modern Privileged Identity Management is redefining identity-first security for the enterprise. By decoupling credentials from identity and enabling real-time, context-aware access decisions, SGNL empowers organizations to reduce risk, streamline operations, and scale securely. Whether it's humans or AI agents, SGNL keeps your critical systems and sensitive data secure. That's why Fortune 500 companies are turning to SGNL to simplify their identity access programs and secure critical systems.

Model Context Protocol (MCP) Explained : The New Framework Transforming AI Capabilities

Geeky Gadgets

11 hours ago



What if the next generation of AI systems could not only understand context but also act on it in real time? Imagine a world where large language models (LLMs) seamlessly interact with external tools, dynamically adapt to new data, and execute complex tasks with precision. This is no longer a distant vision: it's the promise of the Model Context Protocol (MCP). Developed to address the limitations of traditional LLMs, MCP is a new framework that transforms these models from passive text generators into active, reasoning agents. By allowing secure, modular, and real-time integration with external systems, MCP paves the way for smarter, more versatile AI applications.

In this overview, Coding Gopher explains how MCP redefines the capabilities of LLMs by introducing a standardized approach to tool integration. From overcoming challenges like knowledge staleness and limited interactivity to enabling dynamic, multi-step operations, MCP is setting a new benchmark for AI interoperability. You'll discover the key features, modular architecture, and real-world benefits that make MCP a compelling option for industries ranging from healthcare to customer service. As we delve deeper, you might find yourself rethinking what AI can achieve when its potential is no longer confined to static knowledge.

How Large Language Models Have Evolved

The evolution of large language models has been marked by significant advancements, each addressing key limitations of its predecessors. Early models like GPT-2 and GPT-3 demonstrated remarkable capabilities in generating coherent and contextually relevant text. However, they were constrained by their reliance on static, pre-trained data, which limited their ability to adapt to real-time information or interact with external systems. These models excelled at generating text but lacked the ability to perform dynamic tasks or respond to evolving contexts.
The introduction of in-context learning represented a notable improvement, allowing models to adapt to specific prompts and improve task performance. Yet challenges such as scalability and modularity persisted, limiting their broader applicability. Retrieval-Augmented Generation (RAG) further advanced LLM capabilities by enabling dynamic retrieval of external information. However, these systems were primarily read-only, unable to execute actions or interact with external tools. This highlighted the need for a more robust framework to enable LLMs to perform dynamic, multi-step tasks effectively.

The Emergence of Tool-Augmented Agents

Tool-augmented agents emerged as a promising solution to the limitations of earlier LLMs. By allowing LLMs to execute actions through APIs, databases, and other external systems, these agents expanded the scope of what LLMs could achieve. However, this approach introduced new challenges, particularly in ensuring consistency, security, and usability. The lack of a standardized protocol for integrating tools with LLMs created barriers to scalability and interoperability, hindering widespread adoption. MCP addresses these challenges by providing a unified framework that formalizes the interaction between LLMs and external systems. This standardization ensures that tool-augmented agents can operate securely and efficiently, paving the way for broader adoption and more sophisticated applications.

What MCP Brings to the Table

MCP introduces a standardized protocol based on JSON-RPC, enabling seamless interaction between LLMs and external systems. This framework formalizes the interface between LLMs and tools, ensuring secure, scalable, and dynamic integration.
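To make the JSON-RPC basis concrete, here is a minimal sketch of the message shapes involved. The `tools/call` method name follows the MCP specification; the `echo` tool and its handler are invented for illustration.

```python
import json

# A JSON-RPC 2.0 request as an MCP client would send it to a server.
# "tools/call" is the MCP method for invoking a tool; the "echo" tool
# and its arguments are made-up examples.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {"name": "echo", "arguments": {"text": "hello"}},
}

def handle(raw: str) -> str:
    """Toy server dispatch: parse the request, run the tool, reply."""
    msg = json.loads(raw)
    if msg["method"] == "tools/call" and msg["params"]["name"] == "echo":
        result = {"content": [
            {"type": "text", "text": msg["params"]["arguments"]["text"]}
        ]}
        return json.dumps({"jsonrpc": "2.0", "id": msg["id"], "result": result})
    # Standard JSON-RPC error for an unknown method or tool.
    return json.dumps({"jsonrpc": "2.0", "id": msg["id"],
                       "error": {"code": -32601, "message": "Method not found"}})

response = json.loads(handle(json.dumps(request)))
```

Because every exchange is a plain JSON-RPC envelope like this, any client and any server that speak the protocol can interoperate regardless of which model or tool sits behind them.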
With MCP, LLMs can request and use external tools, data, and APIs in real time, overcoming the limitations of static knowledge and restricted context. The framework's modular design allows new tools to be integrated without retraining or reconfiguring the model. This flexibility ensures that MCP can adapt to evolving needs and technologies, making it a future-proof solution for AI integration.

How MCP Works: A Modular Architecture

The MCP framework is built on a modular architecture designed to facilitate seamless communication between LLMs and external systems. It consists of three key components:

  • MCP Host: Manages interactions, enforces security protocols, and routes requests between LLMs and external systems, ensuring smooth and secure communication.
  • MCP Client: Acts as a translator, converting LLM intents into structured requests and managing connections with external tools and APIs for efficient task execution.
  • MCP Server: Implements the MCP specification, exposing tools, resources, and prompts through structured JSON schemas to ensure consistency and reliability in interactions.

This modular architecture not only enhances scalability but also ensures that the system remains secure and adaptable to new tools and technologies.

Key Features of MCP

MCP introduces several features that significantly enhance the capabilities of LLMs:

  • Declarative and Self-Describing: Tools dynamically expose their capabilities, allowing LLMs to reason adaptively and perform complex tasks with greater efficiency.
  • Extensible and Modular: The framework supports the addition of new tools without retraining or reconfiguration, ensuring flexibility and scalability.
  • Support for Local and Remote Tools: MCP supports communication via standard I/O or HTTP/SSE, allowing efficient interaction with a wide range of systems and tools.

These features make MCP a versatile and powerful framework for integrating LLMs with external systems, unlocking new possibilities for AI applications.

Applications and Real-World Benefits

MCP enables a wide range of applications by allowing LLMs to perform multi-step operations such as database queries, code execution, and personalized recommendations. It addresses critical challenges that have historically limited the effectiveness of LLMs:

  • Knowledge Staleness: By integrating with real-time data sources, MCP ensures that LLMs remain current and relevant, enhancing their utility in dynamic environments.
  • Limited Context: The ability to dynamically extend context allows LLMs to process and act on larger datasets, improving their performance on complex tasks.
  • Inability to Act: MCP enables LLMs to execute actions, transforming them from passive text generators into active reasoning engines capable of real-world impact.

These capabilities make MCP a valuable tool for industries ranging from healthcare and finance to education and customer service, where real-time reasoning and action are critical.
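The "self-describing" property boils down to each tool publishing a JSON Schema for its inputs, which the model can discover at runtime and which the host can validate against. The sketch below shows the idea with an invented `query_orders` tool and a deliberately tiny validator; a real MCP server would expose such entries via `tools/list`.

```python
# One entry of the kind a "tools/list" response contains: a tool that
# describes its own inputs with a JSON Schema. The tool name and fields
# here are illustrative assumptions, not any real server's API.
tool = {
    "name": "query_orders",
    "description": "Look up orders for a customer",
    "inputSchema": {
        "type": "object",
        "properties": {
            "customer_id": {"type": "string"},
            "limit": {"type": "integer"},
        },
        "required": ["customer_id"],
    },
}

def validate(args: dict, schema: dict) -> bool:
    """Toy validator: required keys present, argument types match."""
    py_types = {"string": str, "integer": int, "object": dict}
    for key in schema.get("required", []):
        if key not in args:
            return False
    return all(isinstance(value, py_types[schema["properties"][key]["type"]])
               for key, value in args.items() if key in schema["properties"])

ok = validate({"customer_id": "c1", "limit": 5}, tool["inputSchema"])
missing = validate({"limit": 5}, tool["inputSchema"])  # no customer_id
```

Because the schema travels with the tool, a model that has never seen `query_orders` before can still construct a well-formed call, and the host can reject malformed ones before they reach the backend.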
A Universal Interface for AI Systems

MCP serves as a universal interface for connecting LLMs to external systems, much like USB-C simplifies connectivity for electronic devices. This analogy underscores its role in enhancing interoperability and simplifying integration across diverse tools and platforms. By providing a standardized framework, MCP reduces the complexity of integrating LLMs with external systems, making it easier for organizations to use the full potential of AI.

Core Design Principles of MCP

The effectiveness and adaptability of MCP are rooted in its core design principles:

  • Introspection: LLMs can dynamically discover and adapt to new tools and capabilities, ensuring they remain versatile and effective.
  • Schema-Driven Communication: Structured JSON schemas enable clear and consistent interactions, reducing the likelihood of errors and miscommunication.
  • Modular Design: The framework supports the seamless integration of new tools without disrupting existing workflows, ensuring scalability and flexibility.

These principles ensure that MCP remains a robust and reliable framework for integrating LLMs with external systems, setting a new standard for AI interoperability.

Media Credit: The Coding Gopher

Chat with Your Data: MindsDB Launches Open-Source AI Interface for Databases and Documents

Yahoo

4 days ago



MindsDB's new AI interface enables natural language conversations with enterprise datasets, unifying structured databases and unstructured knowledge with intelligent agentic orchestration.

SAN FRANCISCO, May 30, 2025 /PRNewswire/ -- MindsDB, the open-source enterprise AI platform, today announced the release of its AI chat interface in MindsDB Open Source. The platform enables users to interact with their connected databases and knowledge bases using natural language, merging semantic understanding and SQL querying in a single unified experience. Drawing inspiration from MindsDB's enterprise product line, the chat interface brings the advanced conversational capabilities of intelligent agents directly into the open-source offering, allowing developers, data scientists, and business users alike to "talk to their data" with no-code simplicity.

Solving the "Two Data Languages" Problem

For decades, enterprises have faced a dual-language challenge in data access: SQL is powerful for querying structured databases but requires technical fluency and schema knowledge, while semantic search tools handle unstructured content but often operate in isolation. MindsDB eliminates this divide with a conversational interface powered by its AI Agent technology, automatically interpreting user queries and orchestrating the right mix of SQL and semantic operations behind the scenes. For example, a user can ask: "What are the common themes in support tickets about feature X, and how does that correlate with user engagement metrics?" MindsDB intelligently splits this into:

  • A semantic query to extract themes from support tickets.
  • A parametric SQL query to retrieve structured usage data.
  • A unified response, delivered conversationally in the UI.
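The split described above is an orchestration decision: one question fans out into a semantic retrieval and a SQL query, and the results are merged. The sketch below illustrates that routing idea only; it is not MindsDB's implementation, and the keyword-based `route` function is a stand-in for the LLM-driven planner the product actually uses.

```python
# Illustrative routing sketch (NOT MindsDB's actual planner): a chat
# agent decides which sub-queries a question needs, runs each against
# the appropriate backend, and merges the results into one answer.

def route(question: str) -> list[str]:
    """Crude keyword-based plan; a real system would let an LLM decide."""
    plan = []
    if "themes" in question or "support tickets" in question:
        plan.append("semantic: search ticket embeddings for feature X themes")
    if "metrics" in question or "correlate" in question:
        plan.append("sql: SELECT AVG(engagement) FROM usage WHERE feature = 'X'")
    return plan

plan = route("What are the common themes in support tickets about feature X, "
             "and how does that correlate with user engagement metrics?")
```

The point of the sketch is the shape of the pipeline, not the heuristics: the user never chooses between "SQL mode" and "search mode" because the agent produces both sub-queries from one natural-language request.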
Breakthrough Architecture: Built for AI Agents and Human Collaboration

MindsDB is underpinned by an innovative architecture tailored for AI-native systems:

  • Model Context Protocol (MCP): Powers standardized access to data via tools exposed by MindsDB's Federated Query Engine, enabling seamless integration with AI agents and platforms.
  • Agent-to-Agent (A2A) Communication: Allows the Chat Agent to coordinate with specialized SQL and semantic agents, laying the groundwork for scalable multi-agent AI systems.
  • Knowledge Bases: Combine vector search, embedding models, and optional reranking into a coherent semantic layer, accessible via chat and programmatic APIs.
  • Text2SQL Translation: Automatically generates accurate SQL queries from natural language inputs for relational, AI, and federated databases.

A diagram in the launch blog visualizes this multi-layered architecture, from chat input through intelligent routing and query generation to synthesized output.

Key Benefits and Use Cases

The release unlocks transformative capabilities for enterprise users:

  • Democratized Access: Enables anyone in the organization to explore enterprise data and knowledge using natural language, with no data analytics expertise required.
  • Unified Insights: Combines structured and unstructured sources into a single, coherent answer.
  • Accelerated Development: Developers can rapidly prototype AI features and agentic workflows with reusable patterns.
  • Open Ecosystem Integration: Fully accessible via MCP (with A2A integration coming soon), enabling third-party agents and tools to interface with MindsDB as an intelligent backend.

Availability and Getting Started

The new Chat UI is now available in beta within the latest release of MindsDB Open Source. Detailed setup instructions are available in the MindsDB documentation.

About MindsDB

MindsDB enables humans, AI, agents, and applications to get highly accurate answers across disparate data sources and types.
Unlocking AI Search and Analytics for enterprises, MindsDB unifies petabyte-scale structured and unstructured data across diverse data sources and applications. Powered by an industry-first cognitive engine that can operate anywhere (on-prem, VPC, serverless), it empowers both humans and AI with highly informed decision-making capabilities.

Media Contact: Zubin Tavaria, media@

SOURCE MindsDB

Wix Acquires Hour One to Boost Generative AI and Drive Innovation

Yahoo

26-05-2025



Wix.com Ltd. WIX has announced the acquisition of Hour One, a pioneer in generative artificial intelligence (AI) media creation. This move further strengthens Wix's position at the forefront of AI-driven digital experiences, enhancing its capabilities in advanced web and visual design.

Hour One specializes in technology that enables scalable production of studio-quality content. Its platform allows users to create personalized videos and interactive media, combining storytelling with real-time engagement. Its proprietary cloud infrastructure integrates generative AI inference with advanced 3D rendering, making it a valuable asset that positions Wix at the forefront of scalable, high-impact content creation.

Wix highlighted that this acquisition significantly expands its AI expertise and aligns with its vision of making web creation faster, smarter and more immersive. Bringing this technology in-house allows the company to maintain greater control over front-end innovations, reduce third-party dependencies and manage costs more effectively.

WIX is focusing on generative AI, which represents a significant business growth driver. It is also embedding AI assistants across its platform and continues to add new products to capitalize on the AI boom. New AI business assistants are improving operational efficiency and customer conversion rates. The recent launch of Wixel is a primary step in this direction. Wixel, introduced in May 2025, is a new standalone AI-driven visual design platform designed to make professional-grade design accessible to everyone. Combining advanced AI with a user-friendly interface and robust features, Wixel empowers users to easily transform ideas into high-quality visual content with minimal effort.

Before that, Wix launched the Model Context Protocol (MCP) Server, allowing developers and business owners to create production-ready Wix solutions through AI coding assistants and large language models.
The MCP Server enables seamless code generation and business management using natural language via tools like Claude, Cursor and Windsurf. It provides access to key Wix functionalities such as inventory management, staff scheduling, secure checkouts, ticketing, a flexible CMS and CRM features for lead capture and back-office operations.

In April 2025, Wix introduced an AI-powered adaptive content application. This tool is designed to dynamically personalize website content, tailoring it to individual visitor characteristics and user-defined instructions, thereby boosting engagement and enhancing the user experience.

Also in April 2025, Wix unveiled Astro, an AI-powered business assistant designed to enhance user productivity and streamline business operations. Astro allows users to interact through a chat-based interface to perform a wide range of business and back-office tasks effortlessly, and represents the first in a series of AI-driven agents Wix plans to introduce, setting the foundation for increased efficiency, better monetization and business growth for users. Wix is focusing on AI to reduce user friction, enhance design quality and speed up time-to-market for customers.

Recently, WIX reported first-quarter 2025 results, wherein non-GAAP earnings per share (EPS) came to $1.55 compared with $1.29 in the year-ago quarter. Quarterly revenues increased 13% year over year to $473.7 million. The top line exceeded management's guidance ($469-$473 million), driven by strong performance of the new user cohort and continued healthy engagement from existing users.

Wix continues to expect revenues to grow 12-14%, in the range of $1.97-$2 billion, for 2025. Management reiterated non-GAAP total gross margin at 70% and expects non-GAAP operating expenses to be 47-48% of 2025 net sales. WIX currently carries a Zacks Rank #3 (Hold).
Shares of the company have declined 11% in the past year against the Computers - IT Services industry's growth of 5.2%.

Some better-ranked stocks from the broader technology space are Unisys Corporation UIS, Stem, Inc. STEM and Cognizant Technology Solutions Corporation CTSH. UIS sports a Zacks Rank #1 (Strong Buy), while STEM and CTSH carry a Zacks Rank #2 (Buy).

Unisys' earnings beat the Zacks Consensus Estimate in three of the trailing four quarters while missing in one, with the average surprise being 46.85%. In the last reported quarter, UIS delivered an earnings surprise of 79.17%. The company's long-term earnings growth rate is 15%. Its shares have increased 6.8% in the past year.

Stem's earnings beat the Zacks Consensus Estimate in three of the trailing four quarters while missing in one, with the average surprise being 12.34%. In the last reported quarter, STEM delivered an earnings surprise of 25%. Its shares have surged 8.3% in the past six months.

Cognizant's earnings beat the Zacks Consensus Estimate in each of the trailing four quarters, with the average surprise being 6.38%. In the last reported quarter, CTSH delivered an earnings surprise of 3.36%. Its shares have increased 15.8% in the past year.

This article originally published on Zacks Investment Research.

PayU rolls out MCP server to integrate AI assistants, payment systems

Business Standard

26-05-2025



Prosus-backed fintech PayU has joined other peers in the category to roll out a Model Context Protocol (MCP) server, through which its merchants can connect their artificial intelligence (AI) assistants to the company's payment systems. Razorpay and Cashfree Payments rolled out the offering for their customers earlier this month.

An MCP server acts as a bridge for communication between AI assistants — such as Claude or VS Code — and the company's payment gateway. Think of it like a 'universal connector', in other words, like a USB-C port, that allows different software systems to work together easily. This enables AI agents and assistants to interface directly with core application programming interfaces (APIs), streamlining integration for merchants of all sizes.

PayU said early use cases include generating payment links, emailing invoices to customers, fetching payment status using invoice IDs, and checking transaction or refund statuses. The company said it was expanding the MCP server's capabilities to include the ability to create invoices and orders through AI assistants and to onboard customers efficiently.

'The future of payments lies in intelligent automation and seamless integrations with advanced tools including GenAI platforms. With our latest MCP server, we are taking a leap forward in payments technology, enabling businesses to make their financial operations simple, more efficient, all while ensuring the highest standards of security,' said Narendra Babu, Chief Technology Officer at PayU.
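To illustrate the kind of capability such a server exposes, here is a sketch of a "create payment link" tool handler. PayU's real tool names, parameters, and URLs are not documented in this article, so everything below — the function name, fields, and `pay.example.com` domain — is an assumption for illustration only.

```python
import uuid

# Hypothetical stand-in for one tool an MCP payment server might expose.
# Field names and the URL scheme are invented; they are NOT PayU's API.
def create_payment_link(amount_inr: float, customer_email: str) -> dict:
    """Mint a shareable payment link and record whom it was emailed to."""
    link_id = uuid.uuid4().hex[:8]  # short random identifier
    return {
        "link_id": link_id,
        "url": f"https://pay.example.com/{link_id}",
        "amount_inr": amount_inr,
        "emailed_to": customer_email,
    }

# An AI assistant asked to "send a 2,499 rupee payment link to the buyer"
# would resolve to a tool call roughly like this:
link = create_payment_link(2499.0, "buyer@example.com")
```

Wrapping gateway operations as MCP tools like this is what lets an assistant translate a merchant's natural-language request into a concrete, auditable API call.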
