
SnapLogic unveils next-gen API management with advanced tools
SnapLogic has introduced a next-generation API management solution, designed to support enterprises seeking to enhance composability and adopt agentic architectures.
The company announced that the new API management (APIM) solution brings a rearchitected and redesigned approach intended to help organisations build, manage, and scale API ecosystems with greater efficiency. SnapLogic states the solution is user-friendly and integrates advanced security features alongside a customisable API marketplace.
SnapLogic has been offering APIM functionality as part of its integration platform since 2019. This refreshed offering marks what the company calls a new phase of API-led composability, supporting seamless integration of both internal and third-party APIs, improved governance, and streamlined API consumption.
The company's new APIM includes a Policy Studio, which offers a visual, no-code or low-code interface for administrators and developers to set, enforce, and manage API policies. This is designed to reduce operational overhead while improving governance and compliance by promoting consistency and making security policies easily applicable and reusable across assets.
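SnapLogic has not published Policy Studio's underlying policy format in this announcement, so the following Python sketch only illustrates the general pattern being described: a declarative policy defined once and enforced reusably across many APIs. The ApiPolicy and RateLimiter names and the X-API-Key header are illustrative assumptions, not SnapLogic APIs.

```python
from dataclasses import dataclass, field
import time

@dataclass
class ApiPolicy:
    """A hypothetical declarative policy, defined once and reused across assets."""
    name: str
    requests_per_minute: int
    require_api_key: bool = True

@dataclass
class RateLimiter:
    """Enforces one ApiPolicy against incoming request headers."""
    policy: ApiPolicy
    window_start: float = field(default_factory=time.monotonic)
    count: int = 0

    def allow(self, headers: dict) -> tuple[bool, str]:
        # Enforce the API-key rule first.
        if self.policy.require_api_key and "X-API-Key" not in headers:
            return False, "missing API key"
        # Reset the one-minute window when it expires.
        now = time.monotonic()
        if now - self.window_start >= 60:
            self.window_start, self.count = now, 0
        self.count += 1
        if self.count > self.policy.requests_per_minute:
            return False, "rate limit exceeded"
        return True, "ok"

# The same policy object can back many APIs, which is the consistency
# and reuse benefit the article attributes to Policy Studio.
standard = ApiPolicy(name="standard-tier", requests_per_minute=120)
limiter = RateLimiter(policy=standard)
print(limiter.allow({"X-API-Key": "demo-key"}))  # (True, 'ok')
```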
Additionally, SnapLogic has introduced a Developer Hub and Marketplace for API providers and consumers. The company describes this as a customisable portal enabling the discovery, testing, and integration of APIs, with the intention of boosting adoption and collaboration within enterprise environments.
SnapLogic also claims the new APIM lowers the technical barrier for API providers, enabling those without advanced expertise to design and deploy services through intuitive workflows and built-in validation tools. This is aimed at making onboarding easier and reducing friction for users developing APIs.
The solution consolidates data and application integration, API management, and agent creation into a single platform. SnapLogic asserts this integration allows organisations to eliminate several disparate tools, simplifying technology stacks and lowering the cost of ownership.
Support has also been extended beyond traditional REST APIs to include newer protocols such as the Model Context Protocol (MCP), which the company says helps generative AI agents connect with applications and data stores for deeper operational context.
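For readers unfamiliar with MCP: it is a JSON-RPC 2.0-based protocol in which an agent discovers a server's tools ("tools/list") and invokes them with structured arguments ("tools/call"). The minimal Python sketch below shows the shape of such a request; the tool name and arguments are hypothetical, as SnapLogic has not detailed its MCP tool catalogue here.

```python
import json

def mcp_tool_call(request_id: int, tool: str, arguments: dict) -> str:
    """Build a JSON-RPC 2.0 'tools/call' request, the message shape MCP defines."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })

# An agent asking a (hypothetical) SnapLogic-exposed tool for order data:
print(mcp_tool_call(1, "lookup_order", {"order_id": "A-1001"}))
```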
Jeremiah Stone, Chief Technology Officer at SnapLogic, commented, "In the era of AI abundance, APIs are no longer just digital doorways—they are programmable enablers of intelligent agents and autonomous workflows. In line with this, in today's dynamic business environment, enterprises need an APIM solution that simplifies implementation, fosters flexibility, and enhances security."
"Our reimagined APIM solution makes it easier than ever for organisations to build, manage, and consume APIs, ensuring that they can quickly adapt to market changes, increase automation, and unlock new opportunities while on their composable and agentic journey."
The early response from clients includes feedback from Ian Petherbridge, IT Manager at Aurecon, a design and advisory company based in Asia Pacific: "With SnapLogic's API Management, we can accelerate our Program/Project Delivery Tool (PDT) initiative, ensuring full end-to-end visibility, insights, and management of our large-scale infrastructure projects."
"Having consolidated our applications and integrations on SnapLogic, we now have the agility to rapidly compose and reuse API-driven services, enhancing collaboration between Workday, Microsoft Dynamics, and industry-specific tools. The new APIM capabilities will streamline governance, improve security through reusable policy enforcement, and drive API adoption with a more intuitive Developer Hub, empowering Aurecon to scale efficiently, optimise operations, and respond swiftly to new business opportunities," Petherbridge stated.
The company emphasises that its APIM solution helps enterprises adapt to market dynamics, enter new customer segments, and pursue API monetisation strategies. SnapLogic claims it also provides a stronger API-first foundation by bolstering security and governance and by making API discoverability and reuse more accessible. Lifecycle management and integration scalability are presented as key advantages of the new offering.

Related Articles


Techday NZ - 4 days ago
Vasion embeds SnapLogic platform to boost enterprise automation
SnapLogic has been selected by Vasion as its OEM integration partner to provide enterprise customers with enhanced workflow automation capabilities. Vasion has chosen to embed SnapLogic's Intelligent Integration Platform within the Vasion Automation Platform, with the intention of allowing users to more easily and securely unify systems and automate processes across multiple business functions.

The partnership builds on a relationship established in late 2024, when SnapLogic was named Vasion's official integration platform-as-a-service provider. With this expanded agreement, SnapLogic's integration technologies are incorporated directly into Vasion's offering, providing customers with connectivity across thousands of legacy and modern endpoints via a low-code interface.

Corey Ercanbrack, Chief Product and Technology Officer at Vasion, said: "At Vasion, we're committed to removing complexity from workflow automation. By embedding SnapLogic's integration capabilities into the Vasion Automation Platform, we're giving our customers a seamless, scalable way to unify their systems and accelerate outcomes - without the burden of heavy development. This partnership is a catalyst for the kind of intelligent automation today's organisations need to move faster and innovate with confidence."

The companies stated that the integration aims to reduce the amount of coding and technical knowledge required to connect disparate systems, enabling teams to digitise content, streamline operations, and increase operational efficiency from a single platform.

Bo Luongo, Global VP Sales and OEM/Embedded at SnapLogic, said: "Integration should never be a barrier to innovation. By embedding our generative integration technology into Vasion, we're enabling more organisations to unlock automation faster and with greater confidence. This OEM partnership demonstrates the power of SnapLogic's platform to deliver embedded integration experiences that align with our partners' visions."

SnapLogic's contribution includes its pre-built connectors and AI-powered guidance, designed to help both end-user organisations and software providers reduce integration complexity and shorten time to value. The low-code approach within the Vasion Automation Platform enables connections with a variety of applications and data sources, from traditional legacy systems and on-premise software to modern cloud-based platforms.

This development is positioned as a response to increasing demand for embedded integration platform-as-a-service capabilities across the enterprise software sector, as organisations seek to automate business operations more effectively while minimising manual intervention and technical bottlenecks.

The partnership with Vasion marks an additional milestone for SnapLogic's partner ecosystem, highlighting the company's strategy of embedding its integration technologies into other platforms for wider enterprise adoption.

Vasion, whose platform aims to eliminate print servers and digitise business processes, focuses on providing AI-powered automation solutions for IT infrastructure modernisation, document workflow automation, and compliance management. The integration with SnapLogic is expected to further support Vasion's customers in achieving centralised control and operational efficiency across diverse business environments.


Techday NZ - 29-05-2025
Kurrent unveils open-source MCP Server for AI-driven databases
Kurrent has released its open-source MCP Server for KurrentDB, enabling developers to interact with data in the KurrentDB database using natural language and AI agents rather than traditional coding methods.

The Kurrent MCP Server offers new functionalities, allowing developers not only to query data but also to create, test, and debug projections directly through conversational commands. This feature is not available in other MCP server implementations, establishing a novel approach to database interaction by integrating AI-driven workflows into the database layer.

Central to this release is the introduction of a self-correcting engine, which assists in automatically identifying and fixing logic errors during the prototyping phase. This reduces the need for manual debugging loops, streamlining the development process significantly for users building or modifying projections.

The software is fully open-source and released under the MIT license, with documentation and a development roadmap available on GitHub. This permits both enterprise users and open-source contributors to adopt, customise, and improve the KurrentDB MCP Server without licensing restrictions.

Kurrent MCP Server supports natural language prompts for tasks such as reading streams, listing streams within the database, building and updating projections, writing events to streams, and retrieving projection status for debugging. These capabilities aim to make the visual and analytical exploration of data more accessible and conversational for users with varying levels of technical expertise.

The MCP Server is compatible with a broad range of frontier AI models, such as Claude, GPT-4, and Gemini. It can be integrated with popular IDEs and agent frameworks, including Cursor and Windsurf. This compatibility enables developers to leverage their preferred tools while reducing friction points typically associated with traditional database interactions.

Addressing the new approach, Kirk Dunn, CEO of Kurrent, said, "Our new MCP Server makes it possible to use the main features of the KurrentDB database, like reading and writing events to streams and using projections, in a way that's as simple as having a conversation. The system's ability to test and fix itself reduces the need for debugging and increases reliability. Copilots and AI assistants become productive database partners rather than just code generators, seamlessly interfacing with KurrentDB."

The server's key functions are designed to reduce development times for database tasks, enabling a focus on higher-value project work. Eight core capabilities are available: Read_stream, List_streams, Build_projection, Create_projection, Update_projection, Test_projection, Write_events_to_stream, and Get_projections_status. Each of these responds directly to natural language instructions provided by the developer or AI agent.

Kurrent has highlighted opportunities for the open-source community to participate in the MCP Server's ongoing development. Developers can contribute code, report or tackle issues, and suggest new features through the project's GitHub repository and discussion forums. Comprehensive educational resources and installation guides are intended to help developers quickly integrate the MCP Server with KurrentDB for various use cases.

Lokhesh Ujhoodha, Lead Architect at Kurrent, commented, "Before, database interactions required developers to master complex query languages, understand intricate data structures, and spend significant time debugging projections and data flows. Now, everything agentic can interface with KurrentDB through this MCP Server. We're not just connecting to today's AI tools, but we're positioning for a future where AI agents autonomously manage data workflows, make analytical decisions and create business insights with minimal human intervention."

Kurrent emphasises that its MCP Server aims to remove barriers historically associated with database development by supporting conversational, agent-driven workflows. This aligns with broader trends towards AI-native infrastructure in enterprise environments, where human and algorithmic agents increasingly collaborate to deliver data-driven business outcomes.
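The article names Kurrent's eight tools but not their parameter schemas, so the following Python sketch is only an illustration of how an agent might drive them over MCP's JSON-RPC "tools/call" method; the argument names ("stream", "limit") are invented for illustration and are not documented Kurrent parameters.

```python
import json

def tool_call(request_id: int, name: str, arguments: dict) -> dict:
    """A JSON-RPC 2.0 'tools/call' request, the shape MCP servers accept."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    }

# A conversational prompt like "show me the last 10 events in the orders
# stream" would be translated by the agent into calls of this shape,
# using the tool names the article lists:
requests = [
    tool_call(1, "List_streams", {}),
    tool_call(2, "Read_stream", {"stream": "orders", "limit": 10}),
    tool_call(3, "Get_projections_status", {}),
]
for r in requests:
    print(json.dumps(r))
```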


Techday NZ - 21-05-2025
Red Hat unveils enhanced AI tools for hybrid cloud deployments
Red Hat has expanded its AI portfolio, introducing Red Hat AI Inference Server along with validated models and new API integrations, aimed at enabling more efficient enterprise AI deployments across diverse environments.

Red Hat AI Inference Server, now included in the Red Hat AI suite, provides scalable, consistent, and cost-effective inference for hybrid cloud setups. This server is integrated into the newest releases of both Red Hat OpenShift AI and Red Hat Enterprise Linux AI, while also being available as a standalone product. The offering is designed to optimise performance, flexibility, and resource usage for organisations deploying AI-driven applications.

To address the challenge many enterprises face in model selection and deployment, Red Hat has announced availability of third-party validated AI models, accessible on Hugging Face. These models are tested to ensure optimal performance on the Red Hat AI platform. Red Hat also offers deployment guidance to assist customers, with select models benefiting from model compression techniques to reduce their size and increase inference speed. This approach is intended to minimise computational resources and operating costs, while the validation process helps customers remain current with the latest in generative AI innovation.

The company has begun integrating the Llama Stack, developed by Meta, alongside Anthropic's Model Context Protocol (MCP), offering standardised APIs for building and deploying AI applications and agents. Currently available in developer preview in Red Hat AI, Llama Stack delivers a unified API that includes support for inference with vLLM, retrieval-augmented generation, model evaluation, guardrails, and agent functionality. MCP, meanwhile, enables AI models to connect with external tools using a standardised interface, facilitating API and plugin integrations during agent workflows.

The new version of Red Hat OpenShift AI (v2.20) introduces enhancements that support the development, training, deployment, and monitoring of both generative and predictive AI models at scale. A technology-preview model catalogue offers access to validated Red Hat and third-party models, while distributed training capabilities via the KubeFlow Training Operator enable efficient scheduling and execution of AI model tuning across multiple nodes and GPUs. This includes support for remote direct memory access (RDMA) networking and optimised GPU utilisation, reducing operational costs. A feature store based on the Kubeflow Feast project is also available in technology preview, providing a central repository for managing and serving data, intended to improve the accuracy and reusability of models.

Red Hat Enterprise Linux AI 1.5 introduces updates that extend the platform's reach and its multi-language support. The system is now available on Google Cloud Marketplace, which expands customer options for running AI workloads in public cloud platforms alongside AWS and Azure. Enhanced language capabilities for Spanish, German, French, and Italian have been added through InstructLab, enabling model customisation in these languages. Customers are also able to bring their own teacher models for detailed tuning, with support for Japanese, Hindi, and Korean planned for the future. Additionally, the Red Hat AI InstructLab on IBM Cloud service is now generally available, aimed at simplifying model customisation and improving scalability for customers wishing to use unique data sets for AI development.

Red Hat states its long-term aim is to provide a universal inference platform that allows organisations to deploy any AI model on any accelerator and across any cloud provider. The company's approach seeks to help enterprises avoid infrastructure silos and better realise the value of their investments in generative AI.

Joe Fernandes, Vice President and General Manager of the AI Business Unit at Red Hat, said, "Faster, more efficient inference is emerging as the newest decision point for gen AI innovation. Red Hat AI, with enhanced inference capabilities through Red Hat AI Inference Server and a new collection of validated third-party models, helps equip organisations to deploy intelligent applications where they need to, how they need to and with the components that best meet their unique needs."

Michele Rosen, Research Manager at IDC, commented on shifting enterprise AI needs: "Organisations are moving beyond initial AI explorations and are focused on practical deployments. The key to their continued success lies in the ability to be adaptable with their AI strategies to fit various environments and needs. The future of AI not only demands powerful models, but models that can be deployed with agility and cost-effectiveness. Enterprises seeking to scale their AI initiatives and deliver business value will find this flexibility absolutely essential."

Red Hat's recent portfolio enhancements are in line with the views outlined by Forrester, which stated open source software will be instrumental in accelerating enterprise AI programmes.
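As a rough illustration of what deploying a model behind such an inference server looks like in practice: assuming, as Red Hat's announcement materials describe, that the server is built on the vLLM project, it exposes vLLM's OpenAI-compatible HTTP API, so a deployed model can be queried with a plain HTTP request. The host, port, and model name below are placeholders, not values from the article.

```python
import json
import urllib.request

def chat(base_url: str, model: str, prompt: str) -> str:
    """Send one chat completion to an OpenAI-compatible inference endpoint."""
    payload = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    req = urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # The OpenAI-compatible response nests the text under choices[0].message.
    return body["choices"][0]["message"]["content"]

# Example against a hypothetical in-cluster deployment:
# print(chat("http://inference.example.svc:8000", "granite-3.1-8b-instruct", "Hello"))
```

Because the same API shape works whether the server runs on-premises, in OpenShift, or on a public cloud, client code like this does not change as the deployment moves, which is the portability argument behind a universal inference platform.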