Veeam enables AI-powered business insights from backup data

Techday NZ | 24-04-2025

Veeam Software has announced that its customers can now use backup data with AI-powered solutions through an integration with Anthropic's Model Context Protocol (MCP) open standard.
This development allows Veeam customers to search for documents using natural language, generate summaries from archived emails and tickets, automate compliance and e-discovery, and enrich AI agents and copilots with enterprise-specific information, all drawing on their own backup data stores.
The integration with MCP is intended to securely connect Veeam's backup repositories with a range of AI applications, turning previously passive backup data into an active source of business intelligence.
Niraj Tolia, CTO at Veeam, commented: "We're not just backing up data anymore - we're opening it up for intelligence. By supporting the Model Context Protocol, customers can now safely connect Veeam-protected data to the AI tools of their choice. Whether it's internal copilots, vector databases, or LLMs, Veeam ensures data is AI-ready, portable, and protected."
The MCP integration supports several AI-powered functions, such as discovering and retrieving related documents with natural language queries, summarising archived communications, automating compliance monitoring, and giving AI systems richer, enterprise-specific context.
Veeam's stated aim is to change how organisations view their stored data, positioning backup repositories as strategic assets capable of delivering real-time insights.
As part of its AI roadmap, Veeam is basing its approach to artificial intelligence on five pillars: AI infrastructure resilience, data intelligence, data security, admin assist, and data resilience operations.
Through AI Infrastructure Resilience, Veeam intends to safeguard investments in AI infrastructure by ensuring applications, data, vector databases, and models are secured to the same standard as other critical business data.
The Data Intelligence pillar focuses on delivering value from backup data by enabling its use in AI applications, both from Veeam's own offerings and through customer- or partner-built solutions.
Data Security leverages machine learning techniques in Veeam's malware, ransomware, and threat detection capabilities to bolster overall cyber defence.
Admin Assist provides AI-driven support and guidance for backup administrators, aiming to improve efficiency and decision-making in data management.
Finally, Data Resilience Operations covers the use of AI to inform backups, restores, policy creation, and risk-based sensitive data analysis.
The Model Context Protocol itself functions as an open standard for connecting AI agents to organisational systems and data repositories. Its implementation in Veeam positions the platform as a bridge between protected enterprise data and an expanding array of AI tools, from Anthropic's Claude to custom large language models built by customers.
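To illustrate the shape of such a bridge, the sketch below shows a toy MCP server that exposes a document-search tool to AI agents. It is a minimal sketch, assuming the official MCP Python SDK's FastMCP helper; the tool name, the in-memory index, and the sample documents are hypothetical stand-ins for whatever Veeam-protected repository a real integration would query.

```python
# Toy MCP server exposing backup-derived documents to AI agents.
# Assumes the official MCP Python SDK (pip install mcp); the tool and the
# in-memory "index" are hypothetical stand-ins for a real backup repository.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("backup-search")

# Hypothetical stand-in for an index built from protected backup data.
BACKUP_INDEX = {
    "q3-dr-plan.docx": "Disaster recovery runbook for the Q3 failover exercise ...",
    "vendor-contract-2024.pdf": "Master services agreement with the hosting vendor ...",
}

@mcp.tool()
def search_backups(query: str, limit: int = 5) -> list[dict]:
    """Return documents from the backup index whose name or text mentions the query."""
    hits = [
        {"name": name, "snippet": text[:120]}
        for name, text in BACKUP_INDEX.items()
        if query.lower() in text.lower() or query.lower() in name.lower()
    ]
    return hits[:limit]

if __name__ == "__main__":
    # Serve over stdio so MCP-aware clients (desktop copilots, IDE agents, etc.)
    # can list the tool and call it with arguments derived from natural language.
    mcp.run()
```

In this pattern the AI application never touches the repository directly; it only sees the tools the server chooses to expose, which is what allows protected data to be shared on controlled terms.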
Among the cited benefits of MCP-enabled Veeam integration are enhanced data accessibility for AI agents through context-aware search, improved decision-making through the application of Veeam-protected data, and simplified integration that eliminates the need for custom connectivity work.
Support for the Model Context Protocol will become available in future releases of Veeam Data Cloud, broadening customers' ability to use backup data in various enterprise AI contexts.

Related Articles

Expert warns of 'AI wrappers' masquerading as SaaS products

Techday NZ | 5 days ago

A growing concern among tech-savvy users and developers is the rise of HTML prompt wrappers - simple web interfaces layered over public large language models (LLMs) - being marketed and priced as fully-fledged SaaS products. Wrappers often mimic polished SaaS UX, tricking teams into thinking they are buying secure, scalable, enterprise-grade tools. The stark reality is that wrapping some HTML around an AI API isn't a SaaS product; it's a pair of AI handcuffs.

HTML Prompt Wrapper Explained

At its core, a prompt wrapper is a basic HTML (or low-code) interface that collects user input, sends it to an underlying API (like OpenAI, Anthropic, or Cohere), and returns the response - often with little to no added value in between. These wrappers simply use the open APIs of LLM providers, are often built in a few hours using standard web frameworks, and do not add proprietary models or deep integration. The final product often lacks original functionality, security architecture, or genuine backend services. Yet they're sometimes marketed as sophisticated AI tools with monthly subscription fees rivalling those of actual SaaS platforms with meaningful IP and infrastructure. In fact, some charge $20-$100+ per month for features that are easily replicable with a free ChatGPT account and a template prompt.

Worryingly, many wrappers don't disclose where or how your data is being processed or stored. And unlike reputable SaaS platforms, these tools may lack GDPR or ISO compliance, especially if built by solo founders or hobbyists.

Red flags to look out for:

• No clear documentation
• No security/compliance details
• Fancy UI, no visible backend
• Vague claims of "proprietary AI"

Building a real product means solving an actual consumer need

A real product demands brilliant UX and UI. More importantly, it must provide a superior experience compared to simply prompting ChatGPT, Grok, or another LLM - especially with configurable GPTs and AI agents on the rise. If you already have a truly useful product, service, or process that real teams or customers rely on, AI can be a powerful tool to automate and enhance the experience. However, if your idea for a "product" is just an HTML gateway to AI, pause and take the following steps before moving forward:

1. Take your idea and prompt your preferred LLM to see how quickly and easily you can generate the desired output.
2. Map out your proposed user flow, even on paper, and count the steps you're adding.
3. Ask yourself: have I actually made my imagined customer's life easier?

If the answer to step 3 is no, you have a fundamental problem that no amount of spin or PR will fix.

All that glitters is not gold

In an era where AI is both accessible and powerful, it's easy to be dazzled by shiny interfaces. But as a business owner, marketer, or tech buyer, you need to scratch the sparkly surface to determine if what you're buying is the 'real deal'. It's also important to note that not all wrappers are bad. Some startups begin as wrappers but evolve into full-stack platforms by building layered proprietary logic, training domain-specific models, and offering native integrations with CRMs, ERPs, and databases. The key is transparency and real value - the tool should save you time, secure your data, and offer consistent utility.
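To make the "prompt wrapper" pattern described above concrete, here is a minimal sketch of what many such products amount to: a thin function that forwards user input to a hosted LLM API and returns the reply. It assumes Anthropic's official Python SDK and an illustrative model name; a typical wrapper adds little more than an HTML form around a call like this.

```python
# A minimal "prompt wrapper": forward user input to a hosted LLM API and return
# the text. Assumes Anthropic's official `anthropic` Python SDK and an
# ANTHROPIC_API_KEY in the environment; the model name is illustrative only.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def wrapper(user_input: str) -> str:
    """Roughly everything a thin 'AI SaaS' wrapper actually does."""
    response = client.messages.create(
        model="claude-3-5-sonnet-latest",  # illustrative model name
        max_tokens=512,
        messages=[{"role": "user", "content": user_input}],
    )
    # Concatenate the text blocks of the reply.
    return "".join(block.text for block in response.content if block.type == "text")

if __name__ == "__main__":
    print(wrapper("Summarise the risks of buying thin AI wrappers as SaaS."))
```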

Kurrent unveils open-source MCP Server for AI-driven databases

Techday NZ | 29-05-2025

Kurrent has released its open-source MCP Server for KurrentDB, enabling developers to interact with data in the KurrentDB database using natural language and AI agents rather than traditional coding methods.

The Kurrent MCP Server offers new functionalities, allowing developers not only to query data but also to create, test, and debug projections directly through conversational commands. This feature is not available in other MCP server implementations, establishing a novel approach to database interaction by integrating AI-driven workflows into the database layer.

Central to this release is the introduction of a self-correcting engine, which assists in automatically identifying and fixing logic errors during the prototyping phase. This reduces the need for manual debugging loops, streamlining the development process significantly for users building or modifying projections.

The software is fully open-source and released under the MIT license, with documentation and a development roadmap available on GitHub. This permits both enterprise users and open-source contributors to adopt, customise, and improve the KurrentDB MCP Server without licensing restrictions.

Kurrent MCP Server supports natural language prompts for tasks such as reading streams, listing streams within the database, building and updating projections, writing events to streams, and retrieving projection status for debugging. These capabilities aim to make the visual and analytical exploration of data more accessible and conversational for users with varying levels of technical expertise.

The MCP Server is compatible with a broad range of frontier AI models, such as Claude, GPT-4, and Gemini. It can be integrated with popular IDEs and agent frameworks, including Cursor and Windsurf. This compatibility enables developers to leverage their preferred tools while reducing friction points typically associated with traditional database interactions.

Addressing the new approach, Kirk Dunn, CEO of Kurrent, said, "Our new MCP Server makes it possible to use the main features of the KurrentDB database, like reading and writing events to streams and using projections, in a way that's as simple as having a conversation. The system's ability to test and fix itself reduces the need for debugging and increases reliability. Copilots and AI assistants become productive database partners rather than just code generators, seamlessly interfacing with KurrentDB."

The server's key functions are designed to reduce development times for database tasks, enabling a focus on higher-value project work. Eight core capabilities are available: Read_stream, List_streams, Build_projection, Create_projection, Update_projection, Test_projection, Write_events_to_stream, and Get_projections_status. Each of these responds directly to natural language instructions provided by the developer or AI agent.

Kurrent has highlighted opportunities for the open source community to participate in the MCP Server's ongoing development. Developers can contribute code, report or tackle issues, and suggest new features through the project's GitHub repository and discussion forums. Comprehensive educational resources and installation guides are intended to help developers quickly integrate the MCP Server with KurrentDB for various use cases.

Lokhesh Ujhoodha, Lead Architect at Kurrent, commented, "Before, database interactions required developers to master complex query languages, understand intricate data structures, and spend significant time debugging projections and data flows. Now, everything agentic can interface with KurrentDB through this MCP Server. We're not just connecting to today's AI tools, but we're positioning for a future where AI agents autonomously manage data workflows, make analytical decisions and create business insights with minimal human intervention."

Kurrent emphasises that its MCP Server aims to remove barriers historically associated with database development by supporting conversational, agent-driven workflows. This aligns with broader trends towards AI-native infrastructure in enterprise environments, where human and algorithmic agents increasingly collaborate to deliver data-driven business outcomes.
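As an illustration of how such a conversational workflow plugs in, the hedged sketch below uses the MCP Python SDK's stdio client to list a server's tools and call the List_streams and Read_stream capabilities named above. The launch command, its arguments, and the stream name are hypothetical; the Kurrent project's GitHub documentation is the authority on the real invocation.

```python
# Hedged sketch: driving an MCP server's tools from Python over stdio.
# Assumes the official MCP Python SDK; the command used to launch the Kurrent
# MCP Server and the tool arguments below are hypothetical placeholders.
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

SERVER = StdioServerParameters(
    command="kurrent-mcp-server",  # hypothetical launch command
    args=["--connection", "esdb://localhost:2113?tls=false"],  # hypothetical
)

async def main() -> None:
    async with stdio_client(SERVER) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # Discover the capabilities the article lists
            # (Read_stream, List_streams, Build_projection, ...).
            tools = await session.list_tools()
            print([tool.name for tool in tools.tools])

            # Call two of them with structured arguments; in practice an AI
            # agent would derive these from a natural-language prompt.
            streams = await session.call_tool("List_streams", {})
            events = await session.call_tool(
                "Read_stream", {"stream": "orders-2025"}  # hypothetical argument
            )
            print(streams, events)

asyncio.run(main())
```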

Report finds low ECH use but risks from malicious actors grow

Techday NZ | 22-05-2025

Corrata has published a report examining the impact of the Encrypted Client Hello (ECH) privacy protocol on enterprise security and the adoption of the protocol by malicious actors. The Living With ECH Report analysed billions of connections made by enterprise employee mobile devices over a three-month period to assess the practical deployment and security implications of the latest privacy technology in internet communications.

ECH, an extension to the most recent version of the Transport Layer Security (TLS 1.3) standard, encrypts information exchanged between devices and Content Delivery Networks, preventing network providers from being able to identify which websites users are trying to access.

According to Corrata's findings, actual usage of ECH by enterprise mobile devices remains infrequent, with less than 0.01% of TLS connections employing the protocol. Nonetheless, more than 9% of the top one million domains are ECH-enabled, demonstrating some groundwork for future adoption.

The report identified a notable risk associated with ECH adoption. Corrata's analysis revealed that 17% of ECH-enabled sites are classified as risky, indicating that malicious actors are already making use of the increased anonymity provided by the protocol. The risk is particularly acute for Chrome users who have encrypted DNS enabled.

Corrata stated, "ECH could degrade, not improve, privacy: Banks and other regulated entities are often required to monitor the internet traffic going into and out of their organisation. To date, these enterprises have been able to selectively decrypt traffic without looking at sensitive data like employees' health records. But with ECH blocking their filtering, enterprises would have little choice but to decrypt all internet traffic for inspection, drastically degrading employees' privacy."

The analysis highlighted the significant role played by Cloudflare in enabling ECH. Cloudflare is the only major Content Delivery Network supporting ECH, and almost all of the sites that have ECH enabled use its infrastructure. The report also noted that large website owners appear reluctant to adopt the protocol due to concerns that users may face blocks from security systems in enterprises or by public authorities.

While internet service providers and enterprise security teams have reduced visibility under ECH, the protocol still allows CDNs like Cloudflare to access certain data. Malicious actors are leveraging this infrastructure to support phishing attacks, Corrata said. "Over 90% of phishing detections use Cloudflare infrastructure, according to Corrata's analysis. In addition to the anonymity provided by ECH, these sites take advantage of other Cloudflare features. For example, the 'captcha' page can be used to direct desktop traffic to the legitimate site while mobile traffic is sent to the fake one. Alternatively, traffic not coming from the targeted country may be redirected to the legitimate site. These are deliberate tactics to avoid detection by security providers."

The report also identified several barriers to widespread adoption of ECH. While 20% of devices are configured to use encrypted DNS and DNS resolvers that support ECH, the absence of support from browsers such as Safari and operating systems like Android hampers wider implementation. The adoption of ECH requires the participation of multiple industry stakeholders, each with different priorities.

Matthieu Bentot, Chief Technology Officer of Corrata, commented on the current state of adoption: "The extremely low level of ECH adoption suggests that the security community's fears that enterprise internet traffic would go dark are not yet being realised. While the potential certainly exists for ECH to become a thorn in the side of defenders, the early signs are that this is the time to prepare rather than panic."

The findings from the Living With ECH Report are based on Corrata's analysis of billions of connections made by devices running the company's mobile threat detection and response solution. The data reflects traffic from both iOS and Android devices, with Corrata tracking successful ECH connections between January and March 2025 by analysing DNS queries and TLS connection metadata.
