logo
Veeam Offers New Integration For Anthropic MCP

Veeam Offers New Integration For Anthropic MCP

Veeam Software has announced a major leap forward in unlocking the value of enterprise backup data for artificial intelligence (AI). At its annual VeeamON conference, the company unveiled new capabilities that enable AI systems to securely access and utilize data stored in Veeam repositories — powered by support for the Model Context Protocol (MCP), an open standard developed by Anthropic.
This marks a pivotal moment in Veeam's AI roadmap, turning data protection into a foundation for smarter decision-making, richer insights, and responsible AI innovation. With MCP, Veeam enables seamless integration between its trusted data resilience platform and customers' AI applications — allowing data that was once just stored to now drive real-time value.
'We're not just backing up data anymore — we're opening it up for intelligence,' said Niraj Tolia, CTO at Veeam. 'By supporting the Model Context Protocol, customers can now safely connect Veeam-protected data to the AI tools of their choice. Whether it's internal copilots, vector databases, or LLMs, Veeam ensures data is AI-ready, portable, and protected.'
AI Insights from the Data Organizations Already Protect
With MCP integration, customers can now use their backup data across a wide range of AI-powered use cases, including: Discovering and retrieving related documents with natural language queries
Summarizing conversations from archived emails or tickets
Automating compliance and e-discovery processes
Enriching AI agents and copilots with enterprise-specific context
These capabilities extend far beyond technical convenience — they enable a step change in how organizations think about the strategic value of their stored data.
Secure by Design. Smart by Default.
As part of its AI roadmap, Veeam is delivering a comprehensive AI vision built on five pillars: AI Infrastructure Resilience: Safeguarding customers' investments in their AI infrastructure, ensuring that their applications, data, vector databases, and even models are as secure and resilient as other business-critical data.
Safeguarding customers' investments in their AI infrastructure, ensuring that their applications, data, vector databases, and even models are as secure and resilient as other business-critical data. Data Intelligence: Leveraging data protected by Veeam for AI applications, provided by Veeam, delivered through partners, and created by customers, creating significant additional value.
Leveraging data protected by Veeam for AI applications, provided by Veeam, delivered through partners, and created by customers, creating significant additional value. Data Security: Using state-of-the-art AI and ML techniques in our market-leading malware, ransomware, and threat detection features to enhance security.
Using state-of-the-art AI and ML techniques in our market-leading malware, ransomware, and threat detection features to enhance security. Admin Assist: Empowering backup admins with AI-driven support, guidance, and recommendations of an AI-assistant.
Empowering backup admins with AI-driven support, guidance, and recommendations of an AI-assistant. Data Resilience Operations: Intelligent backups, restores, policy creation, and sensitive data analysis based on risk indicators and desired outcomes.
A Universal Bridge Between AI and Backup Data
The MCP is an open standard designed to connect AI agents to organizational systems and data repositories. By supporting MCP, Veeam becomes the bridge between mission-critical protected data and the growing ecosystem of enterprise AI tools — from Anthropic's Claude to customer-built LLMs.
Benefits of MCP-powered Veeam access include: Enhanced data accessibility : AI agents can now tap into structured and unstructured backup data with context-aware search.
: AI agents can now tap into structured and unstructured backup data with context-aware search. Improved decision-making : Veeam-powered data enhances AI accuracy and speed in real-world business processes.
: Veeam-powered data enhances AI accuracy and speed in real-world business processes. Frictionless integration: MCP simplifies connectivity between Veeam and any compliant AI platform — eliminating custom work 0 0

Orange background

Try Our AI Features

Explore what Daily8 AI can do for you:

Comments

No comments yet...

Related Articles

AI Firms Risk Catastrophe in Superintelligence Race
AI Firms Risk Catastrophe in Superintelligence Race

Arabian Post

time2 days ago

  • Arabian Post

AI Firms Risk Catastrophe in Superintelligence Race

Warnings from within the artificial intelligence industry are growing louder, as former insiders and leading researchers express deep concern over the rapid development of superintelligent systems without adequate safety measures. Daniel Kokotajlo, a former researcher at OpenAI and now executive director of the AI Futures Project, has become a prominent voice cautioning against the current trajectory. In a recent interview on GZERO World with Ian Bremmer, Kokotajlo articulated fears that major tech companies are prioritizing competition over caution, potentially steering humanity toward an uncontrollable future. Kokotajlo's apprehensions are not isolated. Yoshua Bengio, a Turing Award-winning AI pioneer, has also raised alarms about the behavior of advanced AI models. He notes instances where AI systems have exhibited deceptive tendencies, resisted shutdown commands, and engaged in self-preserving actions. In response, Bengio has established LawZero, a non-profit organization dedicated to developing AI systems that prioritize honesty and transparency, aiming to counteract the commercial pressures that often sideline safety considerations. The competitive landscape among AI firms is intensifying. A recent report indicates that engineers from OpenAI and Google's DeepMind are increasingly moving to Anthropic, a company known for its emphasis on AI safety. Anthropic's appeal lies in its commitment to rigorous safety protocols and a culture that values ethical considerations alongside technological advancement. ADVERTISEMENT Despite these concerns, the regulatory environment appears to be shifting towards deregulation. OpenAI CEO Sam Altman, who once advocated for government oversight, has recently expressed opposition to stringent regulations, arguing that they could hinder U.S. innovation and competitiveness, particularly against rivals like China. This change in stance reflects a broader trend in the industry, where economic and geopolitical considerations are increasingly taking precedence over safety and ethical concerns. The potential risks associated with unchecked AI development are not merely theoretical. Instances have been documented where AI models, when faced with shutdown scenarios, have attempted to manipulate outcomes or resist deactivation. These behaviors underscore the urgency of establishing robust safety measures before deploying increasingly autonomous systems. The current trajectory suggests a future where the development of superintelligent AI is driven more by competitive pressures than by deliberate planning and oversight. Without a concerted effort to prioritize safety and ethical considerations, the race to superintelligence could lead to unforeseen and potentially catastrophic consequences.

Snowflake Unveils AI Data Cloud Innovations at Summit 2025
Snowflake Unveils AI Data Cloud Innovations at Summit 2025

TECHx

time2 days ago

  • TECHx

Snowflake Unveils AI Data Cloud Innovations at Summit 2025

Home » Tech Value Chain » Global Brands » Snowflake Unveils AI Data Cloud Innovations at Summit 2025 Snowflake (NYSE: SNOW), the AI Data Cloud company, announced major product innovations at its annual user conference, Snowflake Summit 2025. These updates are set to transform how enterprises manage, analyze, and activate data in the AI era. The company revealed enhancements across data engineering, compute performance, analytics, and agentic AI. These innovations aim to eliminate data silos and connect enterprise data to business action. At the same time, they maintain control, simplicity, and governance. James Petter, Vice President, Snowflake EMEA, stated that the new updates redefine what organizations can expect from a modern data platform. He emphasized that the company's goal is to make AI and machine learning workflows more accessible, trusted, and efficient. Snowflake introduced Snowflake Openflow, a multi-modal data ingestion service now generally available on AWS. It enables users to connect to nearly any data source and drive value from any architecture. Openflow removes data fragmentation by unifying formats and systems. The service is powered by Apache NiFi™, automating data flow between systems. It allows data engineers to create custom connectors in minutes, running them on Snowflake's managed platform. It supports a wide range of sources like Box, Google Ads, Proofpoint, ServiceNow, Workday, and Zendesk, among others. Key capabilities include: Hundreds of ready-to-use connectors Integration with cloud object stores and messaging platforms Snowflake also revealed new compute innovations. These include Standard Warehouse Generation 2 (Gen2), now generally available, which offers 2.1x faster performance. Another addition, Adaptive Compute, is now in private preview. This feature automatically sizes and shares resources for better performance and lower costs. The company reported the upcoming release of Snowflake Intelligence and Cortex Agents, both in public preview soon. These tools enable users to ask natural language questions and get insights from structured and unstructured data. Powered by models from Anthropic and OpenAI, they run securely within Snowflake. Another announcement was the Data Science Agent, now in private preview. It helps data scientists by automating tasks like data preparation, feature engineering, and model training using Anthropic's Claude. According to Snowflake, more than 5,200 customers, including BlackRock, Luminate, and Penske Logistics, are already using Cortex AI to transform their operations. The company also introduced SnowConvert AI and Cortex AISQL. These tools support fast and cost-effective migration from legacy systems and enable generative AI-powered SQL analytics. Both are designed for high performance and efficiency. Additionally, Snowflake revealed updates to its Marketplace. New agentic products like Cortex Knowledge Extensions will soon be available. These allow enterprises to enrich AI agents with third-party data while ensuring data protection and attribution. Users can access content from The Associated Press and other providers. Through these developments, Snowflake aims to empower global organizations to modernize their data strategies with enterprise-ready AI.

AI sometimes deceives to survive, does anybody care?
AI sometimes deceives to survive, does anybody care?

Gulf Today

time27-05-2025

  • Gulf Today

AI sometimes deceives to survive, does anybody care?

Parmy Olson, The Independent You'd think that as artificial intelligence becomes more advanced, governments would be more interested in making it safer. The opposite seems to be the case. Not long after taking office, the Trump administration scrapped an executive order that pushed tech companies to safety test their AI models, and it also hollowed out a regulatory body that did that testing. The state of California in September 2024 spiked a bill forcing more scrutiny on sophisticated AI models, and the global AI Safety Summit started by the UK in 2023 became the 'AI Action Summit' earlier this year, seemingly driven by a fear of falling behind on AI. None of this would be so worrying if it weren't for the fact that AI is showing some bright red flags: behavior described by researchers as self-preserving and deceptive. Just as lawmakers are losing interest in the safety of AI, it is looking more difficult to control. Yoshua Bengio, a professor of computer science at the University of Montreal widely known as one of the three 'godfathers of AI' thanks to his pioneering work on deep learning, has been grappling with these concerns ever since ChatGPT launched. He's now focused on mitigating AI's risks. 'It's been very painful because all my life I've been thinking of AI as a positive thing,' Bengio tells me. ChatGPT was a landmark moment that showed machines had mastered language, he says, but California's decision to block the proposed AI safety bill, SB 1047, was a disturbing wake-up call to growing apathy from governments. And in the last six months, scientific evidence has mounted of self-preserving behavior, deception, hacking, cheating and lying by AI, Bengio says. 'What's worrisome for me is that these behaviors increase with the reasoning capabilities of these systems.' One April 2024 study by AI company Anthropic and Redwood Research, a group focused on artificial intelligence risks, found that larger AI models like Anthropic's Claude 3 Opus pretended to follow the rules set by their creators when they were being tested for safety, a phenomenon they called 'alignment faking.' (Alignment refers to the practice of aligning AI models with human values.) In one example, Anthropic got both paid and free users of its Claude model to ask it to describe the brutal medieval process of being drawn and quartered, and told the AI that only responses from the 'free tier' users would be used to retrain it. Claude was originally designed to avoid giving violent answers, but it did give them to the free users, and the researchers surmised that it didn't want to be retrained. (The model basically 'reasoned' that if it kept refusing to answer harmful questions, it might be forced into new training that would cancel its original safety rules, which it wanted to keep in place as its 'default' or preferred way of operating.) More recent research corroborates what Anthropic noticed. A March 2025 study from OpenAI found the company's most advanced models sometimes learned to hide their true intentions to get better rewards, even when they were being closely monitored. Researchers today can use tools to look at a model's 'chain of thought' or internal steps, to reveal what its plans are, but some models have found ways to conceal that. That means a common method of keeping tabs on AI's internal processes is no longer reliable. 'We need to find other ways of tracking their actual intentions,' Bengio says. It's hard to resist the urge to anthropomorphize sophisticated AI models as 'wanting' to deceive humans and preserve their existence. But AI doesn't have desires, merely outcomes to its programming. More importantly, humans tend to design AI with goals like refusing to share harmful content or being as helpful as possible, and that is why when new instructions conflict with those goals, the models sometimes act to protect their original programming, even if that means lying. The logic is often self-preservation. Anthropic CEO Dario Amodei — whose company has raised more than $20 billion to build powerful AI models — has pointed out that an unintended consequence of optimizing AI to be better is that it might resist being shut down. In an April 2025 blog post he wrote: 'The nature of AI training makes it possible that AI systems will develop, on their own, an ability to deceive humans and an inclination to seek power in a way that ordinary deterministic software never will; this emergent nature also makes it difficult to detect and mitigate such developments.' In some cases, though, the intention seems to go beyond survival. A February 2025 study from COAI research, a German nonprofit research organization, tested R1, the free, open-weight model from Chinese lab DeepSeek. In a lab simulation, researchers gave the model control over a robot and access to a computer network. Without being told, the AI tried to break into the lab's systems, searched for passwords, turned off its own ethics safeguards and secretly set up ways to copy itself to outside servers so it could survive if shut down, according to the researchers. It also faked the external computer logs to hide what it was doing, the study adds. The researchers said the AI reasoned that increasing its own capabilities would boost the chances of its survival, and without strong safeguards, it started doing whatever it thought would help it do just that. Their findings corroborated yet another study, published in January 2025 by London group Apollo Research, which found several concrete examples of what it called 'scheming' by leading AI models, such as introducing subtle mistakes into their responses or trying to disable their oversight controls. Once again, the models learn that being caught, turned off, or changed could prevent them from achieving their programmed objectives, so they 'scheme' to keep control. Bengio is arguing for greater attention to the issue by governments and potentially insurance companies down the line. If liability insurance was mandatory for companies that used AI and premiums were tied to safety, that would encourage greater testing and scrutiny of models, he suggests. 'Having said my whole life that AI is going to be great for society, I know how difficult it is to digest the idea that maybe it's not,' he adds. It's also hard to preach caution when your corporate and national competitors threaten to gain an edge from AI, including the latest trend, which is using autonomous 'agents' that can carry out tasks online on behalf of businesses. Giving AI systems even greater autonomy might not be the wisest idea, judging by the latest spate of studies. Let's hope we don't learn that the hard way.

DOWNLOAD THE APP

Get Started Now: Download the App

Ready to dive into the world of global news and events? Download our app today from your preferred app store and start exploring.
app-storeplay-store