Varonis unveils AI Shield to defend sensitive data in real time


Techday NZ | 29-04-2025

Varonis has introduced AI Shield, a product designed to provide persistent AI risk defence for data security by monitoring and remediating data exposure in real time.
AI Shield operates by continuously analysing an organisation's AI security environment, observing interactions between AI systems and data, and dynamically regulating data permissions to prevent the exposure of sensitive information resulting from inadequate data security practices.
The solution leverages Varonis' patented permissions analysis algorithms, which take into account factors such as data sensitivity, data staleness, user profiles, and more, to determine contextually which data should be restricted from AI access. This approach is intended to protect organisations even when they have not properly configured access controls.
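The contextual decision described above can be illustrated with a toy scoring function. Everything here, the names, weights, and thresholds, is invented for illustration; Varonis has not published the details of its patented permissions analysis algorithms.

```python
from dataclasses import dataclass

@dataclass
class Resource:
    sensitivity: float   # 0.0 (public) .. 1.0 (highly sensitive)
    days_stale: int      # days since last legitimate access
    open_to_org: bool    # broadly shared via inherited permissions

def should_block_ai_access(res: Resource, user_needs_it: bool) -> bool:
    """Decide whether to restrict a resource from AI access.

    Combines sensitivity, staleness, and exposure into a single risk
    score: stale, sensitive, over-shared data that the requesting user
    or agent has no business need for gets restricted.
    """
    risk = res.sensitivity
    if res.days_stale > 90:      # stale data is rarely needed by AI tools
        risk += 0.3
    if res.open_to_org:          # broad permissions amplify exposure
        risk += 0.3
    if not user_needs_it:        # no business need for this user/agent
        risk += 0.2
    return risk >= 0.8

# A sensitive, stale, over-shared file with no business need is blocked
print(should_block_ai_access(Resource(0.9, 180, True), user_needs_it=False))
```

The point of the sketch is the shape of the decision, not the numbers: several independent risk factors are folded into one score, so a file can be restricted even when no single factor alone would justify it.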
David Bass, Executive Vice President of Engineering and Chief Technology Officer at Varonis, highlighted the complexity introduced by the prevalence of AI in data security. He said, "AI makes the data security challenge much more urgent and complex. AI Shield gives our customers the confidence to deploy AI with both preventative and detective controls that require zero setup and maintenance. It's always on, always learning, and always working for you behind the scenes to prevent breaches and compliance violations."
AI Shield provides several features aimed at ensuring secure AI usage, including real-time risk analysis to pinpoint the sensitive data exposed to AI, automated risk remediation that continually addresses data exposure at scale, and behaviour-based threat detection to identify abnormal or malicious actions. The system also offers round-the-clock alert response to investigate, contain, and stop data threats.
According to the company, organisations with a poor data security posture risk employees or AI systems accessing large volumes of data that were never intended for them. Varonis states that AI Shield is designed to address this issue through constant monitoring and intervention capabilities.
The company emphasises that protecting data access by AI tools is now a central component of broader data security measures. The product aims to help employees safely interact with AI tools, ensuring that only authorised users and AI agents have access, while all activity is monitored and any suspicious behaviour is detected and flagged.
Varonis positions AI security as inseparable from data security, asserting that solutions like AI Shield are required to address the increasing complexity of protecting sensitive data in environments where AI-driven processes are regularly utilised.

Related Articles

Agentic AI adoption rises in ANZ as firms boost security spend

Techday NZ | 5 hours ago

New research from Salesforce has revealed that all surveyed IT security leaders in Australia and New Zealand (ANZ) believe agentic artificial intelligence (AI) can help address at least one digital security concern within their organisations. According to the State of IT report, the deployment of AI agents in security operations is already underway: 36 per cent of security teams in the region currently use agentic AI tools in daily activities, a figure projected to nearly double to 68 per cent over the next two years.

This surge in adoption is accompanied by rising investment, with 71 per cent of ANZ organisations planning to increase their security budgets in the coming year. While slightly lower than the global average of 75 per cent, this signals a clear intent within the region to harness AI for strengthening cyber defences. AI agents are being relied upon for tasks ranging from faster threat detection and investigation to sophisticated auditing of AI model performance.

Alice Steinglass, Executive Vice President and General Manager of Salesforce's Platform, Integration, and Automation division, said, "Trusted AI agents are built on trusted data. IT security teams that prioritise data governance will be able to augment their security capabilities with agents while protecting data and staying compliant."

The report also highlights industry-wide optimism about AI's potential to improve security but notes hurdles in implementation. Globally, 75 per cent of surveyed leaders recognise that their security practices need transformation, yet 58 per cent are concerned their organisation's data infrastructure is not yet capable of supporting AI agents to their full potential.

As both defenders and threat actors add AI to their arsenals, the risk landscape is evolving. Alongside well-known risks such as cloud security threats, malware, and phishing attacks, data poisoning has emerged as a new top concern.
Data poisoning involves malicious actors corrupting AI training data sets to subvert a model's behaviour. This, together with insider threats and cloud risks, underscores the need for robust data governance and infrastructure.

Across the technology sector, the expanding use of AI agents is rapidly reshaping industry operations. Harsha Angeri, Vice President of Corporate Strategy and Head of AI Business at Subex, noted that AI agents equipped with large language models (LLMs) are already affecting fraud detection, business support systems (BSS), and operations support systems (OSS) in telecommunications. "We are seeing opportunities for fraud investigation using AI agents, with great interest from top telcos," Angeri commented, suggesting this development is altering longstanding approaches to software and systems architecture in the sector.

The potential of agentic AI extends beyond security and fraud prevention. Angeri highlighted the emergence of the "Intent-driven Network", in which user intent is seamlessly translated into desired actions by AI agents. In future mobile networks, customers might simply express their intentions, such as planning a family holiday, and rely on AI-driven networks to autonomously execute tasks, from booking arrangements to prioritising network resources for complex undertakings such as drone data transfers. Angeri dubs this approach "Intent-Net", promising hyper-personalisation and real-time orchestration of digital services.

The rapid penetration of AI chips in mobile devices also signals the mainstreaming of agentic AI. Angeri stated that while only about 4 to 5 per cent of smartphones had AI chips in 2023, the figure has grown to roughly 16 per cent and is expected to reach 50 per cent by 2028, indicating widespread adoption of AI-driven mobile services.

However, industry experts caution that agentic AI comes with considerable technical and operational challenges.
Yuriy Yuzifovich, Chief Technology Officer for AI at GlobalLogic, described how agentic AI systems driven by large language models differ fundamentally from classical automated systems. "Their stochastic behaviour, computational irreducibility, and lack of separation between code and data create unique obstacles that make designing resilient AI agents uniquely challenging," he said. Unlike traditional control systems, whose outcomes can be rigorously modelled and predicted, AI agents require full execution to determine behaviour, often leading to unpredictable outputs.

Yuzifovich recommended that enterprises adopt several key strategies to address these challenges: using domain-specific languages to ensure reliable outputs, combining deterministic classical AI with generative approaches, ensuring human oversight for critical decisions, and designing for modularity and extensive observability to support traceability and compliance. "By understanding the limitations and potentials of each approach, we can design agentic systems that are not only powerful but also safe, reliable, and aligned with human values," he added.

As businesses across sectors embrace agentic AI, the coming years will test the ability of enterprises and technology vendors to balance innovation with trust, resilience, and security. With rapid advancements in AI agent deployment, the industry faces both the opportunity to transform digital operations and the imperative to manage the associated risks responsibly.
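The guardrail pattern Yuzifovich describes, a deterministic layer wrapped around a stochastic agent, can be sketched in a few lines. The action vocabulary and function below are hypothetical examples, not any vendor's real API: a proposed agent action is checked against a small domain-specific vocabulary, and critical actions are escalated to a human.

```python
# Hypothetical action vocabulary; a real system would define its own.
ALLOWED_ACTIONS = {"quarantine_file", "revoke_permission", "open_ticket"}
REQUIRES_HUMAN = {"revoke_permission"}  # critical decisions need sign-off

def validate(action: dict) -> str:
    """Deterministically check a proposed agent action.

    Returns 'execute', 'escalate', or 'reject' so that downstream
    behaviour stays predictable even though the agent that proposed
    the action is stochastic.
    """
    name = action.get("name")
    if name not in ALLOWED_ACTIONS:
        return "reject"      # outside the domain-specific vocabulary
    if name in REQUIRES_HUMAN:
        return "escalate"    # human oversight for critical decisions
    if not isinstance(action.get("target"), str) or not action["target"]:
        return "reject"      # malformed or missing target field
    return "execute"

print(validate({"name": "quarantine_file", "target": "finance/q3.xlsx"}))  # execute
print(validate({"name": "revoke_permission", "target": "alice"}))          # escalate
print(validate({"name": "rm -rf /", "target": "/"}))                       # reject
```

The design choice is the one the quote points at: the generative component may propose anything, but only actions that survive a fully deterministic check ever run, and the riskiest ones always pass through a person.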

Economists Urge Action To Prevent ‘AI Poverty Traps'

Scoop | 7 hours ago

Press Release – University of Auckland

Artificial intelligence could deepen inequality and create 'AI-poverty traps' in developing nations, write economists Dr Asha Sundaram and Dr Dennis Wesselbaum in their paper 'Economic development reloaded: the AI revolution in developing nations'. Sundaram, an associate professor at the University of Auckland Business School, and Wesselbaum, an associate professor at the University of Otago, say developing countries lack the necessary infrastructure and skilled labour force to capitalise on AI's potential.

'The downside is that there isn't a lot of capacity in some countries in terms of digital infrastructure, internet, mobile phone penetration,' says Sundaram. 'Much of the technology is controlled by firms like Google and OpenAI, raising the risk of over-reliance on foreign tech, potentially stifling local innovation.'

Without strategic interventions, Wesselbaum says, AI may create an 'AI-poverty trap': locking developing nations into technological dependence and widening the gap between global economies. 'For developing countries, AI could be a game-changer; boosting productivity, expanding access to essential services, and fostering local innovation – if the right infrastructure and skills are in place.'

Financial support from developed countries and international bodies like the UN could help cover upfront costs through grants, loans and investment incentives, according to the research. 'We also need robust legal and regulatory frameworks to support responsible AI by addressing data privacy, ethics, and transparency concerns,' says Sundaram.

The economists argue that in developing AI policies, the international community must learn from the successes and failures of foreign aid.
'Aid has often failed to spur lasting growth in developing countries,' says Sundaram, 'partly because it can create dependency, reducing self-reliance and domestic initiatives.' She highlights a need for policies to mitigate the downsides of AI in both developed and developing countries. Such policies could include an international tax regime that would allow countries to capture tax revenue from AI-driven economic activity inside their borders.

Sundaram is involved in one such project in Ethiopia, where artificial intelligence is being harnessed by the government and the country's largest telecom provider to support small businesses excluded from formal banking for lack of collateral. By analysing mobile money transactions and how much these businesses pay and receive, algorithms estimate how much credit can safely be offered, enabling small loans and helping integrate marginalised enterprises into the formal economy.

Artificial intelligence holds the power to transform development trajectories, but without targeted investments and inclusive policies, says Wesselbaum, it risks deepening the digital divide and entrenching global inequality.
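The transaction-based credit estimation described above can be sketched as a toy rule. The scoring logic here is invented purely for illustration; the real Ethiopian system's model and features are not public.

```python
def estimate_credit_limit(inflows, outflows):
    """Estimate a safe small-loan limit from mobile-money history.

    inflows/outflows: lists of monthly totals in local currency. A
    business that reliably receives more than it spends gets a limit
    proportional to its average monthly surplus, standing in for the
    repayment capacity a bank would normally infer from collateral.
    """
    months = min(len(inflows), len(outflows))
    if months == 0:
        return 0.0                         # no history, no credit
    surplus = [i - o for i, o in zip(inflows, outflows)]
    avg_surplus = sum(surplus) / months
    if avg_surplus <= 0:
        return 0.0                         # no demonstrated repayment capacity
    return round(3 * avg_surplus, 2)       # e.g. lend up to three months' surplus

# A stall taking in ~5,000/month and spending ~4,200/month
print(estimate_credit_limit([5000, 5200, 4800], [4100, 4400, 4100]))  # 2400.0
```

The substance of the approach is exactly this substitution: observed cash flow replaces collateral as the signal of creditworthiness, which is what lets businesses outside the formal banking system qualify at all.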

