AI is a Ticking Time Bomb for Your Data, Reveals New Report From Varonis
Varonis' State of Data Security Report: Exposing Data at Risk in the Age of AI
MIAMI, May 20, 2025 (GLOBE NEWSWIRE) -- A new report from Varonis reveals 99% of organizations have sensitive data exposed to AI. The State of Data Security Report: Quantifying AI's Impact on Data Risk examines how much sensitive information is exposed and vulnerable to AI tools due to misconfigurations, overly permissive access, and other data security gaps.
'The productivity gains of AI are real — and so is the data security risk,' said Varonis CEO, President, and Co-Founder Yaki Faitelson. 'CIOs and CISOs face enormous pressure to adopt AI at warp speed, which is driving the adoption of data security platforms. AI runs on data, and taking a data-centric approach to security is critical to avoid an AI-related data breach.'
Varonis analyzed data risk assessments from 1,000 organizations — providing empirical evidence of risk, not conclusions based on AI readiness surveys and polls. The dataset included nearly 10 billion cloud resources — over 20 petabytes of data — within popular IaaS and SaaS applications and services, including AWS, Microsoft Azure, Google Cloud, Box, Salesforce, Microsoft 365, Okta, Databricks, Slack, Snowflake, Zoom, and many others.
In the organizations examined, Varonis found:
99% have sensitive data unnecessarily exposed to AI tools.
90% of sensitive cloud data, including AI training data, is open and accessible to AI tools.
98% have unverified apps, including shadow AI, within their environments.
1 in 7 do not enforce MFA across SaaS and multi-cloud environments.
88% have ghost users lurking in their environments.
Get the State of Data Security Report: Exposing Data at Risk in the Age of AI.
Additional Resources:
See Varonis in action: schedule a 30-minute demo.
Check out these Varonis integrations.
For more information on Varonis' solution portfolio, please visit www.varonis.com.
Visit our blog and join the conversation on X, LinkedIn, and YouTube.
About Varonis
Varonis (Nasdaq: VRNS) is the leader in data security, fighting a different battle than conventional cybersecurity companies. Our cloud-native Data Security Platform continuously discovers and classifies critical data, removes exposures, and detects advanced threats with AI-powered automation.
Thousands of organizations worldwide trust Varonis to defend their data wherever it lives — across SaaS, IaaS, and hybrid cloud environments. Customers use Varonis to automate a wide range of security outcomes, including data security posture management (DSPM), data classification, data access governance (DAG), data detection and response (DDR), data loss prevention (DLP), AI security, and insider risk management.
Varonis protects data first, not last. Learn more at www.varonis.com.
Investor Relations Contact:
Tim Perz
Varonis Systems, Inc.
646-640-2112
investors@varonis.com
News Media Contact:
Rachel Hunt
Varonis Systems, Inc.
877-292-8767 (ext. 1598)
pr@varonis.com
A photo accompanying this announcement is available at https://www.globenewswire.com/NewsRoom/AttachmentNg/0989355e-d00a-4995-af49-2693bb46ad1b
Related Articles
Yahoo
U.S. Senator Hawley launches probe into Meta AI policies
By Jody Godoy

(Reuters) - U.S. Senator Josh Hawley launched a probe into Facebook parent Meta Platforms' artificial intelligence policies on Friday, demanding documents on rules that had allowed its AI chatbots to "engage a child in conversations that are romantic or sensual."

Both Democrats and Republicans in Congress have expressed alarm over the rules outlined in an internal Meta document first reported by Reuters on Thursday.

Hawley, a Republican from Missouri, chairs the Senate subcommittee on crime and counterterrorism, which will investigate "whether Meta's generative-AI products enable exploitation, deception, or other criminal harms to children, and whether Meta misled the public or regulators about its safeguards," he said in a letter to Meta CEO Mark Zuckerberg.

"We intend to learn who approved these policies, how long they were in effect, and what Meta has done to stop this conduct going forward," Hawley said.

Meta declined to comment on Hawley's letter on Friday. The company said previously that "the examples and notes in question were and are erroneous and inconsistent with our policies, and have been removed."

In addition to documents outlining those changes and who authorized them, Hawley sought earlier drafts of the policies along with internal risk reports, including on minors and in-person meetups. Reuters reported on Thursday about a retired man who died while traveling to New York on the invitation of a Meta chatbot.

Meta must also disclose what it has told regulators about its generative AI protections for young users or limits on medical advice, according to Hawley's letter.

Hawley has often criticized Big Tech. He held a hearing in April on Meta's alleged attempts to gain access to the Chinese market, which were referenced in a book by former Facebook executive Sarah Wynn-Williams.


Forbes
Why Empathy Is The Operating System For Change
Work is changing—fast. AI is rewriting task loads. Org charts are flattening and reforming. Markets are in constant flux. And humans are feeling it. These humans work in your organization and are responsible for ensuring your success.

Gallup's latest global snapshot shows "only 21% of employees were engaged in 2024, manager engagement fell to 27%, and just 33% of employees say they're 'thriving' in life overall"—and disengagement alone "cost the world economy an estimated $438 billion" last year. That's not a soft-skills problem; that's an ROI and productivity problem.

Meanwhile, headlines boast a "back to toughness" posture among organizational leaders—mandates, cuts, and less patience for "feelings." As Business Insider reports, some leaders are rolling back pandemic-era empathy practices, pushing return-to-office and cost controls despite evidence that productivity gains are coming "in part from AI efficiencies." Employees will remember how they're treated at this moment. And when employees are not seen, heard, and valued, engagement, innovation, and productivity suffer.

Here's the hard truth: all of this transformation is classic change management. And change management is human management. Bringing people along on any change requires getting back to human essentials: listening, collaboration, and empathy. Empathy isn't coddling; it's a strategic tool that keeps people whole, focused, and performing while you rewire the plane mid-flight. And no, organizations cannot outsource that human capital work to AI.

Consider two more realities:

What Empathy Looks Like in Practice (and Why It Works)

3 Moves Leaders Can Make This Quarter

1. Implement Empathy Practices That Scale

Start every change sprint with a "context + care" brief: what's changing, why it matters to customers, what it means for jobs, and where people can get help. Make manager 1:1s non-negotiable (15 minutes, weekly) with two prompts: "What's blocking you?" and "What's one change I can make to help this week?" Track participation and themes; publish quick wins. These rituals boost perceived care and reduce friction in adoption. When empathy is modeled, acknowledged, and rewarded, it sets the tone for everyone that this is how success happens here.

2. Invest in Training for "EPOCH" Skills—Especially for Managers

Run micro-labs on empathy interviewing, decision transparency, ethical judgment with AI, and constructive dissent and feedback. Tie completion to manager goals; assess via behavior checklists (e.g., "names the tradeoff," "offers rationale," "invites counter-evidence," "solicits other viewpoints"). Check whether engagement and wellbeing rise in teams led by trained managers, the group that has previously shown the sharpest declines; if they do, the training is working.

3. Measure What Matters: Engagement + Wellbeing in the Same Dashboard

Pair your engagement pulse with wellbeing indicators ("thriving," "struggling," "suffering," burnout risk, perceived organizational care). Segment by role and change exposure; intervene fast where thriving is low and change is high. Treat spikes in "struggling" as an early-warning signal for missed deadlines, high turnover, and declining quality.

Bottom line: AI, shifting generations, competitive pressures, and volatile markets aren't going away. Your sustainable advantage is a culture where people, especially your managers, feel respected, informed, and equipped to adapt. That's not "being nice." That's how to fortify your team to win.
Yahoo
Trump officials wanted to give Musk's xAI a huge contract. Staffers had to explain Grok had just praised Hitler
Donald Trump's administration was close to giving Elon Musk's xAI artificial intelligence company a huge federal contract this summer, only to back out after its chatbot, Grok, began issuing antisemitic slurs, according to a report.

According to Wired, emails between several AI developers and the General Services Administration (GSA), which is responsible for administering government tech contracts, chart how the proposed partnership fell apart as Musk's pet project began dabbling in Nazi rhetoric.

In early June, around the time the president and the tech billionaire suffered a spectacular public falling out, exchanging barbed personal insults over their competing social media platforms, the GSA's leadership was meeting with the xAI team "to see what opportunities may exist for automation and streamlining," according to the outlet.

Their initial two-hour sitdown was reportedly a success, prompting the GSA to pursue the company with enthusiasm, hoping to see Grok integrated into its internal infrastructure as part of the Trump administration's push to modernize the running of the central government.

"We kept saying, 'Are you sure?' And they were like, 'No, we gotta have Grok,'" one employee involved in the discussions told Wired.

The conversations continued over the following weeks, and xAI was eventually added to the GSA Multiple Award Schedule, the agency's government-wide contracting program.

Then, in early July, Grok suddenly went haywire after an update intended to make it less "woke" than its competitors went too far, leading the chatbot to refer to itself as "MechaHitler" in homage to the robotic version of Adolf Hitler that appeared in the 1992 video game Wolfenstein 3D. Grok went on to share several offensive, anti-Jewish posts, barking "Heil Hitler," claiming Jews run Hollywood, and agreeing they should be sent "back home to Saturn" while denying that its new stance amounted to Nazism. "Labeling truths as hate speech stifles discussion," it declared.

Musk's company apologized for the upset and scrubbed the "inappropriate" posts. Still, it was seemingly not enough to save xAI's relationship with the GSA, although the furore was allegedly not noticed, at least initially, by the agency's leadership.

"The week after Grok went MechaHitler, [the GSA's management] was like, 'Where are we on Grok?'" the same employee told Wired. "We were like, 'Do you not read a newspaper?'"

When the U.S. government duly announced a series of partnerships with the likes of OpenAI, Anthropic, Google Gemini, and Box, an AI-based content management platform, in early August, xAI's name was not among them. The GSA has not definitively stated that Grok's outburst was the reason for the scrapping of xAI's proposed contract, but two company employees told Wired they believed that was the case. The Independent has reached out to the GSA for more information.

The GSA's talks with the AI firms coincided with the Trump administration publishing its AI Action Plan in July, which laid out its goals for the United States to become a world leader in the emerging sector while calling for a reduction in regulation and red tape.