
Most firms overestimate AI governance as privacy risks surge

Techday NZ

7 days ago


Kiteworks has released its AI Data Security and Compliance Risk Survey, highlighting gaps between AI adoption and governance maturity in the Asia-Pacific (APAC) region and globally. The survey, based on responses from 461 cybersecurity, IT, risk management, and compliance professionals, reveals that only 17% of organisations have implemented technical controls that block access to public AI tools alongside data loss prevention (DLP) scanning. Meanwhile, 26% of respondents state that over 30% of the data employees input into public AI tools is private; among APAC respondents the figure is 27%. These findings appear against a backdrop of rising incidents: Stanford's 2025 AI Index Report recorded a 56.4% year-on-year increase in AI privacy incidents, totalling 233 last year.

According to the Kiteworks survey, only 40% of organisations restrict AI tool usage via training and audits, 20% rely solely on warnings without monitoring, and 13% lack any specific policies, leaving many exposed to data privacy risks.

A disconnect between adoption and controls

"Our research reveals a fundamental disconnect between AI adoption and security implementation," said Tim Freestone, Chief Strategy Officer at Kiteworks. "When only 17% have technical blocking controls with DLP scanning, we're witnessing systemic governance failure. The fact that Google reports 44% of zero-day attacks target data exchange systems undermines the very systems organisations rely on for protection."

The survey indicates persistent overconfidence among organisations regarding their AI governance maturity. While 40% of respondents say they have fully implemented an AI governance framework, Gartner's data shows only 12% of organisations possess dedicated AI governance structures, with 55% lacking any framework. Deloitte's research further highlights this gap, showing just 9% achieve 'Ready' level governance maturity despite 23% considering themselves 'highly prepared'.
This discrepancy is compounded by industry data indicating that 86% of organisations lack visibility into AI data flows. EY's recent study suggests that technology companies continue to deploy AI at a rapid pace, with 48% already using AI agents and 92% planning increased investment (a 10% rise since March 2024), all under 'tremendous pressure' to justify returns, which raises the incentive to adopt AI quickly at the expense of security.

"The gap between self-reported capabilities and measured maturity represents a dangerous form of organisational blindness," explained Freestone. "When organisations claiming governance discover their tracking reveals significantly more risks than anticipated according to Deloitte, and when 91% have only basic or in-progress AI governance capabilities, this overconfidence multiplies risk exposure precisely when threats are escalating."

Legal sector and policy awareness

According to survey data, the legal sector exhibits heightened concern about data leakage, with 31% of legal professionals identifying it as a top risk. Implementation nevertheless lags: 15% lack policies or controls for public AI use, and 19% rely on unmonitored warnings. Only 23% of organisations overall have comprehensive privacy controls and regular audits before deploying AI systems. Within legal firms, 15% had no formal privacy controls while prioritising rapid AI uptake, an improvement over the 23% average across sectors but still significant in a sector where risk mitigation is fundamental. Thomson Reuters figures support this, reporting that just 41% of law firms have AI-related policies, despite 95% foreseeing AI as central within five years.

Security controls and data exposure in APAC

APAC organisations closely mirror global patterns, with 40% relying on employee training and audits, 17% utilising technical controls with DLP scanning, and 20% issuing warnings with no enforcement. Meanwhile, 11% provide only guidelines, and 12% have no policy in place.
This means that 83% lack automated controls, despite the APAC region's position at the forefront of the global AI market. The exposure of private data follows global trends: 27% report that more than 30% of AI-ingested data is private, 24% report a 6–15% exposure rate, and 15% are unaware of their exposure levels. The slightly better visibility may reflect regional technical expertise. On AI governance, 40% of APAC respondents claim thorough implementation, 41% report partial implementation, 9% have no plans, and 3% are planning to implement controls.

Regulatory complexity and cross-border risks

APAC organisations must navigate a complex landscape of national regulations, including China's Personal Information Protection Law, Singapore's PDPA, Japan's APPI, Australia's Privacy Act reforms, India's draft Digital Personal Data Protection Act, and South Korea's PIPA. The survey highlights a 60% visibility gap in AI data flows, which is particularly challenging given the region's regulatory diversity because it limits organisations' ability to comply with data localisation rules, cross-border data transfer restrictions, and consent requirements. Weak controls leave APAC organisations struggling to monitor compliance with China's data localisation regulations, manage Singapore–Australia digital agreements, and trace how AI tools route data through restricted jurisdictions.

Organisational strategies and gaps

Regarding privacy investment, 34% of organisations employ balanced approaches involving data minimisation and the selective use of privacy-enhancing technologies. Some 23% have comprehensive controls and audits, 10% maintain basic policies but focus on AI innovation, and another 10% address privacy only when required by law. Meanwhile, 23% have no formal privacy controls while prioritising rapid AI adoption.
Kiteworks recommends that businesses recognise the overestimation of their governance maturity, deploy automated and verifiable controls for compliance, and prepare for increasing regulatory scrutiny by quantifying and addressing exposure gaps.

"The data reveals organisations significantly overestimate their AI governance maturity," concluded Freestone. "With incidents surging, zero-day attacks targeting the security infrastructure itself, and the vast majority lacking real visibility or control, the window for implementing meaningful protections is rapidly closing."
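To make the recommended "automated and verifiable controls" concrete, here is a minimal, hypothetical sketch of a DLP-style pre-filter: it scans text bound for a public AI tool and blocks submissions that match likely-private patterns. The pattern names and thresholds are illustrative assumptions; production DLP products use far richer detection (document fingerprinting, ML classifiers, context analysis) than simple regular expressions.

```python
import re

# Illustrative patterns for likely-private data; real DLP rule sets are
# far more extensive and tuned to reduce false positives.
PRIVATE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}

def scan_prompt(text):
    """Return the names of all private-data patterns matched in `text`."""
    return [name for name, pat in PRIVATE_PATTERNS.items() if pat.search(text)]

def allow_submission(text):
    """Block (return False) any prompt containing likely-private data."""
    return not scan_prompt(text)

# A prompt leaking an email address is blocked; a clean prompt passes.
print(allow_submission("Summarise this note for alice@example.com"))  # False
print(allow_submission("Summarise the APAC AI governance survey"))    # True
```

A gateway like this sits between employees and public AI tools, which is what distinguishes enforceable technical controls from the warning-only policies that, per the survey, 20% of organisations rely on.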

Kiteworks Survey Reveals Only 17% of Organizations Have Technical Controls for AI Data Security While Over One-Quarter Report High Private Data Exposure

Mid East Info

16-06-2025


Organizations rush to adopt AI but fail to put commensurate security and compliance controls in place

Kiteworks, which empowers organizations to effectively manage risk in every send, share, receive, and use of private data, today released findings from its AI Data Security and Compliance Risk Survey of 461 cybersecurity, IT, risk management, and compliance professionals. The survey, conducted by Centiment, reveals critical implementation failures: only 17% of organizations have technical controls that block access to public AI tools combined with DLP scanning, while 26% report that more than 30% of the data employees feed into public AI tools is private data.

These findings emerge amid a documented surge in AI-related incidents. Stanford's 2025 AI Index Report records a 56.4% year-over-year increase in AI privacy incidents, reaching 233 incidents last year.[1] The Kiteworks survey exposes how organizations remain unprepared: 40% restrict AI tool usage through training and audits, 20% rely solely on warnings without monitoring, and 13% lack any specific policies for public AI tool usage, leaving the vast majority vulnerable to emerging threats.

'Our research reveals a fundamental disconnect between AI adoption and security implementation,' said Tim Freestone, Chief Marketing Officer at Kiteworks. 'When only 17% have technical blocking controls with DLP scanning, we're witnessing systemic governance failure. The fact that Google reports 44% of zero-day attacks target data exchange systems undermines the very systems organizations rely on for protection.'

Industry Benchmarks Reveal Dangerous Overconfidence Gap

The Kiteworks survey exposes a critical overconfidence crisis in AI governance readiness.
While one-third of survey respondents claim they have comprehensive governance controls and tracking in place, this contrasts starkly with Gartner's finding that only 12% of organizations have dedicated AI governance structures, with 55% lacking any framework whatsoever.[2] This dramatic gap between perception and reality creates unprecedented risk exposure. Deloitte's research provides even more sobering context: only 9% of organizations achieve 'Ready' level AI governance maturity, despite 23% claiming to be 'highly prepared' (a 14-point overconfidence gap).[3] This misalignment is particularly concerning given that 86% of organizations lack visibility into AI data flows, according to industry research.[4]

The rush to adopt AI without proper controls is accelerating. A recent EY survey found 48% of technology companies are already deploying AI agents, with 92% planning to increase AI spending (a 10% jump from March 2024).[5] Yet this enthusiasm comes with what EY calls 'tremendous pressure' to demonstrate ROI, creating incentives to prioritize speed over security.

'The gap between self-reported capabilities and measured maturity represents a dangerous form of organizational blindness,' explained Patrick Spencer, VP of Corporate Marketing and Research at Kiteworks. 'When organizations claiming governance discover their tracking reveals significantly more risks than anticipated according to Deloitte, and when 91% have only basic or in-progress AI governance capabilities, this overconfidence multiplies risk exposure precisely when threats are escalating.'

Legal Sector Exemplifies Implementation-Awareness Gap

The Kiteworks survey found legal professionals report the highest concern about data leakage at 31%, yet implementation remains weak: 15% have no specific policies or controls regarding the use of public AI tools with company data, while 19% rely on unmonitored warnings. This implementation gap becomes more pronounced in privacy investment strategies.
While 23% of all organizations maintain comprehensive privacy controls with regular audits before any AI system deployment, only 15% of legal firms have fallen into the trap of having no formal privacy controls while prioritizing rapid AI adoption, an 8-point improvement over the 23% average across all sectors, yet still concerning given their fiduciary duties. The disconnect aligns with Thomson Reuters data showing only 41% of law firms have AI policies despite 95% expecting AI to become central within five years.[6] This gap between current readiness and future expectations in the legal sector, an industry built on precedent and risk mitigation, exemplifies the broader organizational tendency to defer critical security implementations while embracing transformative technologies.

AI Security Gap: When Perception Meets Reality

The survey's finding that only 17% have implemented technical controls that block access to public AI tools combined with DLP scanning becomes more concerning given the evolving threat landscape. Google's research reveals 44% of zero-day vulnerabilities target data exchange systems, with 60% of enterprise-targeted zero-days exploiting security and networking tools, the very systems meant to protect sensitive data.
Despite awareness of risks, the Kiteworks survey found organizations remain deeply divided on addressing vulnerabilities:

  • 34% report using a balanced approach with data minimization and selective privacy-enhancing technologies
  • 23% maintain comprehensive privacy controls with regular audits
  • 10% have basic privacy policies but prioritize AI innovation
  • 10% address privacy concerns reactively, focusing on compliance only when legally required
  • 23% have no formal privacy controls and prioritize rapid AI adoption

Based on the convergence of weak controls, limited visibility, and escalating threats, organizations must:

  • Acknowledge Reality: Recognize that self-assessed governance may significantly overstate actual maturity based on industry benchmarks
  • Deploy Verifiable Controls: Implement automated governance tracking and controls that can demonstrate compliance, not just claim it
  • Prepare for Regulatory Scrutiny: Quantify exposure gaps and implement measurable improvements

'The data reveals organizations significantly overestimate their AI governance maturity,' concluded Freestone. 'With incidents surging, zero-day attacks targeting the security infrastructure itself, and the vast majority lacking real visibility or control, the window for implementing meaningful protections is rapidly closing.'

[1] 'The 2025 AI Index Report,' Stanford University, 2025.
[2] 'AI Governance Frameworks for Responsible AI,' Gartner, March 20, 2023.
[3] 'New Deloitte survey finds expectations for Gen AI remain high, but many are feeling pressure to quickly realize value while managing risks,' Deloitte, January 15, 2024.
[4] 'Flying blind: Only 14 percent of companies surveyed have a comprehensive overview of generative AI usage,' LeanIX, June 18, 2024.
[5] 'EY survey reveals that technology companies are setting the pace of agentic AI – will others follow suit?' EY, May 14, 2025.
[6] '2025 Generative AI in Professional Services Report,' Thomson Reuters, February 2025.
