Shadow AI surge heightens enterprise security risks, study finds
Netskope research has highlighted a significant increase in generative AI (genAI) platform and AI agent usage in workplaces, amplifying data security concerns, particularly around unsanctioned "shadow AI" applications.
The findings, detailed in the latest Netskope Threat Labs Cloud and Threat Report, show a 50% rise in genAI platform users among enterprise employees over the three months to May 2025. This increase comes as enterprises broadly enable sanctioned SaaS genAI apps and agentic AI but face growing security challenges as shadow AI persists.
Growth of shadow AI
The report indicates that while organisations continue efforts to safely adopt genAI across SaaS and on-premises environments, over half of all AI application adoption is now estimated to fall under the shadow AI category. These applications are not officially sanctioned by IT departments, raising concerns about uncontrolled access to sensitive data and potential compliance issues.
GenAI platforms, which provide foundational infrastructure for organisations to develop bespoke AI applications and agents, are cited as the fastest-growing segment of shadow AI. In just three months, uptake among end-users rose by 50%, and network traffic linked to these platforms grew by 73%. In May, 41% of surveyed organisations were using at least one genAI platform, with Microsoft Azure OpenAI, Amazon Bedrock, and Google Vertex AI being the most commonly adopted.

"The rapid growth of shadow AI places the onus on organisations to identify who is creating new AI apps and AI agents using genAI platforms and where they are building and deploying them," said Ray Canzanese, Director of Netskope Threat Labs. "Security teams don't want to hamper employee end users' innovation aspirations, but AI usage is only going to increase. To safeguard this innovation, organisations need to overhaul their AI app controls and evolve their DLP policies to incorporate real-time user coaching elements."
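To illustrate what building on such a platform involves, the following is a minimal sketch (not drawn from the report; the model ID, region, and prompt are placeholder assumptions) of invoking a hosted model through Amazon Bedrock's runtime API:

```python
import json

import boto3

# Minimal sketch: calling a foundation model hosted on Amazon Bedrock.
# Assumes AWS credentials are configured and the (placeholder) model
# below has been enabled for the account.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

response = client.invoke_model(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",  # placeholder model ID
    body=json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 256,
        "messages": [{"role": "user", "content": "Summarise this contract."}],
    }),
)

print(json.loads(response["body"].read())["content"][0]["text"])
```

A handful of lines like these is all it takes for an employee to stand up a bespoke AI app on a corporate cloud account, which is why usage of these platforms can spread well ahead of any formal sanctioning process.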
On-premises AI and agentic use
Organisations are increasingly exploring on-premises AI solutions, from deploying genAI through local GPU resources to integrating on-premises tools with SaaS applications. The report finds that 34% of organisations are using large language model (LLM) interfaces locally, with Ollama showing the highest adoption, followed by LM Studio and Ramalama at lower levels.
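To make concrete what a local LLM interface involves, the following minimal sketch (an illustrative assumption, not taken from the report) queries a model served by a local Ollama install over its default HTTP endpoint:

```python
import json
import urllib.request

# Minimal sketch: prompting a locally hosted model via Ollama's default
# HTTP API. Assumes `ollama serve` is running and the (placeholder)
# model below has already been pulled with `ollama pull`.
request = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps({
        "model": "llama3",  # placeholder: any locally pulled model
        "prompt": "Draft a summary of this incident report.",
        "stream": False,    # return a single JSON object, not a stream
    }).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(request) as response:
    print(json.loads(response.read())["response"])
```

Because the endpoint is bound to localhost, prompts and any data they contain never leave the machine; that privacy property drives on-premises adoption, but it also means the activity is invisible to controls that only watch SaaS traffic.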
Employee use of AI resources is also accelerating through downloads from AI marketplaces such as Hugging Face, which is in use at 67% of organisations, suggesting widespread experimentation and tool-building among staff. AI agents, which automate tasks and access sensitive enterprise data, are also proliferating: GitHub Copilot is now used in 39% of organisations, and 5.5% report on-premises deployment of agents built from popular frameworks.

"More organisations are starting to use genAI platforms to deploy models for inference because of the flexibility and privacy that these frameworks provide. They essentially give you a single interface through which you can use any model you want – even your own custom model – while providing you a secure and scalable environment to run your app without worrying about sharing your sensitive data with a SaaS vendor. We are already seeing rapid adoption of these frameworks and expect that to continue into the future, underscoring the importance of continuously monitoring for shadow AI in your environment," said Canzanese.

"More people are starting to explore the possibilities that AI agents provide, choosing to either do so on-prem or using genAI platforms. Regardless of the platform chosen, AI agents are typically granted access to sensitive data and permitted to perform autonomous actions, underscoring the need for organisations to shed light on who is developing agents and where they are being deployed, to ensure that they are properly secured and monitored. Nobody wants shadow AI agents combing through their sensitive data," Canzanese added.
Shadow AI agents and risks
The prevalence of shadow AI agents is a particular concern, as they act autonomously and can interact extensively with enterprise data. API traffic analysis revealed that 66% of organisations have users making calls to api.openai.com and 13% to api.anthropic.com, indicating high-volume programmatic access to third-party AI services.

"The newest form of shadow AI is the shadow AI agent -- they are like a person coming into your office every day, handling your data, taking actions on your systems, all while not being background checked or having security monitoring in place. Identifying who is using agentic AI and putting policies in place for their use should be an urgent priority for every organisation," said James Robinson, Chief Information Security Officer at Netskope.
Trends in SaaS genAI
Netskope's dataset now includes more than 1,550 genAI SaaS applications, a sharp increase from 317 in February. Organisations now use about 15 distinct genAI apps on average, two more than earlier in the year, and monthly data uploaded to these applications rose from 7.7 GB to 8.2 GB quarter on quarter.
Security teams' efforts to enable and monitor these tools are credited with a shift towards purpose-built suites such as Gemini and Copilot, which are designed to integrate with business productivity software. By contrast, the general-purpose chatbot ChatGPT has recorded its first decline in enterprise adoption since tracking began in 2023.
Meanwhile, other genAI applications, including Anthropic Claude, Perplexity AI, Grammarly, Gamma, and Grok, have all recorded gains, with Grok also appearing in the top 10 most-used apps list for the first time.
Guidance for security leaders
Given the accelerating complexity of enterprise AI use, Netskope advises security leaders to assess which genAI applications are in use, strengthen application controls, conduct inventories of any local infrastructure, and ensure continuous monitoring of AI activity. Collaboration with employees experimenting with agentic AI is also recommended to develop practical policies and mitigate risks effectively.
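As a starting point for that discovery work, a team might scan web proxy logs for traffic to known genAI API endpoints. The following is a minimal sketch of the idea (the host watchlist and log column names are illustrative assumptions, not Netskope's methodology):

```python
import csv
from collections import Counter

# Illustrative, deliberately incomplete watchlist of genAI API hosts; a
# real deployment would maintain a much larger, continuously updated list.
AI_API_HOSTS = {
    "api.openai.com",
    "api.anthropic.com",
    "generativelanguage.googleapis.com",
}

def shadow_ai_callers(proxy_log_path: str) -> Counter:
    """Count requests per (user, AI host) in a CSV proxy log.

    Assumes 'user' and 'dest_host' columns; adjust the field names to
    match your proxy's export format.
    """
    hits = Counter()
    with open(proxy_log_path, newline="") as f:
        for row in csv.DictReader(f):
            if row["dest_host"] in AI_API_HOSTS:
                hits[(row["user"], row["dest_host"])] += 1
    return hits

# Example: surface the ten heaviest user-to-AI-service pairings.
for (user, host), count in shadow_ai_callers("proxy.csv").most_common(10):
    print(f"{user} -> {host}: {count} requests")
```

A report like this only surfaces candidates; as the guidance above suggests, the follow-up is a conversation with the users involved so that practical policies, rather than blanket blocks, shape how the tools are used.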