Shadow AI surge heightens enterprise security risks, study finds

Techday NZ · 5 days ago
Netskope research has highlighted a significant increase in generative AI (genAI) platform and AI agent usage in workplaces, amplifying data security concerns, especially through unsanctioned or "shadow AI" applications.
The findings, detailed in the latest Netskope Threat Labs Cloud and Threat Report, show a 50% rise in genAI platform users among enterprise employees over the three months to May 2025. This increase comes as enterprises broadly enable sanctioned SaaS genAI apps and agentic AI but face growing security challenges as shadow AI persists.
Growth of shadow AI
The report indicates that while organisations continue efforts to safely adopt genAI across SaaS and on-premises environments, over half of all AI application adoption is now estimated to fall under the shadow AI category. These applications are not officially sanctioned by IT departments, raising concerns about uncontrolled access to sensitive data and potential compliance issues.
GenAI platforms, which provide foundational infrastructure for organisations to develop bespoke AI applications and agents, are cited as the fastest-growing segment of shadow AI. In just three months, uptake among end users rose by 50%, and network traffic linked to these platforms grew by 73%. In May, 41% of surveyed organisations were using at least one genAI platform, with Microsoft Azure OpenAI, Amazon Bedrock, and Google Vertex AI the most commonly adopted.

"The rapid growth of shadow AI places the onus on organisations to identify who is creating new AI apps and AI agents using genAI platforms and where they are building and deploying them," said Ray Canzanese, Director of Netskope Threat Labs. "Security teams don't want to hamper employee end users' innovation aspirations, but AI usage is only going to increase. To safeguard this innovation, organisations need to overhaul their AI app controls and evolve their DLP policies to incorporate real-time user coaching elements."
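The "real-time user coaching" Canzanese describes differs from a hard block: the DLP layer inspects a prompt before it leaves the network and, on a match, explains the concern to the user. A minimal sketch of that pattern, with purely hypothetical detection rules (real DLP engines use far richer detectors than these two regexes):

```python
import re

# Illustrative stand-ins for an organisation's DLP rules, not real products' patterns.
SENSITIVE_PATTERNS = {
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def coach_prompt(prompt: str) -> dict:
    """Scan a genAI prompt and return a coaching verdict instead of a silent block."""
    hits = [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(prompt)]
    if not hits:
        return {"allow": True, "message": None}
    return {
        "allow": False,
        "message": (
            "This prompt appears to contain sensitive data "
            f"({', '.join(hits)}). Please remove it or use a sanctioned app."
        ),
    }
```

The coaching message is surfaced to the user at submission time, which is the "real-time" element the quote refers to.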
On-premises AI and agentic use
Organisations are increasingly exploring on-premises AI solutions, from deploying genAI through local GPU resources to integrating on-premises tools with SaaS applications. The report finds that 34% of organisations are using large language model (LLM) interfaces locally, with Ollama showing the highest adoption, followed by LM Studio and Ramalama at lower levels.
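Local LLM interfaces such as Ollama expose an HTTP API on the machine running the model, so inference traffic never leaves the host. A minimal sketch of a non-streaming request against Ollama's default local endpoint (assumes a running Ollama daemon with a model such as `llama3` already pulled; the prompt text is illustrative):

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def build_request(model: str, prompt: str) -> urllib.request.Request:
    """Build a non-streaming completion request for a locally hosted model."""
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    return urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )

if __name__ == "__main__":
    # Requires a running daemon, e.g. after `ollama pull llama3`.
    req = build_request("llama3", "Summarise zero trust in one sentence.")
    with urllib.request.urlopen(req) as resp:
        print(json.loads(resp.read())["response"])
```

Because the endpoint is localhost-only by default, this is the "flexibility and privacy" trade-off the report describes: no SaaS vendor sees the prompt, but the deployment is also invisible to cloud-focused monitoring.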
Employee use of AI resources is accelerating through downloads from AI marketplaces such as Hugging Face, which is used in 67% of organisations, suggesting widespread experimentation and tool-building among staff. AI agents, which automate tasks and access sensitive enterprise data, are also proliferating: GitHub Copilot is now used in 39% of organisations, and 5.5% report on-premises deployment of agents built from popular frameworks.

"More organisations are starting to use genAI platforms to deploy models for inference because of the flexibility and privacy that these frameworks provide. They essentially give you a single interface through which you can use any model you want – even your own custom model – while providing you a secure and scalable environment to run your app without worrying about sharing your sensitive data with a SaaS vendor. We are already seeing rapid adoption of these frameworks and expect that to continue into the future, underscoring the importance of continuously monitoring for shadow AI in your environment," said Canzanese.

"More people are starting to explore the possibilities that AI agents provide, choosing to either do so on-prem or using genAI platforms. Regardless of the platform chosen, AI agents are typically granted access to sensitive data and permitted to perform autonomous actions, underscoring the need for organisations to shed light on who is developing agents and where they are being deployed, to ensure that they are properly secured and monitored. Nobody wants shadow AI agents combing through their sensitive data," Canzanese added.
Shadow AI agents and risks
The prevalence of shadow AI agents is a particular concern, as they act autonomously and can interact extensively with enterprise data. API traffic analysis revealed that 66% of organisations have users making calls to api.openai.com and 13% to api.anthropic.com, indicating high-volume programmatic access to third-party AI services.

"The newest form of shadow AI is the shadow AI agent – they are like a person coming into your office every day, handling your data, taking actions on your systems, all while not being background checked or having security monitoring in place. Identifying who is using agentic AI and putting policies in place for their use should be an urgent priority for every organisation," said James Robinson, Netskope's Chief Information Security Officer.
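The API-traffic analysis above suggests one practical detection approach: compare observed destination hosts in network flow logs against a sanctioned-app list. A minimal sketch, where both host lists are illustrative placeholders for an organisation's own inventory:

```python
# Known genAI API hosts (the two named in the report) plus a hypothetical
# sanctioned list; a real deployment would maintain both from proxy/DLP tooling.
AI_API_HOSTS = {"api.openai.com", "api.anthropic.com"}
SANCTIONED_HOSTS = {"api.openai.com"}  # e.g. covered by an enterprise agreement

def flag_shadow_ai(flows):
    """Given (user, destination_host) flow records, return the users making
    calls to genAI APIs that are not on the sanctioned list."""
    flagged = {}
    for user, host in flows:
        if host in AI_API_HOSTS and host not in SANCTIONED_HOSTS:
            flagged.setdefault(user, set()).add(host)
    return flagged
```

This surfaces the "who is using agentic AI" question Robinson raises, without blocking traffic outright.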
Trends in SaaS genAI
Netskope's dataset now includes more than 1,550 genAI SaaS applications, a sharp increase from 317 in February. Organisations now use about 15 distinct genAI apps on average, two more than earlier in the year.
Monthly data uploaded to these applications also rose from 7.7 GB to 8.2 GB quarter on quarter.
Security teams' efforts to enable and monitor these tools are credited with a shift towards purpose-built suites such as Gemini and Copilot, which are designed to integrate with business productivity software. However, general-purpose chatbot ChatGPT has seen its first decrease in enterprise adoption since tracking began in 2023.
Meanwhile, other genAI applications, including Anthropic Claude, Perplexity AI, Grammarly, Gamma, and Grok, have all recorded gains, with Grok also appearing in the top 10 most-used apps list for the first time.
Guidance for security leaders
Given the accelerating complexity of enterprise AI use, Netskope advises security leaders to assess which genAI applications are in use, strengthen application controls, conduct inventories of any local infrastructure, and ensure continuous monitoring of AI activity. Collaboration with employees experimenting with agentic AI is also recommended to develop practical policies and mitigate risks effectively.
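The first two recommendations, assessing which genAI applications are in use and monitoring their activity, reduce in practice to aggregating usage logs into a per-app inventory. A toy sketch of that aggregation, with a made-up event format of (user, app, bytes uploaded):

```python
from collections import defaultdict

def summarise_genai_usage(events):
    """Aggregate (user, app, bytes_uploaded) events into a per-app inventory:
    (app, distinct users, total bytes uploaded), most widely used first."""
    apps = defaultdict(lambda: {"users": set(), "bytes": 0})
    for user, app, size in events:
        apps[app]["users"].add(user)
        apps[app]["bytes"] += size
    return sorted(
        ((app, len(v["users"]), v["bytes"]) for app, v in apps.items()),
        key=lambda row: row[1],
        reverse=True,
    )
```

Run periodically, this kind of summary is what makes figures like "15 distinct genAI apps per organisation" or "8.2 GB uploaded monthly" observable in the first place.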

Related Articles

Netskope unveils AI Copilot & MCP server to advance zero trust

Techday NZ · 3 days ago

Netskope has announced new artificial intelligence capabilities for its security platform, including an AI-powered assistant aimed at optimising zero trust network access. The company is introducing several advancements to its Netskope One platform, notably the Netskope One Copilot for Private Access, which leverages AI to improve the deployment of universal zero trust network access (UZTNA), and a preview of the Netskope Model Context Protocol (MCP) server, which is designed to connect large language models (LLMs) to Netskope's policy controls.

AI and security

Netskope Threat Labs reports that shadow AI – the unsanctioned use of AI applications by employees – now constitutes the majority of AI usage within enterprises. This growth is attributed to the proliferation of SaaS AI applications, on-premises AI deployments, and custom AI tools. These developments have led to increased demand for adaptive security solutions that allow businesses to use AI tools securely and effectively.

The new AI features are said to enable safe user access to AI-driven applications and mitigate risks associated with the adoption and creation of AI software. According to the company, the platform provides insights into sensitive data being processed by LLMs and uses AI models to assess risks, aiding in the implementation of context-based decisions around application choices and policy configurations. The Netskope One platform applies zero trust principles, powered by SkopeAI, a suite of proprietary AI technologies, to support secure connectivity for remote workers, data security, and threat mitigation.

Enhancing zero trust network access

The primary addition, Netskope One Copilot for Private Access, is designed to tackle challenges associated with traditional ZTNA, such as complicated policy design, excessive and broad access rules, and the risk of policy sprawl.
By employing AI, it automates the recommendation of granular policies for both newly discovered and existing applications. This extends Netskope's UZTNA solution and is intended to go beyond mere access brokering, offering continuous policy enforcement, protection against threats, integrated data safeguards, monitoring of system performance, and a wider range of access controls.

Industry analysts have repeatedly highlighted Netskope for its capabilities in ZTNA. The company has been named a Leader in Gartner's Magic Quadrant for Security Service Edge (SSE) for four consecutive years and topped the Critical Capabilities for SSE report in the Private Application Access use case, which specifically addresses ZTNA functions. Netskope One Copilot for Private Access is available to current customers and supplements other AI Copilots offered by the company, such as the Copilot for Cloud Confidence Index, with additional AI Copilots in development.

MCP server preview

Netskope is also releasing a preview of its Model Context Protocol server. The server connects LLMs – such as Claude Desktop, Microsoft Copilot, Google Vertex, and Amazon Bedrock – directly to Netskope One platform capabilities. This connection is intended to help enterprises use LLMs securely by arming them with the necessary policy context and access controls. The MCP server is built on an open protocol and functions as a bridge between LLMs and Netskope Management APIs, allowing LLMs to gain situational awareness from a customer's environment for better analysis and automation.
Use cases provided by Netskope include:

  • Client version analysis for device management teams to identify and address non-compliant clients
  • Incident analysis tools to support security teams during Data Loss Prevention incidents, providing summary reports and investigation recommendations
  • Incident status analysis to help incident managers identify delays or bottlenecks in resolution workflows
  • Insider risk analysis for security administrators to prioritise users deemed at higher risk for expedited intervention

"Netskope's differentiated AI security capabilities not only enable safe user access to AI applications, but also manage the emerging risks introduced by the adoption and building of AI applications, provide a deep understanding of sensitive data being fed into LLMs, and assess risk using AI models to make context-based decisions on application selection and policy setting. The Netskope One platform and its purpose-built architecture apply zero trust principles and leverage SkopeAI, Netskope's suite of proprietary AI innovations and patented technology, to optimise access, protect data, stop threats, and enable secure, work-from-anywhere connectivity," the company said.

The preview of the MCP server comes with several sample prompts tailored to address real-time AI security scenarios, expanding the platform's support for enterprise AI integration and safeguarding. Netskope states that these additions are intended to provide viable alternatives to existing VPN and NAC solutions, and to address both current and evolving security challenges in an environment of increasing workplace AI adoption.
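The bridging pattern described above, an MCP server advertising tools to an LLM and dispatching the LLM's tool calls, can be sketched generically. This is not Netskope's implementation; the tool below and its schema are hypothetical, and a real server would sit behind the MCP wire protocol rather than plain function calls:

```python
import json

# Illustrative tool registry: an LLM discovers tools via their schemas
# and invokes them by name. The handler here returns canned data.
TOOLS = {
    "incident_status": {
        "description": "Summarise the status of a DLP incident.",
        "input_schema": {
            "type": "object",
            "properties": {"incident_id": {"type": "string"}},
        },
        "handler": lambda args: {"incident_id": args["incident_id"], "status": "open"},
    }
}

def list_tools() -> str:
    """What the server advertises to a connected LLM: schemas only, no handlers."""
    return json.dumps(
        {name: {k: tool[k] for k in ("description", "input_schema")}
         for name, tool in TOOLS.items()}
    )

def call_tool(name: str, args: dict) -> dict:
    """Dispatch an LLM's tool call to the matching handler."""
    return TOOLS[name]["handler"](args)
```

The security-relevant point is the separation: the LLM sees only descriptions and schemas, while the server mediates every call, which is where policy context and access controls can be enforced.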


Teradata upgrades ModelOps for scalable enterprise AI use

Techday NZ · 30-07-2025

Teradata has introduced ModelOps updates to its ClearScape Analytics offering, targeting streamlined integration and deployment for agentic AI and generative AI applications as organisations transition from experimentation to production at scale.

ModelOps platform

The updated ModelOps platform aims to support analytics professionals and data scientists with native compatibility for open-source ONNX embedding models and leading cloud service provider large language model (LLM) APIs, including Azure OpenAI, Amazon Bedrock, and Google Gemini. With these enhancements, organisations can deploy, manage, and monitor AI models without relying on custom development, with newly added LLMOps capabilities designed to simplify workflows. For less technical users such as business analysts, ModelOps also integrates low-code AutoML tools, providing an interface that facilitates intuitive access for users of different skill levels. The platform's unified interface is intended to reduce onboarding time and increase productivity by offering consistent interactions across its entire range of tools.

Challenges in AI adoption

Many organisations encounter challenges when progressing from AI experimentation to enterprise-wide implementation. According to Teradata, the use of multiple LLM providers and the adoption of various open-source models can cause workflow fragmentation, limited interoperability, and steep learning curves, ultimately inhibiting wider adoption and slowing innovation. Unified governance frameworks are often lacking, making it difficult for organisations to maintain reliability and compliance as they scale their AI capabilities. These issues may leave generative and agentic AI projects in isolation rather than delivering integrated business insights. As a result, organisations could lose value if they are unable to scale AI initiatives effectively due to operational complexity and fragmented systems.
Unified access and governance "The reality is that organisations will use multiple AI models and providers - it's not a question of if, but how, to manage that complexity effectively. Teradata's ModelOps offering provides the flexibility to work across combinations of models while maintaining trust and governance. Companies can then move confidently from experimentation to production, at scale, realising the full potential of their AI investments," said Sumeet Arora, Teradata's Chief Product Officer. Teradata's ModelOps strategy is designed to provide unified access to a range of AI models and workflows, while maintaining governance and ease of use. This is intended to allow business users to deploy AI models quickly and safely, supporting both experimentation and production use. An example scenario described by Teradata involved a bank seeking to improve its digital customer experience and retention rates by analysing customer feedback across channels. The unified ModelOps platform would allow the bank to consolidate multiple AI models - such as LLMs for sentiment analysis, embedding models for categorisation, and AutoML for predictive analytics - within one environment. The aim is to equip both technical and non-technical teams to act on customer intelligence at greater speed and scale. Key features The updated ModelOps capabilities in ClearScape Analytics include: Seamless Integration with Public LLM APIs : Users can connect with APIs from providers such as Azure OpenAI, Google Gemini, and Amazon Bedrock for a variety of LLMs, including Anthropic, Mistral, DeepSeek, and Meta. This integration supports secure registration, monitoring, observability, autoscaling, and usage analytics. Administrative options are available for retry policies, concurrency, and health or spend tracking at the project or model level. 
: Users can connect with APIs from providers such as Azure OpenAI, Google Gemini, and Amazon Bedrock for a variety of LLMs, including Anthropic, Mistral, DeepSeek, and Meta. This integration supports secure registration, monitoring, observability, autoscaling, and usage analytics. Administrative options are available for retry policies, concurrency, and health or spend tracking at the project or model level. Managing and monitoring LLMs with LLMOps : The platform supports rapid deployment of NVIDIA NIM LLMs within GPU environments. Features include LLM Model Cards for transparency, monitoring, and governance, as well as full lifecycle management - covering deployment, versioning, performance tracking, and retirement. : The platform supports rapid deployment of NVIDIA NIM LLMs within GPU environments. Features include LLM Model Cards for transparency, monitoring, and governance, as well as full lifecycle management - covering deployment, versioning, performance tracking, and retirement. ONNX Embedding Model Deployment : ClearScape Analytics natively supports ONNX embedding models and tokenisers, including support for Bring-Your-Own-Model workflows and unified deployment processes for custom vector search models. : ClearScape Analytics natively supports ONNX embedding models and tokenisers, including support for Bring-Your-Own-Model workflows and unified deployment processes for custom vector search models. Low-Code AutoML : Teams can create, train, monitor, and deploy models through an accessible low-code interface with performance monitoring and visual explainability features. : Teams can create, train, monitor, and deploy models through an accessible low-code interface with performance monitoring and visual explainability features. 
User Interface Improvements: The upgrade provides a unified user experience across all major tools, such as AutoML, Playground, Tables, and Datasets, with guided wizards and new table interaction options aimed at reducing skill barriers. Availability of the updated ModelOps in ClearScape Analytics is anticipated in the fourth quarter for users of AI Factory and VantageCloud platforms. Follow us on: Share on:
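The "full lifecycle management - covering deployment, versioning, performance tracking, and retirement" that LLMOps tooling provides can be illustrated with a toy registry. This is a generic sketch of the lifecycle pattern, not Teradata's implementation; the single-live-version rule is an assumption for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class ModelRegistry:
    """Toy sketch of a deploy/version/retire model lifecycle.
    Assumes (for illustration) at most one live version per model name."""
    models: dict = field(default_factory=dict)  # name -> list of (version, status)

    def deploy(self, name: str, version: str) -> None:
        versions = self.models.setdefault(name, [])
        # Supersede any currently deployed version before going live.
        for i, (v, status) in enumerate(versions):
            if status == "deployed":
                versions[i] = (v, "retired")
        versions.append((version, "deployed"))

    def live_version(self, name: str):
        for v, status in self.models.get(name, []):
            if status == "deployed":
                return v
        return None
```

Retaining retired versions rather than deleting them is what makes auditability and rollback, the governance concerns raised earlier in the article, possible.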
