Latest news with #NetskopeThreatLabs


Techday NZ
3 days ago
Netskope unveils AI Copilot & MCP server to advance zero trust
Netskope has announced new artificial intelligence capabilities for its security platform, including an AI-powered assistant aimed at optimising zero trust network access. The company is introducing several advancements to its Netskope One platform, notably the Netskope One Copilot for Private Access, which leverages AI to improve the deployment of universal zero trust network access (UZTNA), and a preview of the Netskope Model Context Protocol (MCP) server, which is designed to connect large language models (LLMs) to Netskope's policy controls.

AI and security
Netskope Threat Labs reports that shadow AI – the unsanctioned use of AI applications by employees – now constitutes the majority of AI usage within enterprises. This growth is attributed to the proliferation of SaaS AI applications, on-premises AI deployments, and custom AI tools. These developments have led to increased demand for adaptive security solutions that allow businesses to use AI tools securely and effectively.

The new AI features are said to enable safe user access to AI-driven applications and mitigate risks associated with the adoption and creation of AI software. According to the company, the platform provides insights into sensitive data being processed by LLMs and uses AI models to assess risks, aiding in the implementation of context-based decisions around application choices and policy configurations. The Netskope One platform applies zero trust principles, powered by SkopeAI, a suite of proprietary AI technologies, to support secure connectivity for remote workers, data security, and threat mitigation.

Enhancing zero trust network access
The primary addition, Netskope One Copilot for Private Access, is designed to tackle challenges associated with traditional ZTNA, such as complicated policy design, excessive and broad access rules, and the risk of policy sprawl. By employing AI, it automates the recommendation of granular policies for both newly discovered and existing applications. This extends Netskope's UZTNA solution and is intended to go beyond mere access brokering, offering continuous enforcement of policies, protection against threats, integrated data safeguards, monitoring of system performance, and a wider range of access controls.

Industry analysts have repeatedly highlighted Netskope for its capabilities in ZTNA. The company has been named a Leader in Gartner's Magic Quadrant for Security Service Edge (SSE) for four consecutive years, and topped the Critical Capabilities for SSE report in the Private Application Access Use Case, which specifically addresses ZTNA functions. Netskope One Copilot for Private Access is available to current customers and supplements other AI Copilots offered by the company, such as the Copilot for Cloud Confidence Index, with additional AI Copilots in development.

MCP server preview
Netskope is also releasing a preview of its Model Context Protocol server. The server connects LLMs - such as Claude Desktop, Microsoft Copilot, Google Vertex, and Amazon Bedrock - directly to Netskope One platform capabilities. This connection is intended to help enterprises use LLMs securely by arming them with the necessary policy context and access controls. The MCP server is built on an open protocol and functions as a bridge between LLMs and Netskope Management APIs, allowing LLMs to gain situational awareness from a customer's environment for better analysis and automation.
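Netskope has not published the internals of its MCP server, but MCP itself is an open protocol with public SDKs, and the bridge pattern described above is straightforward to sketch. Below is a minimal, hypothetical MCP server exposing one policy-context tool, written against the open-source MCP Python SDK; the tool name, tenant URL, and auth header value are invented for illustration and are not Netskope's published interface:

    # Minimal MCP server sketch: exposes one tool an LLM client can call to pull
    # policy context from a management API. Requires: pip install mcp httpx
    # The endpoint, tool, and token below are hypothetical placeholders.
    import httpx
    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("policy-context")

    @mcp.tool()
    def list_noncompliant_clients(min_version: str) -> str:
        """Return clients running an agent version older than min_version."""
        resp = httpx.get(
            "https://tenant.example.com/api/v2/clients",   # placeholder tenant URL
            params={"max_version": min_version},
            headers={"Netskope-Api-Token": "REDACTED"},    # illustrative auth header
        )
        resp.raise_for_status()
        return resp.text  # the LLM receives this text as tool output

    if __name__ == "__main__":
        mcp.run()  # serves over stdio so an MCP client (e.g. Claude Desktop) can attach

An MCP-capable client connected to a server like this can then answer prompts such as the client-version use case below by calling the tool, rather than guessing.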
Use cases provided by Netskope include:
- Client version analysis for device management teams to identify and address non-compliant clients
- Incident analysis tools to support security teams during Data Loss Prevention incidents, providing summary reports and investigation recommendations
- Incident status analysis to help incident managers identify delays or bottlenecks in resolution workflows
- Insider risk analysis for security administrators to prioritise users deemed at higher risk for expedited intervention

In its announcement, the company said: "Netskope's differentiated AI security capabilities not only enable safe user access to AI applications, but also manage the emerging risks introduced by the adoption and building of AI applications, provide a deep understanding of sensitive data being fed into LLMs, and assess risk using AI models to make context-based decisions on application selection and policy setting. The Netskope One platform and its purpose-built architecture apply zero trust principles and leverage SkopeAI, Netskope's suite of proprietary AI innovations and patented technology, to optimise access, protect data, stop threats, and enable secure, work-from-anywhere connectivity."

The preview of the MCP server comes with several sample prompts tailored to address real-time AI security scenarios, expanding the platform's support for enterprise AI integration and safeguarding. Netskope states that these additions are intended to provide viable alternatives to existing VPN and NAC solutions, and to address both current and evolving security challenges amid increasing AI adoption in the workplace.


Techday NZ
5 days ago
Shadow AI surge heightens enterprise security risks, study finds
Netskope research has highlighted a significant increase in generative AI (genAI) platform and AI agent usage in workplaces, amplifying data security concerns, especially around unsanctioned or "shadow AI" applications. The findings, detailed in the latest Netskope Threat Labs Cloud and Threat Report, show a 50% rise in genAI platform users among enterprise employees over the three months to May 2025. This increase comes as enterprises broadly enable sanctioned SaaS genAI apps and agentic AI but face growing security challenges as shadow AI persists.

Growth of shadow AI
The report indicates that while organisations continue efforts to safely adopt genAI across SaaS and on-premises environments, over half of all AI application adoption is now estimated to fall under the shadow AI category. These applications are not officially sanctioned by IT departments, raising concerns about uncontrolled access to sensitive data and potential compliance issues.

GenAI platforms, which provide foundational infrastructure for organisations to develop bespoke AI applications and agents, are cited as the fastest-growing segment of shadow AI. In just three months, uptake among end users rose by 50%, and network traffic linked to these platforms grew by 73%. In May, 41% of surveyed organisations were using at least one genAI platform, with Microsoft Azure OpenAI, Amazon Bedrock, and Google Vertex AI the most commonly adopted.

"The rapid growth of shadow AI places the onus on organisations to identify who is creating new AI apps and AI agents using genAI platforms and where they are building and deploying them," said Ray Canzanese, Director of Netskope Threat Labs. "Security teams don't want to hamper employee end users' innovation aspirations, but AI usage is only going to increase. To safeguard this innovation, organisations need to overhaul their AI app controls and evolve their DLP policies to incorporate real-time user coaching elements."

On-premises AI and agentic use
Organisations are increasingly exploring on-premises AI solutions, from deploying genAI on local GPU resources to integrating on-premises tools with SaaS applications. The report finds that 34% of organisations are using large language model (LLM) interfaces locally, with Ollama showing the highest adoption, followed by LM Studio and Ramalama at lower levels. Employee use of AI resources is accelerating through downloads from AI marketplaces such as Hugging Face, observed in 67% of organisations, suggesting widespread experimentation and tool-building among staff. AI agents, which automate tasks and access sensitive enterprise data, are also proliferating: GitHub Copilot is now used in 39% of organisations, and 5.5% report on-premises deployment of agents built from popular frameworks.

"More organisations are starting to use genAI platforms to deploy models for inference because of the flexibility and privacy that these frameworks provide. They essentially give you a single interface through which you can use any model you want – even your own custom model – while providing you a secure and scalable environment to run your app without worrying about sharing your sensitive data with a SaaS vendor. We are already seeing rapid adoption of these frameworks and expect that to continue into the future, underscoring the importance of continuously monitoring for shadow AI in your environment," said Canzanese.
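The local LLM interfaces the report counts, Ollama chief among them, expose a plain HTTP API on the host, which is what makes this kind of on-premises usage both easy for employees to adopt and possible to monitor. A minimal sketch of such a call, assuming a default Ollama install listening on its standard port with a model already pulled (the model name here is illustrative):

    # Minimal sketch of the local LLM traffic pattern the report describes:
    # a single POST to Ollama's default HTTP endpoint on localhost:11434.
    # Assumes Ollama is running and "llama3" has been pulled locally.
    import json
    import urllib.request

    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps({
            "model": "llama3",
            "prompt": "Summarise this quarter's incident tickets.",
            "stream": False,  # return one JSON object instead of a token stream
        }).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        print(json.loads(resp.read())["response"])

Traffic to this local port never crosses a SaaS boundary, which is the privacy upside Canzanese describes, but it is also invisible to cloud-app controls unless endpoint or network monitoring accounts for it.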
"More people are starting to explore the possibilities that AI agents provide, choosing to either do so on-prem or using genAI platforms. Regardless of the platform chosen, AI agents are typically granted access to sensitive data and permitted to perform autonomous actions, underscoring the need for organisations to shed light on who is developing agents and where they are being deployed, to ensure that they are properly secured and monitored. Nobody wants shadow AI agents combing through their sensitive data," Canzanese added. Shadow AI agents and risks The prevalence of shadow AI agents is a particular concern as they act autonomously and can interact extensively with enterprise data. API traffic analysis revealed that 66% of organisations have users making calls to and 13% to indicating high-volume programmatic access to third-party AI services. "The newest form of shadow AI is the shadow AI agent -- they are like a person coming into your office every day, handling your data, taking actions on your systems, all while not being background checked or having security monitoring in place. Identifying who is using agentic AI and putting policies in place for their use should be an urgent priority for every organisation," said James Robinson, Chief Information Security Officer. Trends in SaaS genAI Netskope's dataset now includes more than 1,550 genAI SaaS applications, a sharp increase from 317 in February. Organisations now employ about 15 distinct genAI apps on average, up two from earlier in the year. Monthly data uploaded to these applications also rose from 7.7 GB to 8.2 GB quarter on quarter. Security teams' efforts to enable and monitor these tools are credited with a shift towards purpose-built suites such as Gemini and Copilot, which are designed to integrate with business productivity software. However, general-purpose chatbot ChatGPT has seen its first decrease in enterprise adoption since tracking began in 2023. Meanwhile, other genAI applications, including Anthropic Claude, Perplexity AI, Grammarly, Gamma, and Grok, have all recorded gains, with Grok also appearing in the top 10 most-used apps list for the first time. Guidance for security leaders Given the accelerating complexity of enterprise AI use, Netskope advises security leaders to assess which genAI applications are in use, strengthen application controls, conduct inventories of any local infrastructure, and ensure continuous monitoring of AI activity. Collaboration with employees experimenting with agentic AI is also recommended to develop practical policies and mitigate risks effectively.


Techday NZ
29-04-2025
Netskope One upgrades boost AI data protection & visibility
Netskope has announced new advancements to its Netskope One platform aimed at broadening AI security coverage, including enhancements to its data security posture management (DSPM) features and protections for private applications. These updates come as enterprises continue to expand their use of artificial intelligence applications, creating a more intricate digital landscape and compounding security challenges.

While several security vendors have focused on facilitating safe user access to AI tools, Netskope said its approach is centred on understanding and managing the risks posed by the widespread adoption and development of AI applications. This includes tracking sensitive data entering large language models (LLMs) and assessing risks associated with AI models to inform policy decisions. The Netskope One platform, powered by the company's SkopeAI technology, provides protection for a range of AI use cases. It focuses on safeguarding AI use by monitoring users, agents, data, and applications, providing complete visibility and real-time contextual controls across enterprise environments.

According to research from Netskope Threat Labs in its 2025 Generative AI Cloud and Threat Report, organisations saw a thirtyfold increase in the volume of data sent to generative AI (genAI) applications by internal users over the past year. The report noted that much of this increase can be attributed to "shadow AI" usage, where employees use personal accounts to access genAI tools at work. Findings show that 72% of genAI users continue to use personal accounts for workplace interaction with applications such as ChatGPT, Google Gemini, and Grammarly. The report underscored the need for a cohesive and comprehensive approach to securing all dimensions of AI within business operations.

Netskope's latest platform improvements include new DSPM capabilities, giving organisations expanded end-to-end oversight and control of data stores used for training both public and private LLMs. These enhancements allow organisations to prevent sensitive or regulated data from mistakenly being used in LLM training or fine-tuning, whether accessed directly or via Retrieval-Augmented Generation (RAG) techniques. DSPM plays a key role in highlighting at-risk structured and unstructured data across SaaS, IaaS, PaaS, and on-premises infrastructure.

The strengthened DSPM also enables organisations to assess AI risk in the context of their data, leveraging classification capabilities powered by Netskope's data loss prevention (DLP) engine and exposure assessments. Security teams are then able to identify priority risks more efficiently and adopt policies better aligned with those risks. Policy-driven AI governance is further facilitated by Netskope One, which now automates the detection and enforcement of rules about what data can be used in AI, depending on data classification, source, or specific use. When combined with inline enforcement controls, this provides greater assurance that only authorised data is involved in model training, inference, or responding to prompts.

Sanjay Beri, Chief Executive Officer of Netskope, said, "Organisations need to know that the data feeding into any part of their AI ecosystem is safe throughout every phase of the interaction, recognizing how that data can be used in applications, accessed by users, and incorporated into AI agents. In conversations I've had with leaders throughout the world, I'm consistently answering the same question: 'How can my organisation fast track the development and deployment of AI applications to support the business without putting company data in harm's way at any point in the process?' Netskope One takes the mystery out of AI, helping organisations to take their AI journeys driven by the full context of AI interactions and protecting data throughout."

Customers are currently using the Netskope One platform to enable business use of AI while maintaining security. With these updates, customers can secure AI across almost any scenario in their AI adoption journey. Using the new capabilities, organisations can form a consistent basis for AI readiness by understanding what data is used to train LLMs, whether through public generative AI platforms or custom-built models. The platform supports security and trust through discovery, classification, and labelling of data, and by enforcing DLP policies. This helps prevent data poisoning and ensures appropriate data governance throughout the lifecycle.

Netskope One also provides organisations with a comprehensive overview of AI activity within the enterprise. Security teams are able to monitor user behaviour, track both personal and enterprise-sanctioned application usage, and protect sensitive information across both managed and unmanaged environments. The Netskope Cloud Confidence Index (CCI) provides structured risk analyses across more than 370 genAI applications and over 82,000 SaaS applications, giving organisations better foresight on risks such as data use, third-party sharing, and model training practices.

Additionally, security teams can apply granular protection through adaptive risk context. This enables policy enforcement beyond simple permissions, implementing controls based on user behaviour and data sensitivity, and mitigating "shadow AI" by directing users toward approved platforms such as Microsoft Copilot and ChatGPT Enterprise. Actions such as uploading, downloading, copying, and printing within AI applications can be controlled to lower the risk profile, and the advanced DLP can monitor both prompts and AI-generated responses to prevent unintentional exposure of sensitive or regulated data.
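As a concrete illustration of the classification-gated governance described above, here is a minimal sketch of the underlying idea: a rule that decides whether a data store may feed LLM training based on its classification label. The labels, rule, and store names are invented for the example and do not reflect Netskope's implementation:

    # Sketch of classification-gated AI data governance: block stores whose
    # label falls outside an allowed set before they reach training or RAG.
    from dataclasses import dataclass

    @dataclass
    class DataStore:
        name: str
        classification: str  # e.g. "public", "internal", "confidential", "regulated"
        source: str          # e.g. "saas", "iaas", "on-prem"

    # assumed policy: only public or internal data may feed training or RAG
    ALLOWED_FOR_TRAINING = {"public", "internal"}

    def allowed_for_llm_training(store: DataStore) -> bool:
        """Return True if the store's label permits use in model training."""
        return store.classification in ALLOWED_FOR_TRAINING

    stores = [
        DataStore("marketing-assets", "public", "saas"),
        DataStore("patient-records", "regulated", "on-prem"),
    ]
    for s in stores:
        verdict = "allow" if allowed_for_llm_training(s) else "block"
        print(f"{s.name}: {verdict}")

In a real deployment the labels would come from an automated classification engine rather than hand-assigned fields, but the gating decision reduces to a check of this shape.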