
Netskope One upgrades boost AI data protection & visibility
Netskope has announced new advancements to its Netskope One platform aimed at broadening AI security coverage, including enhancements to its data security posture management (DSPM) features and protections for private applications.
These updates come as enterprises continue to expand their use of artificial intelligence applications, creating a more complex digital landscape and, with it, harder security challenges. While several security vendors have focused on facilitating safe user access to AI tools, Netskope said its approach centres on understanding and managing the risks posed by the widespread adoption and development of AI applications. This includes tracking sensitive data entering large language models (LLMs) and assessing the risks associated with AI models to inform policy decisions.
The Netskope One platform, powered by the company's SkopeAI technology, provides protection for a range of AI use cases. It focuses on safeguarding AI use by monitoring users, agents, data, and applications, providing complete visibility and real-time contextual controls across enterprise environments.
According to research from Netskope Threat Labs in its 2025 Generative AI Cloud and Threat Report, organisations saw a thirtyfold increase in the volume of data sent to generative AI (genAI) applications by internal users over the past year. The report noted that much of this increase can be attributed to "shadow AI" usage, where employees use personal accounts to access genAI tools at work. Findings show that 72% of genAI users continue to use personal accounts to access applications such as ChatGPT, Google Gemini, and Grammarly at work. The report underscored the need for a cohesive and comprehensive approach to securing all dimensions of AI within business operations.
Netskope's latest platform improvements include new DSPM capabilities, giving organisations expanded end-to-end oversight and control of data stores used for training both public and private LLMs. These enhancements allow organisations to prevent sensitive or regulated data from mistakenly being used in LLM training or fine-tuning, whether accessed directly or via Retrieval-Augmented Generation (RAG) techniques. DSPM plays a key role in highlighting at-risk structured and unstructured data across SaaS, IaaS, PaaS, and on-premises infrastructure.
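As a rough illustration of the kind of pre-training screening described above, the sketch below quarantines documents that match simple sensitive-data patterns before they can reach a training or RAG corpus. The patterns and function names are illustrative stand-ins, not Netskope's implementation; a real DSPM engine uses far richer classification than two regexes.

```python
import re

# Hypothetical patterns standing in for a DLP classifier (illustrative only).
SENSITIVE_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def screen_for_training(documents):
    """Split documents into those safe to ingest and those to quarantine."""
    approved, quarantined = [], []
    for doc in documents:
        hits = [name for name, pat in SENSITIVE_PATTERNS.items()
                if pat.search(doc)]
        # Any pattern hit keeps the document out of the training corpus.
        (quarantined if hits else approved).append((doc, hits))
    return approved, quarantined

approved, quarantined = screen_for_training([
    "Q3 roadmap notes for the platform team",
    "Contact jane.doe@example.com about the renewal",
])
# The second document is quarantined because it contains an email address.
```

The same gate can sit in front of a RAG ingestion pipeline, so that quarantined items are reviewed or redacted rather than silently embedded.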
The strengthened DSPM also enables organisations to assess AI risk in the context of their data, leveraging classification capabilities powered by Netskope's data loss prevention (DLP) engine and exposure assessments. Security teams are then able to identify priority risks more efficiently and adopt policies that are better aligned with those risks.
Policy-driven AI governance is further supported by Netskope One, which now automates the detection and enforcement of rules about what data can be used in AI, based on the data's classification, source, or specific use. Combined with inline enforcement controls, this provides greater assurance that only authorised data is involved in model training, inference, or responses to prompts.
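A classification-and-use rule of the kind described can be pictured as a default-deny policy table. The sketch below is a minimal illustration under assumed labels ("public", "internal", "restricted") and uses ("training", "inference", "rag"); it is not Netskope's rule syntax.

```python
from dataclasses import dataclass

@dataclass
class DataAsset:
    classification: str  # e.g. "public", "internal", "restricted"
    source: str          # e.g. "crm", "wiki" (illustrative labels)
    use: str             # e.g. "training", "inference", "rag"

# Illustrative allow-list: each rule permits one (classification, use) pair;
# anything not explicitly matched is denied.
ALLOW_RULES = [
    {"classification": "public", "use": "training"},
    {"classification": "internal", "use": "inference"},
]

def is_allowed(asset: DataAsset) -> bool:
    """Default-deny check: permit only explicitly allowed combinations."""
    return any(asset.classification == r["classification"]
               and asset.use == r["use"] for r in ALLOW_RULES)

print(is_allowed(DataAsset("public", "wiki", "training")))     # True
print(is_allowed(DataAsset("restricted", "crm", "training")))  # False
```

The default-deny shape matters: a new data store or a new AI use case is blocked from training pipelines until someone writes a rule for it, rather than allowed until someone notices.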
Sanjay Beri, Chief Executive Officer of Netskope, said, "Organisations need to know that the data feeding into any part of their AI ecosystem is safe throughout every phase of the interaction, recognizing how that data can be used in applications, accessed by users, and incorporated into AI agents. In conversations I've had with leaders throughout the world, I'm consistently answering the same question: 'How can my organisation fast track the development and deployment of AI applications to support the business without putting company data in harm's way at any point in the process?' Netskope One takes the mystery out of AI, helping organisations to take their AI journeys driven by the full context of AI interactions and protecting data throughout."
Customers are currently using the Netskope One platform to enable business use of AI while maintaining security. With these updates, customers can secure AI across almost any scenario in their AI adoption journey.
Using the new capabilities, organisations can establish a consistent basis for AI readiness by understanding what data is used to train LLMs, whether through public generative AI platforms or custom-built models. The platform supports security and trust through discovery, classification, and labelling of data, and through enforcement of DLP policies. This helps prevent data poisoning and ensures appropriate data governance throughout the lifecycle.
Netskope One also provides organisations with a comprehensive overview of AI activity within the enterprise. Security teams are able to monitor user behaviour, track both personal and enterprise-sanctioned application usage, and protect sensitive information across both managed and unmanaged environments. The Netskope Cloud Confidence Index (CCI) provides structured risk analyses across more than 370 genAI applications and over 82,000 SaaS applications, giving organisations better insight into risks such as data use, third-party sharing, and model training practices.
Additionally, security teams can employ granular protection through adaptive risk context. This enables policy enforcement beyond simple permissions, implementing controls based on user behaviour and data sensitivity, and mitigating "shadow AI" by directing users toward approved platforms like Microsoft Copilot and ChatGPT Enterprise. Actions such as uploading, downloading, copying, and printing within AI applications can be controlled to lower the risk profile, and the advanced DLP can monitor both prompts and AI-generated responses to prevent unintentional exposure of sensitive or regulated data.
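Inline DLP on both prompts and responses, as described above, amounts to inspecting traffic in both directions of an AI interaction. The toy sketch below redacts anything resembling a payment card number before it crosses the boundary in either direction; the pattern and redaction format are illustrative assumptions, not Netskope's engine.

```python
import re

# Toy stand-in for an inline DLP rule: 13-16 digits, optionally separated
# by spaces or hyphens (illustrative, far looser than real card validation).
CARD = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def redact(text: str) -> str:
    """Mask anything resembling a card number before it leaves the boundary."""
    return CARD.sub("[REDACTED]", text)

def guarded_exchange(prompt, model_call):
    safe_prompt = redact(prompt)   # outbound: user prompt to the model
    response = model_call(safe_prompt)
    return redact(response)        # inbound: model response to the user

# Hypothetical model stub that simply echoes its input.
echo = lambda p: f"You said: {p}"
print(guarded_exchange("My card is 4111 1111 1111 1111", echo))
# prints "You said: My card is [REDACTED]"
```

Checking the response path as well as the prompt path is the point: sensitive data can surface in model output even when the prompt itself was clean.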
Related Articles


Otago Daily Times - an hour ago
AI use intensifying scams: Netsafe
Artificial intelligence (AI) is enabling fraudsters to devise ever-slicker romance scams, Netsafe says. The online safety agency recently presented updated resources as part of its Get Set Up for Safety programme, aimed at protecting older people from an upswing in sophisticated digital cons.

Business development manager Sarah Bramhall said scammers might spend weeks or months building online relationships before seeking money. "Scammers most often use the techniques or the emotions of trust, fear and hope, usually in a combination. So they will tap into human emotions."

Exploiting lonely or companionship-seeking victims, scammers try to stop them sharing information with friends or family. "They will try to keep them isolated so that they don't tell anyone, because obviously otherwise friends and family will pick up on something happening."

At some point the scammer will begin requesting money, sometimes in large or gradually increasing amounts. These requests could be couched in ways that played on people's natural desire to be kind or helpful. "Usually it presents itself in something like a medical requirement, they need to travel, they have got family that are sick. Those sorts of things that really play on emotions."

Kind-hearted people who felt they had developed a bond would want to help that person out. "Most of the time, people really don't recognise that they are being scammed in those scenarios. It is really quite hard for even support workers and family to get them to come to that realisation, because they suffer heartbreak, essentially."

Generative AI tools were enabling scammers to polish their English, generate fake images and create believable back-stories. Poor grammar or language used to be a red flag that a message was a scam. "That is getting harder to pick up on now," she said.
While AI was opening up many useful and beneficial possibilities, it was important to be mindful of some of its drawbacks, in particular large language models such as ChatGPT, which could produce "hallucinations" that seem plausible but are falsehoods. "I just say 'sometimes AI can lie'."

Netsafe has refreshed its portfolio of resources to help organisations and individuals navigate the online digital realm safely. The material tackles challenges such as spotting scams, safer online dating, privacy settings, securing accounts and verifying requests for personal information. Get Set Up for Safety offers a wide range of resources, including checklists, fact sheets, videos and interactive activities.

• To find out more, visit


Techday NZ - 2 days ago
HubSpot launches direct connector for ChatGPT CRM insights
HubSpot has introduced a direct integration between its customer relationship management platform and ChatGPT, allowing users to analyse CRM data in ChatGPT via natural language queries without any plugins or development work. The newly launched connector is designed for small and mid-sized businesses, requiring no dedicated data science teams and featuring a setup process that takes only minutes.

HubSpot has underscored that privacy is an integral part of the connector, with CRM data remaining protected and not being used for AI model training within ChatGPT Team, Enterprise, or Edu environments.

HubSpot stated that prior to this introduction, CRM data had been largely inaccessible to non-technical users seeking to use artificial intelligence for business insights. With the direct connector, marketers, sales personnel, and support teams can now ask ChatGPT to perform tasks such as surfacing pipeline risks, creating custom nurture campaigns, or forecasting ticket volumes, and then act on those insights within the HubSpot platform. The connector is automatically available to all HubSpot customers across all subscription tiers, provided they have eligible ChatGPT plans.

HubSpot reports that over 250,000 businesses currently rely on its system as their primary source of customer data, encompassing marketing, sales, and customer service functions. HubSpot said this aggregated view of the customer journey offers an advantage, especially given the increasing role of artificial intelligence in business operations. According to HubSpot, more than 75% of its customers already use ChatGPT. The company said the new connector is intended to let these users more easily apply advanced analysis and research capabilities to their own proprietary customer data and translate insights into action within HubSpot's tools, positioning it as a companion for common business use cases.
Within ChatGPT, marketers are able to request insights such as: "find my highest-converting cohorts from recent contacts and create a tailored nurture sequence to boost engagement", and then use those findings to launch automated workflows in HubSpot. Sales teams can ask ChatGPT to segment target companies by annual revenue, industry, and technology stack, and then identify top opportunities for enterprise expansion for further prospecting within HubSpot. Similarly, customer success teams can prompt: "identify inactive companies with growth potential and generate targeted plays to re-engage and revive pipeline," and then action those strategies in the Customer Success Workspace within HubSpot to drive retention. Support teams can ask: "analyse seasonal patterns in ticket volume by category to forecast support team staffing needs for the upcoming quarter," and activate the Breeze Customer Agent in HubSpot to help manage anticipated support volume spikes.

Nate Gonzalez, Head of Business Products at OpenAI, stated: "Launching the HubSpot deep research connector means businesses and their employees get faster, better insights because ChatGPT has more context. We're thrilled to work together to bring powerful AI to many of today's most important workflows."

Colin Johnson, Senior Manager, CRM at Youth Enrichment Brands, commented: "The HubSpot connector is like having an extra analyst on the team, empowering sales reps to identify risks, opportunities, and next best actions. For a non-technical user, the fact that it's easy to use and talks directly to my data is huge."

Karen Ng, SVP of Product and Partnerships at HubSpot, said: "We're building tools that help businesses lead through the AI shift, not just adapt to it. By connecting HubSpot CRM data directly to ChatGPT, even small teams without time or data resources can run deep analysis and take action on those insights - fueling better outcomes across marketing, sales, and service."
The setup process is described as straightforward. HubSpot customers with administrative privileges can enable the connector by accessing ChatGPT, selecting the HubSpot deep research connector, choosing HubSpot as a data source, and authenticating their account. Once set up, any user within the organisation can toggle the feature on, sign in, and begin querying their data through natural language.

HubSpot emphasised security considerations in the deployment of the connector. Users will only have access to CRM data permitted by their roles within HubSpot; individual sales representatives, for example, will only be able to view pipeline data for deals they have permission to see.

The connector will be available automatically to all eligible HubSpot customers with paid ChatGPT plans: in the European Union this covers Team, Enterprise, and Edu subscriptions, and in other regions Team, Enterprise, Pro, Plus, and Edu subscriptions. ChatGPT responds in the language used in the prompt.


Techday NZ - 4 days ago
Skymel launches ARIA to unite major AI models in one platform
Skymel has launched ARIA, an artificial intelligence assistant that coordinates multiple major AI models to provide unified responses to user queries. ARIA combines the capabilities of well-known AI models such as ChatGPT, Claude, Gemini, and Perplexity into a single platform, allowing users to receive answers that draw on the collective strengths of these systems without the need to operate each model separately. The service aims to address challenges that users encounter when attempting to source comprehensive answers from different AI models, including the need to create complex prompts, switch between services, or handle multiple subscriptions.

Neetu Pathak, Chief Executive Officer and Co-Founder of Skymel, said: "ARIA is a fundamentally different kind of AI assistant. For too long, people have had to jump between ChatGPT, Claude, Gemini, and others, often feeding the output of one model into another, just to get a complete answer. With ARIA, you simply ask your question or upload your file, and our system instantly orchestrates a custom workflow across every leading AI model, delivering a unified, accurate result. No technical expertise is required and there is no more guesswork. It's just the best of AI, every time."

At the core of ARIA is an orchestration engine that analyses the user's query, selects the most suitable AI models for each aspect of the task, and processes a sequence of outputs across different models in real time. The final result is a single response, synthesised from the work of each model involved. Key features include automatic model selection, where ARIA divides tasks and assigns individual steps to the model best aligned with each requirement. The platform then manages the process in sequence, validating and refining outputs in pursuit of more accurate results. ARIA also learns continually from user interactions to optimise its model selection and workflow logic over time.
Users interact with a streamlined interface, receiving comprehensive answers without needing to know which AI models are at work behind the scenes. The service is offered for a flat monthly rate, intended to remove the need for multiple logins and the hidden costs often associated with access to several AI platforms.

Expanding on Skymel's vision for AI accessibility, Pathak said: "AI should work for you, not the other way around. With ARIA, we're making advanced AI accessible to everyone, with no prompt engineering, no technical barriers, just effortless intelligence that adapts to your needs. This is the future of AI assistance."

Skymel has opened a private beta for early adopters who wish to take part in shaping the assistant's ongoing development. In addition, the company plans to launch its Orchestrator Agent SDK and API, providing developers with tools to integrate ARIA's adaptive AI pipeline capabilities into their own applications.