
Netskope One upgrades boost AI data protection & visibility
These updates come as enterprises continue to expand their use of artificial intelligence applications, creating a more intricate digital landscape and compounding its security challenges. While several security vendors have focused on enabling safe user access to AI tools, Netskope said its approach centres on understanding and managing the risks posed by the widespread adoption and development of AI applications. This includes tracking sensitive data entering large language models (LLMs) and assessing the risks associated with AI models to inform policy decisions.
The Netskope One platform, powered by the company's SkopeAI technology, provides protection for a range of AI use cases. It focuses on safeguarding AI use by monitoring users, agents, data, and applications, providing complete visibility and real-time contextual controls across enterprise environments.
According to research from Netskope Threat Labs in its 2025 Generative AI Cloud and Threat Report, organisations saw a thirtyfold increase in the volume of data sent to generative AI (genAI) applications by internal users over the past year. The report noted that much of this increase can be attributed to "shadow AI" usage, where employees use personal accounts to access genAI tools at work. Findings show that 72% of genAI users continue to use personal accounts for workplace interaction with applications such as ChatGPT, Google Gemini, and Grammarly. The report underscored the need for a cohesive and comprehensive approach to securing all dimensions of AI within business operations.
Netskope's latest platform improvements include new data security posture management (DSPM) capabilities, giving organisations expanded end-to-end oversight and control of the data stores used to train both public and private LLMs. These enhancements allow organisations to prevent sensitive or regulated data from mistakenly being used in LLM training or fine-tuning, whether accessed directly or via retrieval-augmented generation (RAG) techniques. DSPM plays a key role in highlighting at-risk structured and unstructured data across SaaS, IaaS, PaaS, and on-premises infrastructure.
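To make the idea concrete, the sketch below shows one way a DSPM-style check might flag regulated content in files before they are admitted to an LLM training corpus. This is a minimal illustration, not Netskope's implementation: the detectors are simple regular expressions standing in for the far richer classifiers a real DSPM or DLP engine would use.

```python
import re
from pathlib import Path

# Illustrative detectors only; a production engine uses trained
# classifiers, exact-match fingerprints, and many more detectors.
DETECTORS = {
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
}

def classify(path: Path) -> set:
    """Return the set of sensitive-data labels found in a text file."""
    text = path.read_text(errors="ignore")
    return {label for label, rx in DETECTORS.items() if rx.search(text)}

def admissible_for_training(path: Path) -> bool:
    """Admit a file to the training corpus only if no detector fires."""
    labels = classify(path)
    if labels:
        print(f"Blocked {path}: contains {sorted(labels)}")
    return not labels
```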
The strengthened DSPM also enables organisations to assess AI risk in the context of their data, leveraging classification capabilities powered by Netskope's data loss prevention (DLP) engine and exposure assessments. Security teams are then able to identify priority risks more efficiently and adopt policies that are better aligned with those risks.
Policy-driven AI governance is further facilitated by Netskope One, which now automates the detection and enforcement of rules about what data can be used in AI, depending on the data's classification, source, or specific use. When combined with inline enforcement controls, this provides greater assurance that only authorised data is involved in model training, inference, or responding to prompts.
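As a rough illustration of how such policy rules might be expressed, the following sketch gates an AI use case on a data asset's classification, source, and intended use. The rule shapes and labels are hypothetical, chosen only to mirror the classification/source/use dimensions described above, not Netskope's actual policy engine.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DataAsset:
    classification: str  # e.g. "public", "internal", "regulated"
    source: str          # e.g. "wiki", "crm", "s3-archive"

# (classification, use) pairs permitted by policy; labels are illustrative.
ALLOWED_USES = {
    ("public", "training"),
    ("public", "rag"),
    ("internal", "rag"),
}

# Sources that must never feed model training, regardless of labels.
TRAINING_DENYLIST = {"crm"}

def permit(asset: DataAsset, use: str) -> bool:
    """Return True only if policy allows this asset for this AI use."""
    if use == "training" and asset.source in TRAINING_DENYLIST:
        return False
    return (asset.classification, use) in ALLOWED_USES

assert permit(DataAsset("internal", "wiki"), "rag")
assert not permit(DataAsset("regulated", "s3-archive"), "training")
```

In practice, a decision function of this shape could be invoked inline, at the moment data is about to enter a training job, a RAG index, or a prompt.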
Sanjay Beri, Chief Executive Officer of Netskope, said, "Organisations need to know that the data feeding into any part of their AI ecosystem is safe throughout every phase of the interaction, recognizing how that data can be used in applications, accessed by users, and incorporated into AI agents. In conversations I've had with leaders throughout the world, I'm consistently answering the same question: 'How can my organisation fast track the development and deployment of AI applications to support the business without putting company data in harm's way at any point in the process?' Netskope One takes the mystery out of AI, helping organisations to take their AI journeys driven by the full context of AI interactions and protecting data throughout."
Customers are currently using the Netskope One platform to enable business use of AI while maintaining security. With these updates, customers can secure AI across almost any scenario in their AI adoption journey.
Using the new capabilities, organisations can establish a consistent foundation for AI readiness by understanding what data is used to train LLMs, whether through public generative AI platforms or custom-built models. The platform supports security and trust through the discovery, classification, and labelling of data, and by enforcing DLP policies. This helps prevent data poisoning and ensures appropriate data governance throughout the lifecycle.
Netskope One also provides organisations with a comprehensive overview of AI activity within the enterprise. Security teams are able to monitor user behaviour, track both personal and enterprise-sanctioned application usage, and protect sensitive information across both managed and unmanaged environments. The Netskope Cloud Confidence Index (CCI) provides structured risk analyses across more than 370 genAI applications and over 82,000 SaaS applications, giving organisations better foresight on risks such as data use, third-party sharing, and model training practices.
Additionally, security teams can employ granular protection through adaptive risk context. This enables policy enforcement beyond simple permissions, implementing controls based on user behaviour and data sensitivity, and mitigating "shadow AI" by directing users toward approved platforms like Microsoft Copilot and ChatGPT Enterprise. Actions such as uploading, downloading, copying, and printing within AI applications can be controlled to lower the risk profile, and the advanced DLP can monitor both prompts and AI-generated responses to prevent unintentional exposure of sensitive or regulated data.
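The prompt-and-response monitoring described above can be pictured as a screen on both sides of the model call. The sketch below is a hypothetical inline filter, assuming a caller-supplied `model_call` function; the regex detectors again stand in for a real DLP engine's classifiers.

```python
import re

# Illustrative patterns; real inline DLP uses far more robust detection.
SENSITIVE = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),   # US-SSN-like strings
    re.compile(r"\b(?:\d[ -]?){13,16}\b"),  # card-number-like strings
]

def redact(text: str):
    """Mask sensitive spans; return (clean_text, whether_anything_matched)."""
    hit = False
    for rx in SENSITIVE:
        text, n = rx.subn("[REDACTED]", text)
        hit = hit or n > 0
    return text, hit

def guarded_chat(prompt: str, model_call) -> str:
    """Screen the outbound prompt, then screen the model's response."""
    clean_prompt, blocked = redact(prompt)
    if blocked:
        return "Prompt contained sensitive data and was not sent."
    response, _ = redact(model_call(clean_prompt))
    return response

# Example with a stand-in model that simply echoes the prompt:
print(guarded_chat("Summarise this: card 4111 1111 1111 1111", lambda p: p))
```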
Related Articles


Techday NZ
13 hours ago
AI at work: Five hard truths every business leader needs to hear
If you're feeling behind on AI, you're not alone. According to Talent's latest AI survey, nearly 48% of organisations say they're still in the experimental or pilot phase of AI adoption. That figure might sound like a red flag, but according to their experts it's a natural and necessary step. In their most recent webinar, 'What's next: How is AI really changing the way we work?', they unpacked the realities of AI adoption with two sharp minds in tech and recruitment: JP Browne, Practice Lead at Talent Auckland, and Jack Jorgensen, General Manager – Data, AI & Innovation at Avec, their IT consultancy arm. Together, they explored the real blockers, risks, and opportunities leaders need to wrap their heads around in 2025.

1. Most companies are still figuring it out
The gap between AI hype and delivery is wide, and tinkering with tools like ChatGPT doesn't mean your business is ready to run AI in production. As Jack points out, "There's a big difference between punching in a search query and building something deterministic and robust enough to run in enterprise systems […] Having organisations stuck in that pilot stage isn't a bad thing. It means they're finding the limitations of the tech and discovering what it can actually do well." The main takeaway both experts emphasised was: don't rush to a "full rollout." Use the pilot phase to build guardrails, clean up your data, and decide what AI is actually for in your business.

2. Executive urgency doesn't equal ownership
Talent's recent AI survey found that in 31% of organisations, IT or technology departments are seen as the primary drivers of AI adoption. Yet Jack has observed that "IT isn't driving AI, they're just putting up the guardrails. However, because execs don't know who should own it, they're lumping it in tech's lap." According to JP, "For the first time ever, I've got IT leaders saying, 'We can't implement what you want until we've fixed security and infrastructure.'" 41% of leaders say their biggest blockers are lack of strategy and unclear goals. Execs want AI yesterday, but without a clear owner or roadmap, most strategies stall. The result? IT teams are stuck between enabling the business and playing the bad guy. And without a cohesive plan, budgets dry up fast.

3. People are nervous
In the webinar, JP stated, "You can't bury your head in the sand. AI's affecting workflows and job design, and people are understandably unsure where they fit." Amid such concerns, Jack reassured, "I'm seeing less job displacement and more evolution. But we need to be honest about where AI changes the game." The fear around AI is real, and it isn't just about job losses. Talent's AI survey showed:
60% are concerned about ethics or compliance risks
58% fear loss of human oversight
57% worry about inaccuracy and hallucinations
Business leaders need to address these fears head-on, not just with reassurance but with transparent, actionable education.

4. Security is the #1 barrier – and that's a good thing
46.2% of leaders said security concerns are the top reason they're cautious about AI, and Talent's experts say that's the right instinct. Between real-world data breaches and shadow AI usage, the risks are everywhere. "If I could rate that 46% stat above 100%, I would. Security and compliance should be front of mind. Full stop," shared Jack. From accidental uploads of entire CRMs into ChatGPT (yes, that really happened) to AI-generated code opening up backdoors for attackers, this is not the time to "move fast and break things."

5. AI is quietly changing workforce planning
The shift is subtle, but it's coming. One in four leaders say they're actively exploring how AI might reshape the roles they hire for, and 12.1% of those surveyed are already using it to reduce manual work. As a longtime recruiter in New Zealand, JP shares his observations: "We're not seeing mass hiring of AI engineers, but we are seeing increased demand for system engineers and data people." While AI isn't replacing people yet, it is changing the kind of people you need.

Conclusion: AI readiness is a journey, not a silver bullet
From security fears to strategy gaps, the state of AI in business today is still murky, but that's not a reason to stall. As Jack puts it, "If you're jumping in without looking, you're probably going to break your ankles. But if you plan, pilot, and build velocity? That's the win." So the real question isn't whether AI should be part of your business, because it already is, but whether you know where, how, and why it's showing up. Want to find out what else Talent's AI survey revealed? Access the full report.



RNZ News
a day ago
ChatGPT upgrade 'like talking to a PhD-level expert'
By Lisa Eadicicco, CNN

This week OpenAI launched GPT-5, an upgraded version of the AI model behind ChatGPT that the company claims is significantly faster and more capable than its predecessor. The launch comes as the AI giant faces increased competition and growing concerns about AI's impact on mental health and jobs.

GPT-5, which is available across OpenAI's free and paid tiers, will make ChatGPT better at tasks like writing, coding and answering health-related questions, OpenAI claimed. The company also promised the popular chatbot would hallucinate less often and be less likely to deceive users when it cannot answer a question. New models like GPT-5 matter because they determine how services like ChatGPT function and what new capabilities they will support, making them an essential part of OpenAI's future direction.

OpenAI chief executive Sam Altman said GPT-5 was a "significant step" on the path toward artificial general intelligence, a hypothetical point at which AI can match human-level thinking. "GPT-4 felt like you're kind of talking to a college student," Altman said. "GPT-5 is the first time that it really feels like talking to an expert in any topic, like a PhD-level expert."

In addition to ChatGPT, the new model is also available to developers who build tools and services on OpenAI's technology. Software development was a big area of focus for GPT-5. Altman said the model could generate "an entire piece of software" for you, a practice colloquially known as "vibe coding". In a demonstration, OpenAI showed how ChatGPT could create a website for learning French after being given a prompt asking it to do so. Altman said use cases like this would be a "defining part of the new GPT-5 era".

GPT-5 arrives as coders increasingly use AI to handle parts of their jobs. Meta chief executive Mark Zuckerberg previously said he expected about half the company's code to be written by AI next year, and Microsoft chief executive Satya Nadella has said 20 to 30 percent of Microsoft's code is written by AI. Anthropic chief executive Dario Amodei sparked fears over AI's impact on jobs in May, when he said he believed the technology could lead to a spike in unemployment.

OpenAI's new model also takes aim at another concern around AI: that it can be deceptive, as research from Anthropic and AI research firm Apollo Research has shown, or provide incorrect information. Previously, the system would claim it could do a task it could not complete, or refer to an image that was not there, said OpenAI safety research lead Alex Beutel. With GPT-5, the company said it has trained the model to be honest in these scenarios.

GPT-5 will also be more careful about how it answers potentially harmful queries. While the company did not provide a specific example of how that would look in practice, it said the model would aim to give an answer that is helpful "within the constraints of remaining safe", which may mean giving high-level answers that are not too specific.

The update comes after concerns were raised about people becoming too reliant on AI assistants, particularly emotionally, raising questions about the technology's impact on mental wellbeing. A man in Idaho, for example, told CNN that ChatGPT sparked a spiritual awakening for him after he began discussing topics like religion with the chatbot. His wife said it put a lot of strain on their family.

OpenAI is widely considered the frontrunner in AI thanks to ChatGPT, which is on track to hit 700 million weekly active users, but the competition continues to grow, especially among younger users. Research from web analytics company SimilarWeb suggested AI search app Perplexity, DeepSeek, Anthropic's Claude and xAI's Grok all had higher app usage among 18-34 year olds. Zuckerberg is attempting to snatch up top-tier AI talent with reported multimillion-dollar pay packages to get ahead in the AI race.

The launch also comes during a period of growth and expansion for OpenAI. The company now plays a bigger role in education and government in the United States, recently striking a partnership with classroom software provider Instructure and launching a study mode for ChatGPT. OpenAI has also worked with the Trump administration on projects like the US$500 billion (NZ$840 billion) AI infrastructure project known as Stargate. Altman has appeared in Washington several times, and OpenAI recently confirmed plans to open its first office in the US capital.

- CNN