Varonis, Microsoft deepen partnership to secure enterprise AI

Techday NZ · 15 hours ago
Varonis has entered a strategic partnership with Microsoft aimed at enhancing data security, governance, and compliance for organisations adopting artificial intelligence technologies.
The partnership will build upon existing product offerings that support secure implementation of Microsoft Copilot and deepen the integration between the Varonis Data Security Platform and Microsoft's suite of security tools. This will encompass Microsoft Purview and expand to deliver automated protection for sensitive data within the Microsoft ecosystem and additional environments.
Both companies have committed to an engineering-driven plan focused on addressing the risks posed by unauthorised access to data by AI tools, agents, and large language models (LLMs), a concern that is growing as workplace AI use expands.
Yaki Faitelson, Chief Executive Officer and Co-Founder of Varonis, commented on the development: "Varonis built a world-class SaaS architecture on Microsoft Azure that protects the world's data and accelerates secure AI adoption. We are excited to expand our partnership with Microsoft, combining their innovation in AI with Varonis' deep expertise in data security."
Nick Parker, President of Industry and Partnerships at Microsoft, also reflected on the partnership: "Varonis' SaaS platform integrates the most advanced capabilities in Microsoft Azure. Through our collaboration with Varonis, we are empowering customers to embrace AI securely and confidently with enterprise-wide data security and governance powered by Microsoft Purview and Varonis."
The integration of Varonis with Microsoft Purview is expected to offer unified data classification, permissions enforcement, and policy management capabilities. This will target not only Microsoft 365 and Azure, but is also projected to extend to major SaaS and multi-cloud platforms such as Salesforce, Databricks, and ServiceNow.
Through this collaboration, organisations are expected to improve their ability to proactively reduce risk and streamline compliance efforts, particularly as the deployment of AI-driven and agent-based applications becomes more widespread in enterprise settings.
The announcement highlights the increasing focus on securing data integrity and compliance controls in the context of advanced technologies. The emphasis on automating protection and simplifying rule management for sensitive data reflects industry priorities as organisations continue to deploy AI-centric solutions.
The Varonis and Microsoft partnership signals ongoing cooperation between data security and cloud infrastructure providers as businesses adapt security and governance frameworks for evolving digital and regulatory environments.
Varonis is encouraging organisations to take a data-centric approach to cybersecurity by securing what it calls their "most valuable and vulnerable asset" with its unified data security platform.
The company's platform provides comprehensive protection, enabling businesses to reduce their blast radius and prevent data breaches by securing sensitive information directly at the source.
With increasing threats targeting unstructured data across cloud and on-premises environments, Varonis continues to position its solution as a critical layer of defence in a modern security strategy.

Related Articles

Skills AI-driven shops want to see in developers

Techday NZ · 4 hours ago

Architectural and system design thinking (problem-solving and critical thinking)

As AI becomes more capable of generating code, developers should be both skilled code writers and strategic architects who focus on upfront design and system-level thinking. System architecture skills have become significantly more valuable because AI tools require proper structure, context, and guidance to generate quality code that delivers business value. Effective AI interaction, critical validation of AI-generated outputs, and debugging of AI-specific error patterns all demand strong, continuously updated technical and coding foundations.

Senior engineers now spend their time defining how systems connect to subsystems, establishing business logic, and building high-context environments for AI tools. Developers become orchestrators of code rather than only its writers: doing analysis and planning up front, then reviewing outputs to ensure they don't create technical debt. Well-engineered prompts mirror systems architecture documentation, containing clear functionality statements, domain expertise, and explicit constraints that produce predictable AI outputs.

AI communication and context management (communication and collaboration)

Working effectively with AI requires sophisticated communication skills that dramatically influence output quality. Developers must become proficient in framing problems, providing appropriate context, and structuring interactions with AI systems. This skill becomes critical as teams transition from using AI tools to orchestrating complex AI-driven workflows across the development lifecycle. Modern prompt engineering focuses on designing process-oriented thinking that guides AI through complex tasks by defining clear goals, establishing constraints, and creating effective interaction rules.
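The parallel drawn above between well-engineered prompts and architecture documentation can be pictured as a small template builder. This is an illustrative sketch only; the section names, function, and example values are hypothetical, not drawn from any specific tool described in the article.

```python
# A minimal sketch of a "prompt as architecture document": the prompt is
# assembled from the same sections an architecture document would contain.
# All names and example values here are illustrative.

def build_prompt(functionality: str, domain_context: str, constraints: list[str]) -> str:
    """Assemble a structured prompt with an explicit functionality
    statement, domain context, and enumerated constraints."""
    constraint_lines = "\n".join(f"- {c}" for c in constraints)
    return (
        f"## Functionality\n{functionality}\n\n"
        f"## Domain context\n{domain_context}\n\n"
        f"## Constraints\n{constraint_lines}\n"
    )

prompt = build_prompt(
    functionality="Generate a function that validates ISO-8601 dates.",
    domain_context="Input comes from user-submitted web forms.",
    constraints=[
        "Standard library only",
        "Raise ValueError on invalid input",
        "Include type hints",
    ],
)
print(prompt)
```

Because every request carries the same explicit sections, outputs become easier to compare and review, which is the predictability the article attributes to well-engineered prompts.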
Developers must understand how to provide sufficient context without overwhelming AI systems and learn to iterate on feedback across multiple cycles. As AI agents increasingly participate in software development, teams must architect these interactions strategically, breaking complex problems into manageable chunks and building contextual workflows that align with business objectives.

Ensuring quality & security (adaptability and continuous learning)

As AI takes a more proactive role in software development, companies should develop specialised QA processes tailored to the unique error patterns and risks of AI-generated code. This should include validating AI reasoning processes, employing adversarial testing for both prompts and code, leveraging formal methods for critical components where appropriate, and implementing advanced, defence-in-depth prompt security measures.

Organisations are responding by implementing "prompt security" practices to prevent injection attacks and establishing specialised review processes for AI-generated code. They're creating adversarial testing frameworks that deliberately challenge AI outputs with unusual inputs while maintaining human oversight at critical decision points. This represents a fundamental evolution from traditional debugging approaches to validating AI reasoning processes and ensuring business logic alignment: a necessary adaptation as AI becomes more autonomous in software development workflows.
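The adversarial testing idea described above can be sketched as deliberately feeding an AI-generated function unusual inputs and checking that it fails safely. This is a hypothetical illustration: `parse_age` and the input set are stand-ins invented here, not part of any framework named in the article.

```python
# Illustrative sketch of adversarial testing for AI-generated code.
# `parse_age` stands in for any AI-generated helper under review; the
# goal is to confirm it rejects unusual inputs rather than misbehaving.

def parse_age(raw: str) -> int:
    """Example AI-generated function under test."""
    value = int(raw.strip())
    if not 0 <= value <= 150:
        raise ValueError(f"implausible age: {value}")
    return value

# Inputs chosen to probe edge cases, not typical usage.
adversarial_inputs = ["", "  42  ", "-1", "999", "NaN", "0x1f", "42.0"]

results = {}
for raw in adversarial_inputs:
    try:
        results[raw] = ("ok", parse_age(raw))
    except (ValueError, TypeError) as exc:
        results[raw] = ("rejected", type(exc).__name__)

for raw, outcome in results.items():
    print(f"{raw!r}: {outcome}")
```

A human reviewer then inspects the rejection behaviour: the test run surfaces which unusual inputs were handled safely and which slipped through, mirroring the "human oversight at critical decision points" the article describes.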

GenAI adoption surges in healthcare but security hurdles remain

Techday NZ · 4 hours ago

Ninety-nine percent of healthcare organisations are now making use of generative artificial intelligence (GenAI), according to new global research from Nutanix, but almost all say they face challenges in data security and scaling these technologies to production. The findings are drawn from the seventh annual Healthcare Enterprise Cloud Index (ECI) report by Nutanix, which surveyed 1,500 IT and engineering decision-makers across multiple industries and regions, including the healthcare sector. The research highlights both rapid uptake of GenAI in healthcare settings and significant ongoing barriers around infrastructure and privacy.

GenAI use widespread, but risks loom

Among healthcare organisations surveyed, a striking 99% said they are currently leveraging GenAI applications or workloads, such as AI-powered chatbots, code co-pilots and tools for clinical development automation. This sector now leads all other industries in GenAI adoption, the report found. However, nearly as many respondents (96%) admitted their existing data security and governance were not robust enough to support GenAI at scale. Additionally, 99% say scaling from pilot or development to production remains a serious challenge, with integration into existing IT systems cited as the most significant barrier to wider deployment.

"In healthcare, every decision we make has a direct impact on patient outcomes - including how we evolve our technology stack," said Jon Edwards, Director IS Infrastructure Engineering at Legacy Health. "We took a close look at how to integrate GenAI responsibly, and that meant investing in infrastructure that supports long-term innovation without compromising on data privacy or security. We're committed to modernising our systems to deliver better care, drive efficiency, and uphold the trust that patients place in us."

Patient data privacy and security concerns underpin much of this hesitation.
The number one challenge flagged by healthcare leaders is the task of integrating GenAI with legacy IT infrastructure (79%), followed by the continued existence of data silos (65%) and ongoing obstacles in developing cloud-native applications and containers (59%).

Infrastructure modernisation lags adoption

The report stresses that while GenAI uptake is high, inadequate IT modernisation could impede progress. Scaling modern applications such as GenAI requires updated infrastructure solutions capable of handling complex data security, integrity, and resilience demands. Respondents overwhelmingly agree more must be done in this area. Key findings also indicate that improving foundational data security and governance will remain an ongoing priority. Ninety-six percent agree their organisations could still improve the security of their GenAI models and applications, while fears around using large language models (LLMs), especially with sensitive healthcare data, are prevalent.

Scott Ragsdale, Senior Director, Sales - Healthcare & SLED at Nutanix, described the recent surge in GenAI adoption as a departure from healthcare's traditional technology adoption timeline. "While healthcare has typically been slower to adopt new technologies, we've seen a significant uptick in the adoption of GenAI, much of this likely due to the ease of access to GenAI applications and tools. Even with such large adoption rates by organisations, there continue to be concerns given the importance of protecting healthcare data. Although all organisations surveyed are using GenAI in some capacity, we'll likely see more widespread adoption within those organisations as concerns around privacy and security are resolved."

Nearly all healthcare respondents (99%) acknowledge difficulties in moving GenAI workloads to production, driven chiefly by the challenge of integrating with existing systems.
This indicates that, despite wide experimentation and early deployments, many organisations remain cautious about full-scale rollouts.

Containers and cloud-native trends

In addition to GenAI, the survey found a rapid expansion in the use of application containerisation and Kubernetes deployments across healthcare. Ninety-nine percent of respondents said they are at least in the process of containerising applications, and 92% note distinct benefits from cloud-native application adoption, such as improved agility and security. Container-based infrastructure is viewed as crucial for enabling secure, seamless access to both patient and business data over hybrid and multicloud environments. As a result, many healthcare IT decision-makers are expected to prioritise modern deployment strategies involving containers for both new and existing workloads.

Respondents continue to see GenAI as a path towards improved productivity, automation and efficiency, with major use cases involving customer support chatbots, experience solutions, and code generation tools. Yet the sector is still grappling with the challenges of scale, security, and complexity inherent in these new technologies.

The Nutanix study was conducted by Vanson Bourne in Autumn 2024 and included perspectives from across the Americas, EMEA and Asia-Pacific-Japan.

Cloudflare makes AI crawlers opt-in, giving power to creators

Techday NZ · 5 hours ago
Cloudflare makes AI crawlers opt-in, giving power to creators

Cloudflare has introduced a default setting to block AI crawlers from accessing web content without explicit permission, making it the first internet infrastructure provider to take this step. With this new measure, website owners using Cloudflare's services will have the choice to allow or block AI crawlers, moving from a previous opt-out system to an opt-in approach. This change is designed to address issues concerning the unauthorised scraping and usage of web content by AI companies for purposes such as training and inference, often without the knowledge or compensation of the content creators.

Permission-based controls

Under the new system, AI companies are now required to disclose the purpose of their crawlers, specifying whether they are used for training, inference, or search. This allows website owners to make more informed decisions about which bots may access their data. Cloudflare is also developing a "Pay Per Crawl" feature that will give content creators the ability to request payment from AI companies for access to their content, which could generate new revenue streams for publishers.

Cloudflare's Chief Executive Officer and Co-founder, Matthew Prince, stated: "If the Internet is going to survive the age of AI, we need to give publishers the control they deserve and build a new economic model that works for everyone – creators, consumers, tomorrow's AI founders, and the future of the web itself. Original content is what makes the Internet one of the greatest inventions in the last century, and it's essential that creators continue making it. AI crawlers have been scraping content without limits. Our goal is to put the power back in the hands of creators, while still helping AI companies innovate. This is about safeguarding the future of a free and vibrant Internet with a new model that works for everyone."
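The shift from opt-out to opt-in can be pictured as a default-deny check at the edge. The sketch below is purely illustrative and is not Cloudflare's implementation; the crawler user-agent names are real AI crawler identifiers, but the allow-list and function are hypothetical site policy invented here.

```python
# Hypothetical sketch of default-deny crawler handling. Not Cloudflare's
# actual logic: the allow-list models a site owner explicitly opting in
# particular AI crawlers, with everything else blocked by default.

AI_CRAWLER_AGENTS = {"GPTBot", "ClaudeBot", "CCBot", "Google-Extended"}

def is_request_allowed(user_agent: str, allowed_crawlers: set[str]) -> bool:
    """Default-deny: a known AI crawler is served only if the site
    owner has explicitly opted it in; ordinary traffic is unaffected."""
    crawler = next((a for a in AI_CRAWLER_AGENTS if a in user_agent), None)
    if crawler is None:
        return True  # not a known AI crawler: serve normally
    return crawler in allowed_crawlers

# Example policy: the site owner has opted in GPTBot only.
allowed = {"GPTBot"}
print(is_request_allowed("Mozilla/5.0 GPTBot/1.0", allowed))        # True
print(is_request_allowed("Mozilla/5.0 ClaudeBot/1.0", allowed))     # False
print(is_request_allowed("Mozilla/5.0 (Windows NT 10.0)", allowed)) # True
```

The key design point is the direction of the default: under the previous opt-out model the last line of the function would read `crawler not in blocked_crawlers`, so an unlisted crawler would be served; under opt-in, an unlisted crawler is refused.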
This revised approach follows previous Cloudflare initiatives to block AI crawlers, which began with a one-click option introduced in September 2024. Since then, more than one million customers have chosen to restrict AI crawlers from their websites. Now, blocking occurs by default for all new customers, eliminating the need for domain owners to adjust settings to prevent unauthorised crawling.

Support from publishers

Prominent media organisations and publishers have expressed support for Cloudflare's move, including ADWEEK, SkyNews, Fortune, The Associated Press, BuzzFeed, The Atlantic, TIME, Reddit, and Pinterest. These companies have advocated for fair compensation frameworks and greater transparency around how content is accessed and used by AI platforms.

Roger Lynch, Chief Executive Officer of Condé Nast, commented: "Cloudflare's innovative approach to block AI crawlers is a game-changer for publishers and sets a new standard for how content is respected online. When AI companies can no longer take anything they want for free, it opens the door to sustainable innovation built on permission and partnership. This is a critical step toward creating a fair value exchange on the Internet that protects creators, supports quality journalism and holds AI companies accountable."

Neil Vogel, Chief Executive Officer of Dotdash Meredith, added: "We have long said that AI platforms must fairly compensate publishers and creators to use our content. We can now limit access to our content to those AI partners willing to engage in fair arrangements. We're proud to support Cloudflare and look forward to using their tools to protect our content and the open web."
Renn Turiano, Chief Consumer and Product Officer at Gannett Media, also noted: "As the largest publisher in the country, comprised of USA TODAY and over 200 local publications throughout the USA TODAY Network, blocking unauthorised scraping and the use of our original content without fair compensation is critically important. As our industry faces these challenges, we are optimistic the Cloudflare technology will help combat the theft of valuable IP."

Bill Ready, Chief Executive Officer of Pinterest, said: "Creators and publishers around the world leverage Pinterest to expand their businesses, reach new audiences and directly measure their success. As AI continues to reshape the digital landscape, we are committed to building a healthy Internet infrastructure where content is used for its intended purpose, so creators and publishers can thrive."

Steve Huffman, Co-founder and Chief Executive Officer of Reddit, stated: "AI companies, search engines, researchers, and anyone else crawling sites have to be who they say they are. And any platform on the web should have a say in who is taking their content for what. The whole ecosystem of creators, platforms, web users and crawlers will be better when crawling is more transparent and controlled, and Cloudflare's efforts are a step in the right direction for everyone."

Vivek Shah, Chief Executive Officer of Ziff Davis, commented: "We applaud Cloudflare for advocating for a sustainable digital ecosystem that benefits all stakeholders — the consumers who rely on credible information, the publishers who invest in its creation, and the advertisers who support its dissemination."

Industry consortia and authentication

Cloudflare is also participating in the development of new technical protocols to allow AI bots to authenticate themselves and for website owners to reliably determine the identity and intent of incoming requests. This aims to improve overall transparency and control over the use of web content by automated agents.
Additional media and technology companies have added their support, indicating a broad industry move towards permission-based AI access to digital content. The list includes companies such as The Arena Group, Atlas Obscura, Quora, Stack Overflow, Universal Music Group, O'Reilly Media, and others. This change comes as publishers report reduced website traffic and declining advertising revenues linked to AI platforms generating answers directly to user queries without referencing or referring traffic to the original sources. Cloudflare's new default blocking of AI crawlers aims to restore a value exchange between content creators, consumers, and technology companies as artificial intelligence continues to shape the internet landscape.
