
Nutanix Study Finds Public Sector Embraces Generative AI, But Faces Security, Skills, And Infrastructure Gaps
Sydney, NSW – June 18, 2025 – Nutanix (NASDAQ: NTNX), a leader in hybrid multicloud computing, announced the findings of its seventh annual global Public Sector Enterprise Cloud Index (ECI) survey and research report, which measures enterprise progress with cloud adoption in the industry. The research showed that 83 per cent of public sector organisations have a GenAI strategy in place, with 54 per cent actively implementing and 29 per cent preparing for implementation.
As public sector organisations ramp up GenAI adoption, 76 per cent of IT decision-makers say their current infrastructure needs moderate to significant improvement to support modern, cloud native applications at scale. This year's public sector ECI found that infrastructure modernisation emerged as a top priority, underscoring the growing demand for systems capable of meeting GenAI's requirements for enterprise-ready data security, data integrity, and resilience.
This year's report also revealed that public sector leaders are increasingly integrating GenAI applications/workloads into their organisations. Real-world GenAI use cases across the public sector gravitate towards constituent/employee support and experience solutions (e.g., chatbots) and content generation. However, concerns remain, with 92 per cent of public sector leaders highlighting the need for their organisations to do more to secure GenAI models and applications. As a result, 96 per cent of respondents say security and privacy have become higher priorities for their organisations.
'Generative AI is no longer a future concept, it's already transforming how we work,' said Greg O'Connell, VP, Federal Sales, Public Sector at Nutanix. '94 per cent of public sector organisations are already putting AI to work and expect returns in as little as one year. As public sector leaders look to see outcomes, now is the time to invest in AI-ready infrastructure, data security, privacy, and training to ensure long-term success.'
Public sector survey respondents were asked about GenAI adoption and trends, Kubernetes and containers, how they're running business- and mission-critical applications today, and where they plan to run them in the future. Key findings from this year's report include:
GenAI solution adoption and deployment in the public sector will necessitate a more comprehensive approach to data security. Public sector respondents indicate a significant amount of work needs to be done to improve the foundational levels of data security/governance required to support GenAI solution implementation and success. 92 per cent of public sector respondents agree that their organisation could be doing more to secure its GenAI models and applications. Luckily, many IT decision-makers in the public sector are aware of this impending sea change: 96 per cent of respondents agree that GenAI is changing their organisation's priorities, with security and privacy becoming higher priorities.
Prioritise infrastructure modernisation to support GenAI at scale across public sector organisations. Running modern applications at enterprise scale requires infrastructure solutions that can support the necessary requirements for complex data security, data integrity, and resilience. Unfortunately, 76 per cent of respondents in the public sector believe their current IT infrastructure requires at least moderate improvement to fully support cloud native apps/containers. Furthermore, IT infrastructure investment was ranked as a top area of improvement among public sector respondents, a sign that IT decision-makers are aware of the need to improve.
GenAI solution adoption in the public sector continues at a rapid pace, but there are still challenges to overcome. When it comes to GenAI adoption, public sector metrics show progress, with 94 per cent of respondents saying their organisation is leveraging GenAI applications/workloads today. Most public sector organisations believe GenAI solutions will help improve levels of productivity, automation, and efficiency. However, organisations in the public sector also note a range of challenges and potential hindrances regarding GenAI solution development and deployment, including data security and privacy, and the need for continued upskilling and hiring to support new GenAI projects/solutions.
Application containerisation and Kubernetes deployment are expanding across the public sector. Application containerisation is increasingly pervasive across industry sectors and is set to expand in adoption across the public sector as well, with 96 per cent of segment respondents saying their organisation is at least in the process of containerising applications. This trend may be driven by the fact that 91 per cent of respondents in the public sector agree their organisation benefits from adopting cloud native applications/containers.
For the seventh consecutive year, Nutanix commissioned a global research study to learn about the state of global enterprise cloud deployments, application containerisation trends, and GenAI application adoption. In the Fall of 2024, U.K.-based research firm Vanson Bourne surveyed 1,500 IT and DevOps/Platform Engineering decision-makers around the world. The respondent base spanned multiple industries, business sizes, and geographies, including North and South America; Europe, the Middle East and Africa (EMEA); and the Asia-Pacific-Japan (APJ) region.
To learn more about the report and findings, download the full Public Sector Nutanix Enterprise Cloud Index and read more on the Nutanix blog.
About Nutanix
Nutanix is a global leader in cloud software, offering organisations a single platform for running applications and managing data, anywhere. With Nutanix, companies can reduce complexity and simplify operations, freeing them to focus on their business outcomes. Building on its legacy as the pioneer of hyperconverged infrastructure, Nutanix is trusted by companies worldwide to power hybrid multicloud environments consistently, simply, and cost-effectively.

Related Articles


Techday NZ
10 hours ago
C-suite divisions slow GenAI adoption due to security worries
A new report from NTT DATA has highlighted a misalignment among senior executives regarding the adoption and security implications of generative artificial intelligence (GenAI) in organisations globally. NTT DATA's report, "The AI Security Balancing Act: From Risk to Innovation," is based on survey responses from more than 2,300 senior GenAI decision makers, including over 1,500 C-level executives across 34 countries. The findings underscore a gap between the optimism of CEOs and the caution of Chief Information Security Officers (CISOs) concerning GenAI deployment.

C-Suite perspectives

The report indicates that 99% of C-Suite executives are planning to increase their GenAI investments over the next two years, with 67% of CEOs preparing for significant financial commitments. In comparison, 95% of Chief Information Officers (CIOs) and Chief Technology Officers (CTOs) report that GenAI is already influencing, or will soon drive, greater spending on cybersecurity initiatives. Improved security was named among the top three benefits realised from GenAI adoption in the past year.

Despite these high expectations, a considerable number of CISOs express reservations. Nearly half (45%) of CISOs surveyed shared negative sentiments about GenAI rollouts, identifying security gaps and the challenge of modernising legacy infrastructure as primary barriers.

The report also finds differences in the perception of policy clarity. More than half of CISOs (54%) stated that internal GenAI policies are unclear, compared with just 20% of CEOs. This suggests a disconnect between business leaders' strategic vision and concerns raised by operational security managers.

"As organisations accelerate GenAI adoption, cybersecurity must be embedded from the outset to reinforce resilience. While CEOs champion innovation, ensuring seamless collaboration between cybersecurity and business strategy is critical to mitigating emerging risks," said Sheetal Mehta, Senior Vice President and Global Head of Cybersecurity at NTT DATA, Inc. "A secure and scalable approach to GenAI requires proactive alignment, modern infrastructure and trusted co-innovation to protect enterprises from emerging threats while unlocking AI's full potential."

Operational and skills challenges

The study highlights that, while 97% of CISOs consider themselves GenAI decision makers, 69% acknowledge their teams currently lack the necessary skills to work effectively with GenAI technologies. Only 38% of CISOs said their organisation's GenAI and cyber security strategies are aligned, compared with 51% of CEOs.

Another area of concern identified is the absence of clearly defined policies for GenAI use within organisations. According to the survey, 72% of respondents had yet to implement a formal GenAI usage policy, and just 24% of CISOs strongly agreed their company has an adequate framework for balancing the risks and rewards of GenAI adoption.

Infrastructure and technology barriers

Legacy technology also poses a significant challenge to GenAI integration. The research found that 88% of security leaders believe outdated infrastructure is negatively affecting both business agility and GenAI readiness. Upgrading systems such as Internet of Things (IoT), 5G, and edge computing was identified as crucial for future progress. To address these obstacles, 64% of CISOs reported prioritising collaboration with strategic IT partners and co-innovation, rather than relying on proprietary AI solutions.

When choosing GenAI technology partners, security leaders ranked end-to-end service integration as their most important selection criterion.

"Collaboration is highly valued by line-of-business leaders in their relationships with CISOs. However, disconnects remain, with gaps between the organisation's desired risk posture and its current cybersecurity capabilities," said Craig Robinson, Research Vice President, Security Services at IDC. "While the use of GenAI clearly provides benefits to the enterprise, CISOs and Global Risk and Compliance leaders struggle to communicate the need for proper governance and guardrails, making alignment with business leaders essential for implementation."

Survey methodology

The report's data derives from a global survey of 2,300 senior GenAI decision makers. Of these respondents, 68% were C-suite executives, with the remainder comprising vice presidents, heads of department, directors, and senior managers. The research, conducted by Jigsaw Research, aimed to capture perspectives on both the opportunities and risks associated with GenAI across different regions and sectors.

The report points to the need for structured governance, clarity in strategic direction, and investment in modern infrastructure to ensure successful and secure GenAI deployments in organisations.


Techday NZ
11 hours ago
Cloud Security Alliance launches pledge for responsible AI use
The Cloud Security Alliance has introduced the AI Trustworthy Pledge, aiming to promote responsible and transparent development of artificial intelligence. The initiative is designed to address ongoing concerns regarding AI governance, including issues such as AI-generated misinformation, privacy risks, and ethical challenges that have come to the forefront as artificial intelligence is increasingly embedded in commercial and governmental decision-making.

The Cloud Security Alliance (CSA), an organisation known for defining standards, certifications, and best practices for cloud security, stated that the AI Trustworthy Pledge serves as a public commitment to advance the responsible development and management of AI technologies. The Pledge forms part of the organisation's broader efforts under its AI Safety Initiative.

This move follows recognition that previous approaches, where products are built before comprehensive risk and security considerations, are insufficient for the complexities posed by AI systems. The CSA emphasised the necessity for proactive frameworks that prioritise trust and accountability from the outset.

The AI Trustworthy Pledge outlines four foundational principles for organisations engaged in AI-related activities. Participating organisations commit to safety and compliance, transparency, ethical accountability, and privacy protection across the lifecycle of AI design, deployment, and management.

According to the CSA, the initiative begins with voluntary adoption by industry and is intended to pave the way for more formal standards and certification processes, including the forthcoming STAR for AI initiative. This later phase will establish detailed cybersecurity and trustworthiness requirements for generative AI services.

"The decisions we make today around AI governance, ethics, and security will shape not only the future of our organizations and our industry, but of society at large. The AI Trustworthy Pledge provides a tangible opportunity to lead in this space, not just by managing risk, but by actively driving responsible innovation and helping to establish the industry standards of tomorrow," said Jim Reavis, CEO and co-founder, Cloud Security Alliance.

Organisations that sign the pledge are required to ensure their AI systems adhere to several guidelines. These include prioritising user safety and compliance with applicable regulations, maintaining transparency about AI systems in use, ensuring ethical development that allows for explainable outcomes, and upholding rigorous privacy protections for personal data.

Initial signatories include Airia, Endor Labs, Deloitte Consulting Srl S.B., Okta, Reco, Redblock, Securiti AI, Whistic, and Zscaler, alongside others that have signalled their commitment to responsible AI practices through participation in the pledge. These organisations will be provided with a digital badge to signal their adherence to the outlined commitments.

Principles outlined

The CSA's AI Trustworthy Pledge is centred on four key principles. Firstly, safety and compliance require that organisations implement AI solutions that place user safety at the forefront and adhere to regulatory requirements. Secondly, transparency expects organisations to be open about the AI systems they employ in order to foster greater trust. Thirdly, ethical accountability is intended to ensure fairness and the ability to explain how AI-derived outcomes are determined. Lastly, privacy protection requires organisations to maintain strong safeguards over personal data processed by AI systems.

By focusing on voluntary, public commitments, the CSA intends to encourage industry-wide adoption of responsible standards before introducing binding certification frameworks. This approach allows for alignment and shared understanding across different sectors and organisations as AI usage expands.

Following the pledge's introduction, the CSA plans to launch the STAR for AI initiative. This will create detailed standards for cybersecurity and trust in generative AI, building on the early foundations laid by the Trustworthy Pledge.

The announcement comes as organisations worldwide continue to debate appropriate regulatory, security, and ethical measures as AI technologies evolve. By establishing the Pledge, CSA aims to encourage dialogue and collective action among stakeholders on the responsible use of artificial intelligence.


Techday NZ
11 hours ago
Cloud Security Alliance launches Valid-AI-ted tool for STAR checks
The Cloud Security Alliance has launched Valid-AI-ted, an AI-powered tool providing automated quality checks of STAR Level 1 self-assessments for cloud service providers. Valid-AI-ted integrates large language model (LLM) technology to offer an automated assessment of assurance information in the STAR Registry, aiming to improve transparency and trust in cloud security declarations.

Jim Reavis, Chief Executive Officer and Co-Founder, Cloud Security Alliance, said, "With agile, vendor-neutral programs and a global network of industry experts, CSA is uniquely positioned to develop authoritative AI tools that address the real-world challenges of cloud service providers. Our focus on security-conscious innovation led to the creation of Valid-AI-ted and will continue to see us deliver forward-looking initiatives that will push the boundaries of secure, AI-driven technology."

CSA members can use Valid-AI-ted without charge and submit assessments as frequently as needed. Non-member providers are limited to ten resubmissions and can remediate their entries based on feedback provided by the tool. If assessments meet the required standard, providers receive a STAR Level 1 Valid-AI-ted badge for display on the STAR Registry as well as on their own platforms.

Assessment process

Valid-AI-ted uses AI-driven evaluation to systematically grade responses to the STAR Level 1 questionnaire, producing detailed reports with scores for each question and domain. Reports are delivered privately to the submitter and contain granular feedback that identifies strengths and areas for improvement.

The automation, according to CSA, is unique in the cloud security assurance landscape, as it offers objective, rapid, and scalable validation of self-assessment submissions. The process utilises a standardised scoring model informed by the Cloud Controls Matrix (CCM), which underpins CSA's approach to cloud security best practices.

A key feature of Valid-AI-ted is the opportunity for continuous improvement. The ability for organisations to revise and resubmit assessments is highlighted as beneficial for those seeking STAR certification or looking to enhance their transparency with customers and regulators.

Comparative advantages

CSA highlights several advantages of Valid-AI-ted when compared to traditional STAR Level 1 evaluations. The tool is intended to improve assurance by reducing variability in the quality of responses; traditionally, customers must interpret self-assessment answers themselves. With Valid-AI-ted, users receive qualitative analysis and actionable feedback aligned with established CCM guidance. This approach is positioned to support organisations in maturing their processes and can serve as a stepping stone towards the more rigorous STAR Level 2 third-party assessments.

The STAR Level 1 Valid-AI-ted badge, awarded to successful assessment submissions, is intended to offer heightened recognition for providers. CSA says this distinction can help providers stand out to customers, partners, and regulators by demonstrating a commitment to more than basic compliance requirements.

STAR Registry context

The STAR Registry is an online resource that publicly lists the security and privacy controls of cloud providers. It enables organisations to demonstrate compliance with various regulations and standards while supporting transparency and reducing the need for multiple customer questionnaires. The registry is based on principles detailed in the Cloud Controls Matrix, including transparency, auditing, and harmonisation of standards. The Valid-AI-ted tool and STAR Level 1 evaluations are part of a suite of assessments that build on these principles, aiming to support both providers and customers in understanding cloud security postures.

Licensing and integration

Solution providers interested in incorporating Valid-AI-ted grading into governance, risk, and compliance (GRC) solutions can obtain access to the relevant scoring rubric and prompts by securing a CCM licence from CSA.

While Valid-AI-ted is available to CSA members at no charge, non-members can access the service for $595. Discounts are also available for participants attending CSA's Cloud Trust Summit, who will be provided with a code for a $200 reduction in fees through the end of June.

With the launch of Valid-AI-ted, CSA seeks to provide automated, standardised, and actionable assurance assessment, utilising AI to address the evolving demands of cloud security and compliance.