
C-suite divisions slow GenAI adoption due to security worries
NTT DATA's report, "The AI Security Balancing Act: From Risk to Innovation," is based on survey responses from more than 2,300 senior GenAI decision makers, including over 1,500 C-level executives across 34 countries. The findings underscore a gap between the optimism of CEOs and the caution of Chief Information Security Officers (CISOs) concerning GenAI deployment.
C-Suite perspectives
The report indicates that 99% of C-Suite executives are planning to increase their GenAI investments over the next two years, with 67% of CEOs preparing for significant financial commitments. In comparison, 95% of Chief Information Officers (CIOs) and Chief Technology Officers (CTOs) report that GenAI is already influencing, or will soon drive, greater spending on cybersecurity initiatives. Improved security was named among the top three benefits realised from GenAI adoption in the past year.
Despite these high expectations, a considerable number of CISOs express reservations. Nearly half (45%) of CISOs surveyed shared negative sentiments about GenAI rollouts, identifying security gaps and the challenge of modernising legacy infrastructure as primary barriers.
The report also finds differences in the perception of policy clarity. More than half of CISOs (54%) stated that internal GenAI policies are unclear, compared with just 20% of CEOs. This suggests a disconnect between business leaders' strategic vision and the concerns raised by operational security managers.
"As organisations accelerate GenAI adoption, cybersecurity must be embedded from the outset to reinforce resilience. While CEOs champion innovation, ensuring seamless collaboration between cybersecurity and business strategy is critical to mitigating emerging risks," said Sheetal Mehta, Senior Vice President and Global Head of Cybersecurity at NTT DATA, Inc. "A secure and scalable approach to GenAI requires proactive alignment, modern infrastructure and trusted co-innovation to protect enterprises from emerging threats while unlocking AI's full potential."
Operational and skills challenges
The study highlights that, while 97% of CISOs consider themselves GenAI decision makers, 69% acknowledge their teams currently lack the skills needed to work effectively with GenAI technologies. Only 38% of CISOs said their organisation's GenAI and cybersecurity strategies are aligned, compared with 51% of CEOs.
Another area of concern identified is the absence of clearly defined policies for GenAI use within organisations. According to the survey, 72% of respondents had yet to implement a formal GenAI usage policy, and just 24% of CISOs strongly agreed their company has an adequate framework for balancing the risks and rewards of GenAI adoption.
Infrastructure and technology barriers
Legacy technology also poses a significant challenge to GenAI integration. The research found that 88% of security leaders believe outdated infrastructure is negatively affecting both business agility and GenAI readiness. Modernising infrastructure to support technologies such as the Internet of Things (IoT), 5G, and edge computing was identified as crucial for future progress.
To address these obstacles, 64% of CISOs reported prioritising collaboration with strategic IT partners and co-innovation, rather than relying on proprietary AI solutions. When choosing GenAI technology partners, security leaders ranked end-to-end service integration as their most important selection criterion.
"Collaboration is highly valued by line-of-business leaders in their relationships with CISOs. However, disconnects remain, with gaps between the organisation's desired risk posture and its current cybersecurity capabilities," said Craig Robinson, Research Vice President, Security Services at IDC. "While the use of GenAI clearly provides benefits to the enterprise, CISOs and Global Risk and Compliance leaders struggle to communicate the need for proper governance and guardrails, making alignment with business leaders essential for implementation."
Survey methodology
The report's data derives from a global survey of more than 2,300 senior GenAI decision makers. Of these respondents, 68% were C-suite executives, with the remainder comprising vice presidents, heads of department, directors, and senior managers. The research, conducted by Jigsaw Research, aimed to capture perspectives on both the opportunities and risks associated with GenAI across different regions and sectors.
The report points to the need for structured governance, clarity in strategic direction, and investment in modern infrastructure to ensure successful and secure GenAI deployments in organisations.

Related Articles


Techday NZ, an hour ago
Hexaware launches Agentic AI Academy to upskill global workforce
Hexaware Technologies has announced the launch of the Agentic AI Academy in partnership with upGrad Enterprise to prepare its employees for the next phase of enterprise artificial intelligence, focused on Agentic AI systems. The new initiative builds on Hexaware's extensive generative AI skills programme, which has already introduced 95 percent of its global workforce to foundational and advanced GenAI concepts. With the field of AI developing rapidly, the Agentic AI Academy aims to ensure Hexaware personnel are well placed for the impact of autonomous, intelligent agent technologies.
Agentic focus
Agentic systems are autonomous software agents capable of planning, adapting, and acting in real time. Such technologies are shaping how organisations design and deliver work, driving change in automation and orchestration while demanding skills that go beyond traditional development and IT delivery roles. The Academy is set to provide structured, role-based learning pathways and practical applications, and forms part of a strategy to broaden Agentic capabilities across Hexaware's delivery and engineering teams in the coming months. As AI continues to alter enterprise functions, companies are increasingly required to equip staff for future demands in a competitive digital landscape.
Three learning tracks
The Agentic AI Academy curriculum is divided into three tracks: one aimed at developers and engineers building agent architectures, with a focus on orchestration, integration, and safety protocols; one intended for teams adapting and customising agent solutions within client environments; and one for delivery and operations leaders handling agent-human interaction, quality assurance, and live monitoring. Participants in each track follow a unified approach to content and certification, with materials developed and delivered via upGrad Enterprise's learning platform.
This setup brings together content, hands-on labs, and certifications within the flow of employees' daily work.
Program design
The Agentic AI Academy's learning programmes and custom content have been co-designed with upGrad Enterprise. The training leverages upGrad's instructional design expertise and its subject matter expert networks, ensuring alignment with Hexaware's business and customer objectives.
Vinod Chandran, Chief Operating Officer, Hexaware, commented: "Agentic technologies are becoming fundamental to how enterprises operate. Agentic Academy ensures our people are equipped not only to use these systems but to lead engagements built around them."
Satyajith Mundakkal, Chief Technology Officer, Hexaware, also commented on the programme's ambition: "We're moving from automation to intelligent orchestration, where agents collaborate, decide, and deliver outcomes. This shift is redefining roles and creating demand for Agentic expertise. Our goal is to equip talent to lead real-world Agentic assignments with confidence and capability."
Scaling up
The rollout is already underway, with participants from the first batches engaged in live assignments using Agentic systems. Additional learning tracks are also being prepared as the programme scales.
Satyendu Mohanty, EVP & Global Head – Talent Management at Hexaware, explained the importance of the Academy for large-scale transformation: "For such a large-scale delivery transformation, enabling a hybrid workforce of humans and agents to deliver shared outcomes, rapid talent transformation is imperative. Agentic Academy enables a common language, shared fluency, and real capability across roles. That's what turns a technology trend into a sustained workforce advantage."
Learning partnership
Srikanth Iyengar, Chief Executive Officer, upGrad Enterprise, described the joint approach with Hexaware: "As Agentic systems reshape enterprise operations, the real differentiator will be how quickly organizations can translate that shift into customer impact. At upGrad Enterprise, our experience in workforce transformation and learning design allows us to build agile, real-world solutions at scale. This program is a reflection of that approach: practical, outcome-driven, and built for deployment from day one."
Hexaware anticipates that a majority of its delivery and engineering teams will participate in the Agentic AI Academy over the coming months as the company extends its Agentic AI service offerings to clients across multiple industries.


Techday NZ, 4 hours ago
Google Cloud unveils advanced AI security tools & SOC updates
Google Cloud has announced new security solutions and enhanced capabilities focused on securing AI initiatives and supporting defenders amid growing enterprise adoption of artificial intelligence technologies. As AI is introduced across sectors, organisations are increasingly concerned about the risks presented by sophisticated AI agents. Google Cloud has responded by expanding the security measures available within its Security Command Centre, emphasising protection for AI agents and ecosystems using tools such as Sensitive Data Protection and Model Armour.
According to Jon Ramsey, Vice President and General Manager, Google Cloud Security: "AI presents an unprecedented opportunity for organizations to redefine their security posture and reduce the greatest amount of risk for the investment. From proactively finding zero-day vulnerabilities to processing vast amounts of threat intelligence data in seconds to freeing security teams from toilsome work, AI empowers security teams to achieve not seen before levels of defence and efficiency."
Expanded protection for agentic AI
Google Cloud has detailed three new capabilities for securing AI agents in Google Agentspace and Google Agent Builder. The first, expanded AI agent inventory and risk identification, will enable automated discovery of AI agents and Model Context Protocol (MCP) servers. This feature aims to help security teams quickly identify vulnerabilities, misconfigurations, and high-risk interactions across their AI agent estate. The second, advanced in-line protection and posture controls, extends Model Armour's real-time security assurance to Agentspace prompts and responses. This enhancement is designed to provide controls against prompt injection, jailbreaking, and sensitive data leakage during agent interactions. In parallel, the introduction of specialised posture controls will help AI agents adhere to defined security policies and standards.
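To make the idea of in-line prompt and response screening concrete, here is a minimal, generic sketch in Python. It is not Model Armour's API or Google's implementation; the deny-list patterns and function names are illustrative assumptions, and real products use trained classifiers rather than simple regular expressions.

```python
import re

# Toy deny-list patterns for common prompt-injection phrasings and
# sensitive-looking data. Purely illustrative, NOT Model Armour's logic.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all )?previous instructions", re.I),
    re.compile(r"reveal (your )?system prompt", re.I),
]
SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),  # US-SSN-like token (example only)
]

def screen_prompt(prompt: str) -> dict:
    """Screen a prompt before it reaches the agent; block on injection cues."""
    flags = [p.pattern for p in INJECTION_PATTERNS if p.search(prompt)]
    return {"allowed": not flags, "flags": flags}

def screen_response(response: str) -> str:
    """Redact sensitive-looking tokens from an agent's response."""
    for p in SENSITIVE_PATTERNS:
        response = p.sub("[REDACTED]", response)
    return response
```

The point of the sketch is the placement of the checks, on both the inbound prompt and the outbound response, which mirrors the "in-line protection" described above.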
Proactive threat detection rounds out these developments, introducing detections for risky behaviours and external threats to AI agents. These detections, supported by intelligence from Google and Mandiant, assist security teams in responding to anomalous and suspicious activity connected to AI agents.
Agentic security operations centre
Google Cloud is advancing its approach to security operations through an "agentic SOC" vision in Google Security Operations, which leverages AI agents to enhance efficiency and detection capabilities. By automating processes such as data pipeline optimisation, alert triage, investigation, and response, Google Cloud aims to address traditional gaps in detection engineering workflows. "We've introduced our vision of an agentic security operations center (SOC) that includes a system where agents can coordinate their actions to accomplish a shared goal. By offering proactive, agent-supported defense capabilities built on optimizing data pipelines, automating alert triage, investigation, and response, the agentic SOC can streamline detection engineering workflows to address coverage gaps and create new threat-led detections."
The new Alert Investigation agent, currently in preview, is capable of autonomously enriching events, analysing command-line interfaces, and building process trees. It produces recommendations for next steps and aims to reduce the manual effort and response times for security incidents.
Expert guidance and consulting
Google Cloud's Mandiant Consulting arm is extending its AI consulting services in response to demand for robust governance and security frameworks in AI deployments. These services address areas such as risk-based AI governance, pre-deployment environment hardening, and comprehensive threat modelling. Mandiant Consulting experts noted: "As more organizations lean into using generative and agentic AI, we've seen a growing need for AI security consulting. Mandiant Consulting experts often encounter customer concerns for robust governance frameworks, comprehensive threat modeling, and effective detection and response mechanisms for AI applications, underscoring the importance of understanding risk through adversarial testing." Clients working with Mandiant can access pre-deployment security assessments tailored to AI and benefit from continuous updates as threats evolve.
Unified platform enhancements
Google Unified Security, a platform integrating Google's security solutions, now features updates in Google Security Operations and Chrome Enterprise. Within Security Operations, the new SecOps Labs offers early access to AI-powered experiments related to parsing, detection, and response, many of which use Google Gemini technology. Dashboards with native security orchestration, automation, and response (SOAR) data integration are now generally available, reflecting user feedback from previous previews. On the endpoint side, Chrome Enterprise enhancements bring secured browsing to mobile, including Chrome on iOS, with features such as easy account separation and URL filtering. This allows companies to block access to unauthorised AI sites and provides enhanced reporting for investigation and compliance purposes.
Trusted Cloud and compliance
Recent updates in Trusted Cloud focus on compliance and data security. Compliance Manager, now in preview, enables unified policy configuration and extensive auditing within Google Cloud. Data Security Posture Management, also in preview, delivers governance for sensitive data and integrates natively with BigQuery Security Centre. The Security Command Centre's Risk Reports can now summarise unique cloud security risks to inform both security specialists and broader business stakeholders.
Updates in identity management include Agentic IAM, launching later in the year, which will facilitate agent identities across environments to simplify credential management and authorisation for both human and non-human agents. Additionally, the IAM role picker powered by Gemini, currently in preview, assists administrators in granting least-privileged access through natural language queries. Enhanced Sensitive Data Protection now monitors assets in Vertex AI, BigQuery, and Cloud SQL, with improvements in image inspection for sensitive data and additional context model detection.
Network security innovations announced include expanded tag support for Cloud NGFW, Zero Trust networking for RDMA networks in preview, and new controls for Cloud Armour, such as hierarchical security policies and content-based WAF inspection updates.
Commitment to responsible AI security
Jon Ramsey emphasised Google Cloud's aim to make security a business enabler: "The innovations we're sharing today at Google Cloud Security Summit 2025 demonstrate our commitment to making security an enabler of your business ambitions. By automating compliance, simplifying access management, and expanding data protection for your AI workloads, we're helping you enhance your security posture with greater speed and ease. Further, by using AI to empower your defenders and meticulously securing your AI projects from inception to deployment, Google Cloud provides the comprehensive foundation you need to thrive in this new era."


Techday NZ, a day ago
AI bots drive 80% of bot traffic, straining web resources
Fastly has published its Q2 2025 Threat Insights Report, which documents considerable changes in the sources and impact of automated web traffic, highlighting the dominance of AI crawlers and the emergence of notable regional trends.
AI crawler surge
The report, covering activity from mid-April to mid-July 2025, identifies that AI crawlers now constitute almost 80% of all AI bot traffic. Meta is responsible for more than half of this figure, significantly surpassing Google and OpenAI in total AI crawling activity. According to Fastly, Meta bots generate 52% of observed AI crawler interactions, while Google and OpenAI represent 23% and 20% respectively.
Fetcher bots, which access website content in response to user prompts, including those employed by ChatGPT and Perplexity, have driven exceptional real-time request rates. In some instances, fetcher request volumes have exceeded 39,000 requests per minute. This places considerable strain on web infrastructure, increasing bandwidth usage and overwhelming servers, a scenario that mirrors a distributed denial-of-service attack, though without malicious intent.
Geographic concentration
North America receives a disproportionate share of AI crawler traffic, accounting for almost 90% of such interactions, leaving a relatively minor portion for Europe, Asia, and Latin America. This imbalance raises concerns over geographic bias in the datasets used to train large language models, and over whether that bias could shape the neutrality and fairness of AI-generated outputs.
The findings build on Fastly's Q1 2025 observations, which indicated automated bot activity represented 37% of network traffic. While volume was previously the chief concern, Fastly's latest data suggests the current challenge lies in understanding the evolving complexity of bot-driven activity, particularly AI-generated content scraping and high-frequency access patterns.
Industry-wide implications
Fastly's research, compiled from an analysis of 6.5 trillion monthly requests across its security solutions, presents a comprehensive overview of how AI bots are affecting a range of industries, including eCommerce, media and entertainment, financial services, and technology. Commerce, media, and high-tech sectors face the highest incidence of content scraping, which is largely undertaken for training AI models. ChatGPT in particular is cited as driving the most real-time website traffic among fetcher bots, accounting for 98% of related requests.
Fastly also notes that a continuing lack of bot verification standards makes it difficult for security teams to distinguish between legitimate automation and attempts at impersonation. According to the report, this gap creates risks for operational resilience and poses challenges for detecting and managing unverified automation traffic.
Verification and visibility
"AI Bots are reshaping how the internet is accessed and experienced, introducing new complexities for digital platforms," said Arun Kumar, Senior Security Researcher at Fastly. "Whether scraping for training data or delivering real-time responses, these bots create new challenges for visibility, control, and cost. You can't secure what you can't see, and without clear verification standards, AI-driven automation risks are becoming a blind spot for digital teams. Businesses need the tools and insights to manage automated traffic with the same precision and urgency as any other infrastructure or security risk."
The report recommends increased transparency in bot verification, more explicit identification by bot operators, and refined management strategies for handling automated traffic. In the absence of such measures, organisations may encounter rising levels of unaccounted-for automation, difficulties in attributing online activity, and escalating infrastructure expenses.
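The kind of traffic management the report recommends can be sketched in outline. The Python below is an illustrative sketch, not Fastly tooling: it classifies requests by user-agent substrings of well-known AI bots (GPTBot, ChatGPT-User, PerplexityBot, and similar published names) and applies a simple sliding-window rate limit. User-agent headers can be spoofed, which is precisely the verification gap the report highlights, so real deployments would pair this with IP-range or cryptographic bot verification.

```python
from collections import defaultdict, deque

# Substrings of published AI bot user agents; examples, not an exhaustive list.
AI_CRAWLERS = ("GPTBot", "Google-Extended", "Meta-ExternalAgent")
AI_FETCHERS = ("ChatGPT-User", "PerplexityBot")

def classify(user_agent: str) -> str:
    """Label a request as AI crawler, AI fetcher, or other traffic."""
    if any(token in user_agent for token in AI_CRAWLERS):
        return "crawler"
    if any(token in user_agent for token in AI_FETCHERS):
        return "fetcher"
    return "other"

class SlidingWindowLimiter:
    """Allow at most `limit` requests per `window` seconds per client key."""

    def __init__(self, limit: int, window: float = 60.0):
        self.limit, self.window = limit, window
        self.hits = defaultdict(deque)  # key -> timestamps of recent requests

    def allow(self, key: str, now: float) -> bool:
        # Callers pass a monotonic timestamp, e.g. time.monotonic().
        q = self.hits[key]
        while q and now - q[0] > self.window:
            q.popleft()  # discard hits that have aged out of the window
        if len(q) < self.limit:
            q.append(now)
            return True
        return False
```

A server would call `classify()` on each request's User-Agent header and key the limiter by bot category (or client IP), throttling the burst patterns the report describes without blocking the traffic outright.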