
AI adoption boosts productivity across New Zealand businesses
The survey of 200 senior business leaders found that 87 percent of New Zealand businesses now use some form of AI in their operations, up from 66 percent in 2024 and 48 percent in 2023. Among larger organisations with more than 200 employees, AI usage was higher still, at 92 percent.
Productivity gains reported
Productivity improvements were the most commonly cited benefit of AI adoption. According to the study, 89 percent of AI users reported productivity gains. Of these, 20 percent of organisations said they achieved significant gains - defined as time savings or output increases of 25 percent or more - while a further 28 percent saw moderate gains of between 10 and 25 percent, and 35 percent reported minor improvements.
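To make the survey's banding concrete, here is a minimal Python sketch that simply encodes the thresholds described above (the function name and the "none" bucket are illustrative, not part of the survey):

```python
def classify_gain(percent_improvement: float) -> str:
    """Bucket a reported productivity gain using the survey's definitions.

    Thresholds come from the article: "significant" is 25 percent or more
    time saved or increased output, "moderate" is 10-25 percent, and
    anything below that counts as "minor".
    """
    if percent_improvement >= 25:
        return "significant"
    if percent_improvement >= 10:
        return "moderate"
    if percent_improvement > 0:
        return "minor"
    return "none"

# Example: a team saving 12 percent of its time falls in the moderate band.
print(classify_gain(12))  # -> "moderate"
```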
Other reported benefits included enhanced decision-making and insights (42 percent), cost reduction (30 percent), staff enablement and retention (29 percent), and improved customer experience (26 percent).
The most widespread applications of AI in New Zealand organisations are automation of repetitive tasks (68 percent), data analytics and reporting (54 percent), workflow optimisation (51 percent), and customer or employee experience enhancement (32 percent). Sixteen percent of organisations surveyed said they are using AI to transform a core aspect of their operations or services.
"The business case for AI is increasingly clear, and it is encouraging to see New Zealand organisations capitalising on the benefits AI offers," says Datacom New Zealand MD Justin Gray.
Gray noted a shift in focus among organisations towards long-term preparedness, stating, "We're also seeing organisations starting to think in a more long-term way about AI, so they are having conversations with our team about data readiness, whether they have the right cloud environment to manage the increasing data demands, and about the interfaces between their existing applications and AI."
Within Datacom itself, Gray reported that more than 90 internal AI productivity tools have been integrated. The company has also restructured its digital engineering services to deploy a hybrid workforce of AI agents and human software engineers. This hybrid approach, he said, has enabled legacy systems to be rebuilt on shorter timelines and has delivered cost savings of 30 to 50 percent for customers.
Challenges in scaling AI
Despite rising adoption rates, the research highlights challenges in scaling AI implementations beyond pilots and departmental use. While a third of respondents have deployed AI at the departmental level, only 12 percent have managed to scale it across their entire organisation. Eight percent reported using AI to transform core operations, and nearly half (46 percent) are still in an exploratory phase, using pilot projects to assess AI's potential.
Datacom Director of AI Lou Compagnone commented on the pace of change, stating, "We have seen significant progress in the past year, with some organisations moving from experimenting with genAI to rolling out agentic solutions in the space of 12 months."
He said many organisations face challenges moving from pilots to large-scale deployment. "There is a difference between being able to pilot AI and scale it successfully across your organisation. Creating a proof of concept with today's consumer AI tools is relatively straightforward, but productionising these solutions reveals critical challenges around data readiness, system integration, security and long-term maintainability."
Compagnone suggested effective scaling should focus on developing operational capability for AI: "That might look like setting up an 'AI Centre of Enablement' or an AI council that has cross-functional representation across the organisation, so they have visibility and coordination over their AI initiatives."
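As one illustration of the visibility Compagnone describes, the sketch below shows a minimal internal register of AI initiatives that a cross-functional council could maintain; the fields, stages, and example entries are hypothetical, not drawn from the research:

```python
from dataclasses import dataclass

@dataclass
class AIInitiative:
    """One entry in a hypothetical cross-functional AI register."""
    name: str
    owner: str            # accountable team or person
    stage: str            # e.g. "pilot", "departmental", "organisation-wide"
    risk_reviewed: bool = False

registry = [
    AIInitiative("Invoice triage bot", "Finance", "pilot"),
    AIInitiative("Support copilot", "Customer Care", "departmental", True),
]

# A council with this shared view can answer basic coordination questions,
# for example: which initiatives have not yet had a risk review?
unreviewed = [i.name for i in registry if not i.risk_reviewed]
print(unreviewed)  # -> ['Invoice triage bot']
```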
He added, "Success in AI implementation requires having a clear vision for the role AI will play in achieving business objectives, backed by a comprehensive AI strategy with clearly defined initiatives. This strategy should address key pillars such as optimisation of business functions, AI technical foundations, data governance, and talent development."
"Organisations that move beyond experimental projects to establish these strategic frameworks are the ones that will truly transform their operations with AI. Rather than isolated use cases, they create an ecosystem where AI solutions can be developed, deployed and managed at scale, with appropriate governance and measurable business outcomes."
Barriers and concerns
Barriers to broader AI adoption include lack of internal capability or skills (32 percent), issues with data quality or integration (22 percent), and uncertainty over governance or regulation (16 percent). Respondents also cited staff resistance and a lack of internal buy-in as obstacles.
Despite the increasing use of AI, skills training appears limited: 46 percent of organisations provided AI training in the past six months and a further 10 percent in the past year, while another 28 percent are planning to provide it. Fifty-five percent of organisations indicated they want best-practice frameworks from the industry, and 40 percent are seeking external training support. Internally, 55 percent have an AI policy, but only 29 percent have formal ethics or safety guidelines in place.
Risks around AI are a notable concern, with 52 percent of leaders identifying "shadow AI" - the use of unapproved tools - as a problem. Other concerns included uncertainty about the implications of AI (80 percent) and loss of control (57 percent).
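For illustration, one simple way a security team might surface shadow AI is to scan outbound proxy logs for known AI-tool domains that are not on an approved list. The sketch below assumes a deliberately simplified log format, and the domain lists are hypothetical examples rather than policy recommendations:

```python
from collections import Counter

# Hypothetical allowlist and watchlist; a real deployment would source these
# from policy and from a maintained catalogue of AI-tool domains.
APPROVED_AI_DOMAINS = {"copilot.example.com"}
KNOWN_AI_DOMAINS = {
    "chat.openai.com",
    "claude.ai",
    "gemini.google.com",
    "copilot.example.com",
}

def shadow_ai_report(log_lines):
    """Count requests to known AI domains that are not on the approved list.

    Each log line is assumed to be "user domain" - a deliberately simplified
    stand-in for a real proxy log format.
    """
    hits = Counter()
    for line in log_lines:
        user, domain = line.split()
        if domain in KNOWN_AI_DOMAINS and domain not in APPROVED_AI_DOMAINS:
            hits[(user, domain)] += 1
    return hits

logs = ["alice chat.openai.com", "bob copilot.example.com", "alice claude.ai"]
for (user, domain), count in shadow_ai_report(logs).items():
    print(f"{user} -> {domain}: {count} request(s)")
```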
