
F5 2025 State Of Application Strategy Report Reveals Talk Becomes Action As AI Gets To Work
F5 Report Highlights AI-Driven Transformation Amid Operational Complexity
96 per cent of surveyed IT decision-makers have deployed AI models, up from a quarter in 2023
SYDNEY, AUSTRALIA, May 8, 2025 – IT leaders are increasingly trusting AI with business-critical tasks, from traffic management to cost optimisation, according to the industry's most comprehensive report on application strategy.
F5's 2025 State of Application Strategy Report, which surveys global IT decision makers, found that 96 per cent of organisations are now deploying AI models, up from a quarter in 2023.
There is also a growing willingness to elevate AI to the heart of business operations. Almost three-quarters of respondents (72 per cent) said they want to use AI to optimise app performance, while 59 per cent support using AI both to optimise costs and to inject security rules that automatically mitigate zero-day vulnerabilities.
Today, half of organisations are using AI gateways to connect applications to AI tools, and another 40 per cent expect to be doing so in the next 12 months. Most are using this technology to protect and manage AI models (62 per cent), provide a central point of control (55 per cent), and to protect their company from sensitive data leaks (55 per cent).
'This year's SOAS Report shows that IT decision makers are becoming confident about embedding AI into ops,' said Lori MacVittie, F5 Distinguished Engineer. 'We are fast moving to a point where AI will be trusted to operate autonomously at the heart of an organisation, generating and deploying code that helps to cut costs, boost efficiency, and mitigate security problems. That is what we mean when we talk about AIOps, and it is now becoming a reality.'
Operational Readiness and API Challenges Remain
Despite growing AI confidence, the SOAS Report highlights several enduring challenges. For organisations currently deploying AI models, the number one concern is AI model security.
And, while AI tools are more autonomous than ever, operational readiness gaps remain: 60 per cent of organisations feel bogged down by manual workflows, and 54 per cent cite skill shortages as a barrier to AI development.
Furthermore, almost half (48 per cent) identified the cost of building and operating AI workloads as a problem, up from 42 per cent last year.
A greater proportion of organisations also said that they have not established a scalable data practice (39 per cent vs. 33 per cent in 2024) and that they do not trust AI outputs due to potential bias or hallucinations (34 per cent vs. 27 per cent). However, fewer complained about the quality of their data (48 per cent, down from 56 per cent last year).
APIs were another concern. 58 per cent reported they have become a pain point, and some organisations spend as much as half of their time managing complex configurations involving numerous APIs and languages. Working with vendor APIs (31 per cent), custom scripting (29 per cent), and integrating with ticketing and management systems (23 per cent) were flagged as the most time-consuming automation-related tasks.
'Organisations need to focus on the simplification and standardisation of operations, including streamlining APIs, technologies, and tasks,' said MacVittie. 'They should also recognise that AI systems are themselves well-suited to handle complexity autonomously by generating and deploying policies or solving workflow issues. Operational simplicity is not just something on which AI is going to rely, but which it will itself help to deliver.'
Hybrid App Deployments Prevail
Allied to soaring AI appetites is a greater reliance on hybrid cloud architectures.
According to the SOAS Report, 94 per cent of organisations are deploying applications across multiple environments – including public clouds, private clouds, on-premises data centres, edge computing, and colocation facilities – to meet varied scalability, cost, and compliance requirements.
Consequently, most decision makers see hybrid environments as critical to their operational flexibility. 91 per cent cited adaptability to fluctuating business needs as the top benefit of adopting multiple clouds, followed by improved app resiliency (68 per cent) and cost efficiencies (59 per cent).
A hybrid approach is also reflected in deployment strategies for AI workloads, with 51 per cent planning to use models across both cloud and on-premises environments for the foreseeable future.
Significantly, 79 per cent of organisations recently repatriated at least one application from the public cloud back to an on-premises or colocation environment, citing cost control, security concerns, and predictability. This marks a dramatic rise from 13 per cent just four years ago, further underscoring the importance of preserving flexibility beyond public cloud reliance.
Still, the hybrid model can prove a headache for some. Inconsistent delivery policies (reported by 53 per cent of respondents) and fragmented security strategies (47 per cent) are both top of mind in this respect.
'While spreading applications across different environments and cloud providers can bring challenges, the benefits of being cloud-agnostic are too great to ignore. It has never been clearer that the hybrid approach to app deployment is here to stay,' said Cindy Borovick, Director of Market and Competitive Intelligence, F5.
Asia Pacific, China, and Japan (APCJ) AI Adoption and Challenges – Key Highlights:
AI Gateways on the Rise: Nearly half of APCJ organisations (49 per cent) are already using AI gateways to connect applications to AI tools, with another 46 per cent planning to do so in the next 12 months.
Top Use Cases for AI Gateways: Among those leveraging AI gateways, the most common applications include protecting and managing AI models (66 per cent), preventing sensitive data leaks (61 per cent), and observing AI traffic and application demand (61 per cent).
Data and Trust Challenges: Over half (53 per cent) struggle with immature data quality, and 45 per cent are deterred by the high costs of building and running AI workloads.
Hybrid Complexity: The hybrid model of AI deployment introduces hurdles, with 79 per cent citing inconsistent security policies, 59 per cent highlighting delivery inconsistencies, and 16 per cent dealing with operational difficulties.
Toward a Programmable, AI-Driven Future
Looking ahead, the SOAS Report suggests that organisations aiming to unlock AI's full potential should focus on creating programmable IT environments that standardise and automate app delivery and security policies.
By 2026, AI is expected to move from isolated tasks to orchestrating end-to-end processes, marking a shift toward complete automation within IT operations environments. Platforms equipped with natural language interfaces and programmable capabilities will increasingly eliminate the need for traditional management consoles, streamlining IT workflows with unprecedented precision.
'Flexibility and automation are no longer optional—they are critical for navigating complexity and driving transformation at scale,' Borovick emphasised. 'Organisations that establish programmable foundations will not only enhance AI's potential but create IT strategies capable of scaling, adapting, and delivering exceptional customer experiences in the modern age.'
