
Palo Alto Networks to acquire Protect AI to boost AI security
The acquisition agreement enables Palo Alto Networks to enhance the security of the entire AI lifecycle for its customers, from development through to runtime. The company stated that large enterprises and government organisations are increasingly developing complex ecosystems that use AI models, tools, APIs, and third-party components, which introduce new classes of risk not typically covered by traditional security tools.
Threat actors have begun exploiting these new vulnerabilities, including methods such as model manipulation, data poisoning, and prompt injection attacks. These developments have underscored the need for solutions designed specifically to mitigate risks within AI systems. Palo Alto Networks has previously invested in capabilities for securing AI, and the integration of Protect AI aims to further expand these capabilities across both existing and emerging threat landscapes.
Protect AI's solutions and team are expected to help accelerate Palo Alto Networks' plans for Prisma AIRSTM, a security platform for AI models also announced as part of the transaction. The Prisma AIRS platform is designed to offer enterprises and other organisations protection across the whole AI development process, including model scanning, risk assessments, runtime security for generative AI, posture management, and AI agent security.
Anand Oswal, Senior Vice President and General Manager at Palo Alto Networks, said, "As AI-powered applications become core to businesses, they bring risks traditional security tools can't adequately handle. By extending our AI security capabilities to include Protect AI's innovative solutions for Securing for AI, businesses will be able to build AI applications with comprehensive security. With the addition of Protect AI's existing portfolio of solutions and team of experts, Palo Alto Networks will be well-positioned to offer a wide range of solutions for customers' current needs, and also be able to continue innovating on delivering new solutions that are needed for this dynamic threat landscape."
Ian Swanson, Co-Founder and Chief Executive Officer of Protect AI, commented, "Joining forces with Palo Alto Networks will enable us to scale our mission of making the AI landscape more secure for users and organizations of all sizes. We are excited for the opportunity to unite with a company that shares our vision and brings the operational scale and cybersecurity prowess to amplify our impact globally."
On completion of the transaction, Protect AI's Chief Executive Officer, its founding team and other employees will join Palo Alto Networks. The acquisition is expected to close by Palo Alto Networks' first quarter of fiscal 2026, subject to customary closing conditions and regulatory approvals.
Palo Alto Networks indicated that the rapid adoption of AI across sectors, and the evolving threat vectors targeting these deployments, necessitates substantial investments in secure architecture and tailored risk mitigation. The acquisition of Protect AI is positioned as a move to enable organisations to pursue AI-driven projects with increased security and assurance.
Hashtags

Try Our AI Features
Explore what Daily8 AI can do for you:
Comments
No comments yet...
Related Articles


Techday NZ
2 hours ago
- Techday NZ
Google Cloud unveils advanced AI security tools & SOC updates
Google Cloud has announced new security solutions and enhanced capabilities focused on securing AI initiatives and supporting defenders in the context of growing enterprise adoption of artificial intelligence technologies. With the introduction of AI across various sectors, organisations are increasingly concerned with the risks presented by sophisticated AI agents. Google Cloud has responded by expanding on the security measures available within its Security Command Centre, emphasising protection for AI agents and ecosystems using tools such as Sensitive Data Protection and Model Armour. According to Jon Ramsey, Vice President and General Manager, Google Cloud Security, "AI presents an unprecedented opportunity for organizations to redefine their security posture and reduce the greatest amount of risk for the investment. From proactively finding zero-day vulnerabilities to processing vast amounts of threat intelligence data in seconds to freeing security teams from toilsome work, AI empowers security teams to achieve not seen before levels of defence and efficiency." Expanded protection for agentic AI Google Cloud has detailed three new capabilities for securing AI agents in Google Agentspace and Google Agent Builder. The first, expanded AI agent inventory and risk identification, will enable automated discovery of AI agents and Model Context Protocol (MCP) servers. This feature aims to help security teams quickly identify vulnerabilities, misconfigurations, and high-risk interactions across their AI agent estate. The second, advanced in-line protection and posture controls, extends Model Armour's real-time security assurance to Agentspace prompts and responses. This enhancement is designed to provide controls against prompt injection, jailbreaking, and sensitive data leakage during agent interactions. In parallel, the introduction of specialised posture controls will help AI agents adhere to defined security policies and standards. Proactive threat detection rounds out these developments, introducing detections for risky behaviours and external threats to AI agents. These detections, supported by intelligence from Google and Mandiant, assist security teams in responding to anomalous and suspicious activity connected to AI agents. Agentic security operations centre Google Cloud is advancing its approach to security operations through an 'agentic SOC' vision in Google Security Operations, which leverages AI agents to enhance efficiency and detection capabilities. By automating processes such as data pipeline optimisation, alert triage, investigation, and response, Google Cloud aims to address traditional gaps in detection engineering workflows. "We've introduced our vision of an agentic security operations center (SOC) that includes a system where agents can coordinate their actions to accomplish a shared goal. By offering proactive, agent-supported defense capabilities built on optimizing data pipelines, automating alert triage, investigation, and response, the agentic SOC can streamline detection engineering workflows to address coverage gaps and create new threat-led detections." The new Alert Investigation agent, currently in preview, is capable of autonomously enriching events, analysing command-line interfaces, and building process trees. It produces recommendations for next steps and aims to reduce the manual effort and response times for security incidents. Expert guidance and consulting Google Cloud's Mandiant Consulting arm is extending its AI consulting services in response to demand for robust governance and security frameworks in AI deployments. These services address areas such as risk-based AI governance, pre-deployment environment hardening, and comprehensive threat modelling. Mandiant Consulting experts noted, "As more organizations lean into using generative and agentic AI, we've seen a growing need for AI security consulting. Mandiant Consulting experts often encounter customer concerns for robust governance frameworks, comprehensive threat modeling, and effective detection and response mechanisms for AI applications, underscoring the importance of understanding risk through adversarial testing." Clients working with Mandiant can access pre-deployment security assessments tailored to AI and benefit from continuous updates as threats evolve. Unified platform enhancements Google Unified Security, a platform integrating Google's security solutions, now features updates in Google Security Operations and Chrome Enterprise. Within Security Operations, the new SecOps Labs offers early access to AI-powered experiments related to parsing, detection, and response, many of which use Google Gemini technology. Dashboards with native security orchestration, automation, and response (SOAR) data integration are now generally available, reflecting user feedback from previous previews. On the endpoint side, Chrome Enterprise enhancements bring secured browsing to mobile, including Chrome on iOS, with features such as easy account separation and URL filtering. This allows companies to block access to unauthorised AI sites and provides enhanced reporting for investigation and compliance purposes. Trusted Cloud and compliance Recent updates in Trusted Cloud focus on compliance and data security. Compliance Manager, now in preview, enables unified policy configuration and extensive auditing within Google Cloud. Data Security Posture Management, also in preview, delivers governance for sensitive data and integrates natively with BigQuery Security Centre. The Security Command Centre's Risk Reports can now summarise unique cloud security risks to inform both security specialists and broader business stakeholders. Updates in identity management include Agentic IAM, launching later in the year, which will facilitate agent identities across environments to simplify credential management and authorisation for both human and non-human agents. Additionally, the IAM role picker powered by Gemini, currently in preview, assists administrators in granting least-privileged access through natural language queries. Enhanced Sensitive Data Protection now monitors assets in Vertex AI, BigQuery, and CloudSQL, with improvements in image inspection for sensitive data and additional context model detection. Network security innovations announced include expanded tag support for Cloud NGFW, Zero Trust networking for RDMA networks in preview, and new controls for Cloud Armour, such as hierarchical security policies and content-based WAF inspection updates. Commitment to responsible AI security Jon Ramsey emphasised Google Cloud's aim to make security a business enabler: "The innovations we're sharing today at Google Cloud Security Summit 2025 demonstrate our commitment to making security an enabler of your business ambitions. By automating compliance, simplifying access management, and expanding data protection for your AI workloads, we're helping you enhance your security posture with greater speed and ease. Further, by using AI to empower your defenders and meticulously securing your AI projects from inception to deployment, Google Cloud provides the comprehensive foundation you need to thrive in this new era."


Techday NZ
a day ago
- Techday NZ
AI bots drive 80% of bot traffic, straining web resources
Fastly has published its Q2 2025 Threat Insights Report, which documents considerable changes in the sources and impact of automated web traffic, highlighting the dominance of AI crawlers and the emergence of notable regional trends. AI crawler surge The report, covering activity from mid-April to mid-July 2025, identifies that AI crawlers now constitute almost 80% of all AI bot traffic. Meta is responsible for more than half of this figure, significantly surpassing Google and OpenAI in total AI crawling activity. According to Fastly, Meta bots generate 52% of observed AI crawler interactions, while Google and OpenAI represent 23% and 20% respectively. Fetcher bots, which access website content in response to user prompts - including those employed by ChatGPT and Perplexity - have led to exceptional real-time request rates. In some instances, fetcher request volumes have reached over 39,000 requests per minute. This phenomenon is noted as placing considerable strain on web infrastructure, increasing bandwidth usage, and overwhelming servers, a scenario that mirrors distributed denial-of-service attacks, though not motivated by malicious intent. Geographic concentration North America receives a disproportionate share of AI crawler traffic, accounting for almost 90% of such interactions, leaving a relatively minor portion for Europe, Asia, and Latin America. This imbalance raises concerns over the geographic bias in datasets used to train large language models, and whether this bias could shape the neutrality and fairness of AI-generated outputs in the future. The findings build on Fastly's Q1 2025 observations, which indicated automated bot activity represented 37% of network traffic. While volume was previously the chief concern, Fastly's latest data suggests that the current challenge lies in understanding the evolving complexity of bot-driven activity, particularly regarding AI-generated content scraping and high-frequency access patterns. Industry-wide implications Fastly's research, compiled from an analysis of 6.5 trillion monthly requests across its security solutions, presents a comprehensive overview of how AI bots are affecting a range of industries, including eCommerce, media and entertainment, financial services, and technology. Commerce, media, and high-tech sectors face the highest incidence of content scraping, which is largely undertaken for training AI models. ChatGPT in particular is cited as driving the most real-time website traffic among fetcher bots, accounting for 98% of related requests. Fastly also notes that a continuing lack of bot verification standards makes it difficult for security teams to distinguish between legitimate automation and attempts at impersonation. According to the report, this gap creates risks for operational resilience and poses challenges for detecting and managing unverified automation traffic. Verification and visibility "AI Bots are reshaping how the internet is accessed and experienced, introducing new complexities for digital platforms," said Arun Kumar, Senior Security Researcher at Fastly. "Whether scraping for training data or delivering real-time responses, these bots create new challenges for visibility, control, and cost. You can't secure what you can't see, and without clear verification standards, AI-driven automation risks are becoming a blind spot for digital teams. Businesses need the tools and insights to manage automated traffic with the same precision and urgency as any other infrastructure or security risk." The report recommends increased transparency in bot verification, more explicit identification by bot operators, and refined management strategies for handling automated traffic. In the absence of such measures, organisations may encounter rising levels of unaccounted-for automation, difficulties in attributing online activity, and escalating infrastructure expenses.


Techday NZ
2 days ago
- Techday NZ
LambdaTest debuts AI tool platform for rapid validation
LambdaTest has announced the private beta launch of its Agent-to-Agent Testing platform, developed to validate and assess AI agents. The platform is targeting enterprises that increasingly deploy AI agents to support customer experiences and operations, as organisations seek reliable automated tools designed to handle the complex nature of AI-powered systems. Need for new testing approaches AI agents interact dynamically with both users and systems, resulting in unpredictability that challenges traditional software testing methods. Ensuring reliability and performance in these contexts has proven difficult, particularly as conventional testing tools fall short when the behaviour of AI systems cannot be easily anticipated in advance. LambdaTest's Agent-to-Agent Testing aims to address these challenges by using a multi-agent system that leverages large language models for rigorous evaluation. The platform is designed to facilitate the validation of areas such as conversation flows, intent recognition, tone consistency and complex reasoning in AI agents. Multi-modal analysis and broader coverage Teams using the platform can upload requirement documents in various formats, including text, images, audio, and video. The system performs multi-modal analysis to automatically generate test scenarios, aiming to simulate real-world circumstances that could pose challenges for the AI agent under test. Each generated scenario includes validation criteria and expected responses. These are evaluated within HyperExecute, LambdaTest's test orchestration cloud, which reportedly delivers up to 70% faster test execution when compared to standard automation grids. The platform also tracks metrics such as bias, completeness, and hallucinations, enabling teams to assess the overall quality of AI agent performance. Integration of agentic AI and GenAI Agent-to-Agent Testing incorporates both agentic AI and generative AI technologies to generate real-world scenarios, such as verification of personality tone in agents and data privacy considerations. The system executes these test cases with the goal of providing more diverse and extensive coverage compared to existing tools. Unlike single-agent systems, LambdaTest's approach employs multiple large language models. These support deeper reasoning and the generation of more comprehensive test suites, aiming for detailed validation of various AI application behaviours. "Every AI agent you deploy is unique, and that's both its greatest strength and its biggest risk! As AI applications become more complex, traditional testing approaches simply can't keep up with the dynamic nature of AI agents. Our Agent-to-Agent Testing platform thinks like a real user, generating smart, context-aware test scenarios that mimic real-world situations your AI might struggle with. Each test comes with clear validation checkpoints and the responses we'd expect to see," said Asad Khan, CEO and Co-Founder at LambdaTest. Impacts on testing speed and team resources LambdaTest says that businesses adopting Agent-to-Agent Testing will benefit from more rapid test creation, improved evaluation of AI agents, and decreased testing cycles. The company reports a five to ten-fold increase in test coverage through the platform's multi-agent system, providing a more detailed picture of how AI agents perform in practice. Integration with the HyperExecute system is designed to offer development teams fast feedback from test results, helping to reduce the interval between testing and product iteration. Automated processes also aim to reduce the reliance on manual quality assurance, with implications for cost efficiencies. The platform includes 15 different AI testing agents, covering areas such as security research and compliance validation. LambdaTest states that this is intended to ensure deployed AI agents meet requirements for robustness, security and reliability. The company's Agent-to-Agent Testing technology reflects ongoing efforts within the software testing sector to cope with the dynamic and evolving risks introduced by the increasing use of AI in business-critical systems.