
AI adoption in cloud security rises as firms seek data protection
Prowler's State of Cloud Security Report indicates a growing reliance on AI-driven technologies as organisations seek to address vulnerabilities and fill gaps left by traditional security systems.
Adoption rates
The report found that 79% of surveyed cloud security teams now use AI technologies to monitor and manage their environments. The figure points to a widespread shift towards automated systems as organisations look for ways to keep pace with expanding cloud infrastructure and growing compliance demands.
The adoption of AI in cloud security is being propelled by several factors, notably the ongoing cybersecurity skills gap. The report cites the 2024 ISC2 Cybersecurity Workforce Study, which estimates a global shortfall of 4.8 million cybersecurity professionals.
Some 44% of respondents using AI cited human augmentation, the ability to support and improve analyst decision-making, as a primary benefit. Improved data protection was noted by 42% of respondents, and 38% reported gains in threat detection and response capabilities.
Challenges and trust issues
Despite high adoption rates, the report highlights that only 42% of cloud teams saw actual improvements in data protection, and just 38% observed real progress in threat detection, suggesting that the benefits of AI are being realised unevenly across organisations. While there is optimism around AI, measurable improvements have yet to be broadly confirmed.
Prowler's findings also reveal that confidence in AI is tempered by a cautious outlook on its future impact. Only 27% of professionals believe that AI and machine learning-driven analytics will have the most significant effect on cloud security over the next three years. The survey suggests that this cautious optimism is linked to the early stage of adoption and organisations' need for clearer, evidence-based outcomes from AI-based tools.
"The pace at which cloud infrastructure is expanding is limiting the ability of many security teams to keep up. That's not just a challenge, it's a significant risk. The data shows a clear evolution and highlights the positives when adopting AI into how cloud security is being approached," said Toni de la Fuente, founder & CEO of Prowler. "Security professionals are not just using AI, they're relying on it to extend the reach and accuracy of their teams. As cloud environments grow more fragmented and real-time visibility becomes harder to maintain, AI's role in augmenting human effort and improving response times has become pivotal."
According to the report, the primary uses for AI within cloud security teams relate to the scale and complexity of modern cloud environments, which are often fragmented and require real-time monitoring that traditional systems struggle to deliver. AI tools are being used to automate monitoring, manage compliance requirements, and provide what many professionals describe as 'human augmentation' for overstretched teams.
Frameworks and transparency
The report concludes that, as AI technology matures, its position as a fundamental element of the security stack is being solidified across diverse sectors with substantial cloud footprints. The analysis also suggests that organisations will only fully realise the benefits of automation and advanced analytics if they integrate AI with transparent, adaptable security frameworks tailored to hybrid and multi-cloud systems.
Survey data in the report suggests that the central ongoing challenge for AI adoption is building greater trust by demonstrating real-world efficiency gains, resilience and responsiveness. Only then are organisations likely to report higher confidence in AI's influence on core cloud security operations.