
Pentera unveils AI web attack testing to boost cyber defences
The latest addition introduces AI-driven payload generation and adaptive testing logic, aiming to provide security teams with tools to emulate contemporary threats more effectively. These capabilities are intended to deliver more nuanced and context-aware attack emulation, supporting organisations in validating their defences against increasingly sophisticated, AI-assisted cyberattacks.
Pentera's Chief Product Officer, Ran Tamir, commented on the growing impact of artificial intelligence in the cybersecurity landscape, stating, "AI is leveling the playing field, turning even keyboard kiddies into credible threat actors. Leveraging AI, attackers can move faster and with more precision than ever before. With the addition of AI to our adversarial testing arsenal we're giving defenders that same advantage, adapting in real time to new threat patterns and tuning each test with the context needed to uncover what traditional scans miss. We have a strong vision for how AI will permeate throughout the security validation practice, and these additions are only the beginning."
The new capabilities extend Pentera's AI suite, which commenced with the introduction of AI Insight Reporting earlier in the year. Drawing on the experience from that launch, the company is now focusing on the external-facing web attack surface, incorporating AI in several key areas.
AI-driven payload generation
According to Pentera, the system can now generate attack payloads informed by current threat intelligence, allowing for faster emulation of newly discovered attack techniques. By building payloads based on the latest trends, the platform is designed to ensure that testing keeps pace with the evolution of real-world cyberattacks.
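Pentera has not published implementation details, but the general pattern of building test payloads from fresh threat intelligence can be pictured with a minimal sketch. Everything below, including the feed structure, the template names, and the build_payloads helper, is a hypothetical illustration of the concept rather than the product's method.

```python
# Illustrative sketch only: not Pentera's implementation.
# Maps hypothetical threat-intelligence entries onto attack payload templates.
from dataclasses import dataclass

@dataclass
class ThreatIntelEntry:
    technique: str   # e.g. an injection technique family observed in the wild
    indicator: str   # a marker or string associated with the campaign

# Hypothetical template library keyed by technique family.
PAYLOAD_TEMPLATES = {
    "sql_injection": "' OR 1=1 -- {indicator}",
    "xss": "<script>/* {indicator} */alert(1)</script>",
    "ssti": "{{{{7*7}}}} <!-- {indicator} -->",
}

def build_payloads(feed: list[ThreatIntelEntry]) -> list[str]:
    """Turn recent intel entries into candidate test payloads."""
    payloads = []
    for entry in feed:
        template = PAYLOAD_TEMPLATES.get(entry.technique)
        if template:
            payloads.append(template.format(indicator=entry.indicator))
    return payloads

if __name__ == "__main__":
    feed = [
        ThreatIntelEntry("sql_injection", "campaign-2024-demo"),
        ThreatIntelEntry("xss", "demo-marker"),
    ]
    for payload in build_payloads(feed):
        print(payload)
```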
PII-aware attack chaining
Another capability highlighted by Pentera is PII-aware attack chaining. The system proactively identifies and extracts exposed Personally Identifiable Information (PII) during testing, automatically leveraging that data within identity threat attack emulations when relevant. This aims to reflect how attackers might exploit such data in actual intrusion attempts.
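As a rough illustration of the concept rather than Pentera's actual logic, a tester might scan responses for exposed PII and, when something is found, feed it into a follow-on identity-focused test step. The regex patterns and the chain_identity_tests function below are simplified assumptions for demonstration only.

```python
# Illustrative sketch only: simplified PII detection and attack chaining.
import re

# Very rough patterns for demonstration; real detection is far more nuanced.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def extract_pii(response_body: str) -> dict[str, list[str]]:
    """Collect any exposed PII found in a response body."""
    found = {}
    for label, pattern in PII_PATTERNS.items():
        matches = pattern.findall(response_body)
        if matches:
            found[label] = matches
    return found

def chain_identity_tests(pii: dict[str, list[str]]) -> list[str]:
    """Describe follow-on identity test steps driven by the discovered PII."""
    steps = []
    for email in pii.get("email", []):
        steps.append(f"emulate password spraying against account {email}")
        steps.append(f"check {email} for credential reuse on other exposed services")
    return steps

if __name__ == "__main__":
    body = "Contact: jane.doe@example.com, record 123-45-6789"
    for step in chain_identity_tests(extract_pii(body)):
        print(step)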
No language or cultural barriers
Pentera's platform reportedly accommodates variations in language, naming conventions, and terminology across different regions. The company states this enables consistent and accurate attack simulations regardless of how user-facing components are labelled or structured in a given locale. This feature aims to improve the realism and applicability of tests in diverse environments.
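One way to picture this, again as a hypothetical sketch rather than the product's approach, is a normalisation layer that maps localised form labels onto canonical field roles before any attack logic runs. The alias lists below are small examples, not an exhaustive or product-accurate mapping.

```python
# Illustrative sketch only: normalising localised login-form labels.
LABEL_ALIASES = {
    "username": {"username", "user", "benutzername", "nom d'utilisateur", "usuario"},
    "password": {"password", "passwort", "mot de passe", "contraseña"},
}

def canonical_field(label: str) -> str | None:
    """Map a localised form label to a canonical field role, if recognised."""
    normalised = label.strip().lower()
    for role, aliases in LABEL_ALIASES.items():
        if normalised in aliases:
            return role
    return None

if __name__ == "__main__":
    for label in ["Benutzername", "Mot de passe", "Usuario", "Unknown field"]:
        print(label, "->", canonical_field(label))
```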
System-aware logic
The platform also features system-aware logic within its attack tactics. It can recognise the type of system it is interacting with and attempt the most relevant default credentials based on how authentication is structured in each case. This approach is intended to support more precise and context-driven attack scenarios.
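A minimal sketch of the general idea, assuming a simple fingerprint-then-lookup approach: the server banners, URL paths, and credential pairs below are placeholders chosen for illustration, not Pentera's data or detection logic.

```python
# Illustrative sketch only: choosing default credentials by detected system type.
DEFAULT_CREDENTIALS = {
    "tomcat": [("admin", "admin"), ("tomcat", "tomcat")],
    "jenkins": [("admin", "password")],
}

def fingerprint(server_header: str, login_path: str) -> str | None:
    """Guess the system type from a server header and login URL path."""
    header = server_header.lower()
    if "tomcat" in header or login_path.endswith("/manager/html"):
        return "tomcat"
    if "jenkins" in header or login_path.startswith("/jenkins"):
        return "jenkins"
    return None

def candidate_logins(server_header: str, login_path: str):
    """Yield only the default credential pairs relevant to the detected system."""
    system = fingerprint(server_header, login_path)
    yield from DEFAULT_CREDENTIALS.get(system, [])

if __name__ == "__main__":
    for user, password in candidate_logins("Apache-Coyote/1.1 (Tomcat)", "/manager/html"):
        print(f"would try {user}:{password}")
```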
AI security insights reporting
Alongside the AI-based web attack testing, Pentera has also introduced AI-powered security posture reporting specifically for assets exposed externally. These reports analyse historical test data across a selected timeframe, surfacing trends in security posture, regressions, and top remediation priorities.
The AI insights reports are exportable, supporting both technical teams and executives by offering a clear overview of exposure and progress over time. The goal, according to Pentera, is to furnish stakeholders with actionable intelligence to guide security priorities and track the effectiveness of remediation efforts.
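The reporting itself is not publicly specified, but the underlying idea of comparing test results across a timeframe can be sketched as follows. The Finding record, the severity labels, and the simple regression rule (an asset whose resolved finding reappears unresolved later in the window) are illustrative assumptions.

```python
# Illustrative sketch only: summarising posture trends across historical test runs.
from collections import Counter
from dataclasses import dataclass
from datetime import date

@dataclass
class Finding:
    run_date: date
    asset: str
    severity: str   # "low", "medium", "high", "critical"
    resolved: bool

def posture_summary(findings: list[Finding], start: date, end: date) -> dict:
    """Count open findings by severity within the timeframe and flag regressions."""
    in_window = [f for f in findings if start <= f.run_date <= end]
    open_by_severity = Counter(f.severity for f in in_window if not f.resolved)
    resolved_assets = {f.asset for f in in_window if f.resolved}
    regressions = sorted({f.asset for f in in_window
                          if not f.resolved and f.asset in resolved_assets})
    return {"open_by_severity": dict(open_by_severity), "regressions": regressions}

if __name__ == "__main__":
    findings = [
        Finding(date(2024, 5, 1), "web-01", "high", resolved=True),
        Finding(date(2024, 6, 1), "web-01", "high", resolved=False),
        Finding(date(2024, 6, 1), "api-02", "medium", resolved=False),
    ]
    print(posture_summary(findings, date(2024, 5, 1), date(2024, 6, 30)))
```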
Pentera's growing suite of AI tools reflects a broader movement in the cyber defence sector, where rapid advancements in attack automation and adversarial AI present ongoing challenges to enterprise security.
The company focuses on equipping security teams with assessment and validation capabilities that keep pace with developments in the threat landscape.