Latest news with #PieterDanhieux


Forbes
11-08-2025
- Business
- Forbes
Beware Of Agentic AI's Heel Turn As Corporate Security Villain
Pieter Danhieux is the Co-Founder and Chairman/CEO of Secure Code Warrior.

As fast as generative artificial intelligence and large language models (LLMs) like ChatGPT have permeated business, academia and personal communications, the next phase of AI advancement is poised to just as quickly become part of the engine driving everything from customer service and supply chain management to healthcare and cybersecurity. Agentic AI brings autonomy to AI systems, building on AI techniques to make decisions, take action and pursue goals independently, or at least with minimal human supervision. Where generative AI can write a report for you based on the prompts you give it, agentic AI can decide when to write the report, what to say in it and to whom to say it. And it might not even ask for your permission first.

The technology in its current form is still nascent, but it is being heralded as the next great leap in autonomous systems, boldly performing next-phase functions where previous AI systems could not tread, such as dynamically reconfiguring supply chains in response to natural or manmade emergencies or proactively ensuring that complex IT systems avoid downtime. Gartner has forecast that by 2028, 33% of enterprise software applications will include agentic AI (in 2024, it was less than 1%), making it possible for 15% of all day-to-day work decisions to be made autonomously.

However, the great promise of agentic AI doesn't come without significant caveats. Its capabilities and autonomy present a potent enterprise threat vector beyond the realm of garden-variety security concerns. Giving self-optimizing, proactive AI systems the keys to perform independent actions can lead to adversarial behaviors, amplified biases that cause systemic vulnerabilities, and questions of accountability in the event of AI-orchestrated breaches or disruptions. Enterprises need to assert AI governance and ensure that developers are equipped to maintain oversight, with the security skills to safely prompt and review AI-assisted code and commits.

A report by the Open Worldwide Application Security Project (OWASP) points out that agentic AI introduces new or 'agentic variations' of existing threats, some of them resulting from new components in the application architecture for agentic AI. Among those threats are memory poisoning and tool misuse, which stem from the integration of agent memory and tools. Other risks associated with tool misuse include remote code execution (RCE) and code attacks, which can arise from code generation and create new attack vectors.

Other threats can arise when user identities are involved. For example, a new instance of the classic 'confused deputy' vulnerability has been uncovered involving user identities embedded inside integrated tools and APIs. It can occur when an agentic AI, acting as a deputy to a human user, has higher privileges than the user it is working with at the time. The agent can then be fooled into taking unauthorized actions on behalf of that user. And if an agent doesn't have proper privilege isolation, it may not be able to distinguish between legitimate requests from its lower-privilege users and those that are part of an attack. To stop this (as well as to prevent hijacking via prompt injections, identity spoofing and impersonation), organizations should reduce agent privileges when operating on behalf of a user.
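To make that privilege-isolation advice concrete, here is a minimal, hypothetical sketch in Python of an agent whose tool calls are constrained to the intersection of its own permissions and those of the requesting user. The names (User, TOOLS, run_tool) are illustrative and not tied to any particular agent framework.

```python
# Hypothetical sketch of privilege isolation for an agent acting on a user's
# behalf. All names here are illustrative, not part of any specific framework.
from dataclasses import dataclass, field


@dataclass
class User:
    name: str
    permissions: set[str] = field(default_factory=set)


# The agent's own service account may hold broad permissions...
AGENT_PERMISSIONS = {"read_tickets", "write_tickets", "export_data"}

# ...but each tool declares the permission it requires.
TOOLS = {
    "summarize_ticket": "read_tickets",
    "close_ticket": "write_tickets",
    "export_customer_data": "export_data",
}


def run_tool(tool_name: str, user: User) -> str:
    """Execute a tool only with the intersection of agent and user privileges.

    A lower-privilege user therefore cannot trick the agent (the 'confused
    deputy') into performing a high-privilege action on their behalf.
    """
    required = TOOLS[tool_name]
    effective = AGENT_PERMISSIONS & user.permissions
    if required not in effective:
        raise PermissionError(f"{user.name} may not invoke {tool_name}")
    return f"{tool_name} executed for {user.name}"


if __name__ == "__main__":
    analyst = User("analyst", {"read_tickets"})
    print(run_tool("summarize_ticket", analyst))       # allowed
    try:
        run_tool("export_customer_data", analyst)      # blocked for this user
    except PermissionError as exc:
        print(f"Blocked: {exc}")
```

The key design choice in this sketch is that the agent never acts with more authority than the human it is serving, which is precisely the confused-deputy failure mode described above.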
OWASP also recommends several other key steps, including ways to prevent memory poisoning, AI knowledge corruption and the manipulation of AI agent reasoning.

Meanwhile, enterprises must also be on guard against the rapidly mounting threat from attacks fueled by agentic AI. A report by Palo Alto Networks' Unit 42 detailed how agentic AI can be used to increase 'the speed, scale and sophistication of attacks' that have already been greatly accelerated by AI. For example, the researchers found that the mean time to exfiltrate (MTTE) data after an attacker gains access to a system dropped from an average of nine days in 2021 to two days in 2024. In one in five cases, exfiltration happened in less than an hour. Unit 42 also simulated a ransomware attack using AI at every stage of the process, moving from initial compromise to data exfiltration in 25 minutes, roughly a 100-fold increase in speed compared with a typical attack. Agentic AI, with its ability to autonomously perform complex, multi-step operations and adapt its tactics during an attack, will only intensify offensive operations, possibly conducting entire attack campaigns with minimal human intervention in the near future.

Despite the speed, power and sophistication that agentic AI can bring to cyberattacks, enterprises aren't necessarily overmatched. Agentic AI may eventually lead to new styles of attacks, but for now it appears that it will mostly turbocharge existing, known attacks. Organizations can, as OWASP advises, tighten identity controls and take other steps to prevent memory poisoning and AI corruption. They can also fight fire with fire, using agentic AI to enhance network monitoring and the analysis of specific threats.

The foundations of good security need to be bolstered. In the current environment, that begins with protecting software through secure coding practices performed by proactive developers with verified security expertise. Developers need ongoing education to effectively apply security best practices at the beginning of the software development lifecycle, as well as new guidance on how to use agentic AI tools safely. Developers with the proficiency to both prompt AI tools and review their code output are crucial to ensuring the safe and secure use of agentic AI.

Organizations that do not prioritize uplifting and continuously measuring developer security skills will find themselves in a precarious position: fighting a deluge of AI-generated code that is not being handled with the critical thinking and hands-on threat assessment required to deem it safe, and unable to realize the productivity gains these tools offer. Security programs must modernize at the breakneck pace at which code is now being delivered.


Techday NZ
18-06-2025
- Business
- Techday NZ
Secure Code Warrior unveils free AI security rules for developers
Secure Code Warrior has released AI Security Rules on GitHub, offering developers a free resource aimed at improving code security when working with AI coding tools. The resource is designed for use with a variety of AI coding tools, including GitHub Copilot, Cline, Roo, Cursor, Aider, and Windsurf.

The newly available rulesets are structured to provide security-focused guidance to developers who are increasingly using AI to assist with code generation and development processes. Secure Code Warrior's ongoing goal is to enable developers to produce more secure code from the outset when leveraging AI, aligning with broader efforts to embed security awareness and best practices across development workflows. The company emphasises that developers who possess a strong understanding of security can create much safer and higher-quality code with AI assistance than those who lack such proficiency.

Security within workflow

"These guardrails add a meaningful layer of defence, especially when developers are moving fast, multitasking, or find themselves trusting AI tools a little too much," said Pieter Danhieux, Secure Code Warrior Co-Founder & CEO. "We've kept our rules clear, concise and strictly focused on security practices that work across a wide range of environments, intentionally avoiding language or framework-specific guidance. Our vision is a future where security is seamlessly integrated into the developer workflow, regardless of how code is written. This is just the beginning."

The AI Security Rules offer what the company describes as a pragmatic and lightweight baseline that can be adopted by any developer or organisation, regardless of whether they are a Secure Code Warrior customer. The rules avoid language- or framework-specific advice, allowing broad applicability.

Features and flexibility

The rulesets function as secure defaults, steering AI tools away from hazardous coding patterns and well-known security pitfalls such as unsafe use of functions like eval, insecure authentication methods, or database queries built without parameterisation (an illustrative example appears at the end of this article). The rules are grouped by development domain, including web frontend, backend, and mobile, so that developers in varied environments can benefit. They are designed to be adaptable and can be incorporated into any AI coding tool that supports external rule files.

Another highlighted feature is the rules' public availability and ease of adjustment: development teams of any size or configuration can tailor them to their workflow, technology stack, or project requirements. This is intended to foster consistency and collaboration within and between development teams when reviewing or generating AI-assisted code.

Supplementary content

The introduction of the AI Security Rules follows several recent releases from Secure Code Warrior centred around artificial intelligence and large language model (LLM) security. These include four new courses, such as "Coding With AI" and "OWASP Top 10 for LLMs", along with six interactive walkthrough missions, upwards of 40 new AI Challenges, and an expanded set of guidelines and video content. All resources are available on demand within the Secure Code Warrior platform. This rollout represents the initial phase of a broader initiative to provide ongoing training and up-to-date resources supporting secure development as AI technologies continue to be integrated into software engineering practices.
The company states that additional related content is already in development and is expected to be released in the near future. Secure Code Warrior's efforts align with increasing industry focus on the intersection of AI and cybersecurity, as the adoption of AI coding assistants becomes widespread. The emphasis on clear, practical security rules is intended to help mitigate common vulnerabilities that can be introduced through both manual and AI-assisted programming. The AI Security Rules are publicly available on GitHub for any developers or organisations wishing to incorporate the guidance into their existing development operations using compatible AI tools.
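As a general illustration of the kind of secure default such rulesets encode (this snippet is not drawn from Secure Code Warrior's published rules), the following Python example contrasts an injection-prone, string-built SQL query with a parameterised one:

```python
# Illustrative only: the kind of pattern a security ruleset steers AI-assisted
# code away from. Not taken from Secure Code Warrior's AI Security Rules.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
conn.execute("INSERT INTO users (email) VALUES ('alice@example.com')")

user_input = "alice@example.com' OR '1'='1"  # attacker-controlled value

# Unsafe: concatenating untrusted input into the SQL string enables injection.
unsafe_query = f"SELECT * FROM users WHERE email = '{user_input}'"
print(conn.execute(unsafe_query).fetchall())   # returns every row in the table

# Safe: a parameterised query treats the input strictly as data, not as SQL.
safe_query = "SELECT * FROM users WHERE email = ?"
print(conn.execute(safe_query, (user_input,)).fetchall())  # returns no rows
```

A rule file consumed by an AI coding assistant would express this as guidance, for example instructing the tool to always use parameterised queries and never interpolate user input into SQL, so that generated code defaults to the second pattern.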