logo
Beware Of Agentic AI's Heel Turn As Corporate Security Villain

Beware Of Agentic AI's Heel Turn As Corporate Security Villain

Forbes2 days ago
Pieter Danhieux is the Co-Founder and Chairman/CEO of Secure Code Warrior.
As fast as generative artificial intelligence and large language models (LLMs) like ChatGPT have permeated business, academia and personal communications, the next phase of AI advancement is poised to just as quickly become part of the engine driving everything from customer service and supply chain management to healthcare and cybersecurity.
Agentic AI brings autonomy to AI systems, building on AI techniques to make decisions, take action and pursue goals independently, or at least with minimal human supervision. Where generative AI can write a report for you based on the prompts you give it, agentic AI can decide when to write the report, what to say in it and to whom to say it. And, it might not even require asking for your permission first.
The technology in its current form is still nascent, but it is being heralded as the next great leap in autonomous systems, boldly performing next-phase functions where previous AI systems could not tread, such as dynamically reconfiguring supply chains in response to natural or manmade emergencies or proactively ensuring that complex IT systems avoid downtime.
Gartner has forecast that by 2028, 33% of enterprise software applications will include agentic AI (in 2024, it was less than 1%), making it possible for 15% of all day-to-day work decisions to be made autonomously.
However, the great promise of agentic AI doesn't come without significant caveats. Its capabilities and autonomy present a potent enterprise threat vector beyond the realm of garden-variety security concerns. Giving self-optimizing, proactive AI systems the keys to perform independent actions can lead to adversarial behaviors, amplified biases that can cause systemic vulnerabilities and questions of accountability in the event of AI-orchestrated breaches or disruptions.
Enterprises need to assert AI governance and ensure that developers are equipped to maintain oversight, with the security skills to safely prompt and review AI-assisted code and commits.
A report by the Open Worldwide Application Security Project (OWASP) points out that agentic AI introduces new or 'agentic variations' of existing threats, some of them resulting from new components in the application architecture for agentic AI.
Among those threats are memory poisoning and tool misuse resulting from the integration of agent memory and tools. Other risks associated with tool misuse include remote code execution (RCE) and code attacks, which can arise from code generation, creating new attack vectors.
Other threats can arise when user identities are involved. For example, a new bug, referred to as a 'confused deputy' vulnerability, has been uncovered involving user identities embedded inside integrated tools and APIs. It can happen when an agentic AI, acting as a deputy to a human user, has higher privileges than the user it is working with at the time. The agent can then be fooled into taking unauthorized actions on behalf of the user. And if an agent doesn't have proper privilege isolation, it may not be able to distinguish between legitimate requests from its lower-privilege users and those that are part of an attack.
To stop this (as well as to prevent hijacking via prompt injections, identity spoofing and impersonation), organizations should be sure to reduce agent privileges when operating on behalf of a user. OWASP also recommends several other key steps, including ways to prevent memory poisoning, AI knowledge corruption and the manipulation of AI agent reasoning.
Meanwhile, enterprises must also be on guard against the rapidly mounting threat from attacks fueled by agentic AI. A report by Palo Alto Networks' Unit 42 detailed how agentic AI can be used to increase 'the speed, scale and sophistication of attacks' that have already been greatly accelerated by AI.
For example, they found that the mean time to exfiltrate (MTTE) data after an attacker gains access to a system dropped from an average of nine days in 2021 to two days in 2024. In one of five cases, MTTE happened in less than an hour. Unit 42 simulated a ransomware attack using AI at every stage of the process. They transitioned from initial compromise to data exfiltration in 25 minutes, representing a 100-fold increase in speed compared to a typical attack.
Agentic AI, with its ability to autonomously perform complex, multi-step operations and adapt its tactics during an attack, will only intensify offensive operations—possibly conducting entire attack campaigns with minimal human intervention in the near future.
Despite the speed, power and sophistication that agentic AI can bring to cyberattacks, enterprises aren't necessarily overmatched. Agentic AI may eventually lead to new styles of attacks, but currently, it appears that it will mostly turbocharge existing, known attacks. Organizations can, as OWASP advises, tighten identity controls and take other steps to prevent memory poisoning and AI corruption. They can also fight fire with fire, using agentic AI to enhance network monitoring and analysis of specific threats.
The foundations of good security need to be bolstered. And in the current environment, that begins with protecting software through secure coding practices performed by proactive developers with verified security expertise. They need to continue their ongoing education programs to effectively apply security best practices at the beginning of the software development lifecycle. People also need new guidance on how to use agentic AI tools safely. Developers with the proficiency to both prompt and review code output are also crucial to ensuring the safe and secure use of agentic AI.
Organizations that do not prioritize uplifting and continuous measurement of developer security skills will find themselves in a precarious position, fighting against a deluge of AI-generated code that is not being utilized with the critical thinking and hands-on threat assessment required to deem it safe, and ultimately realize the productivity gains these tools offer. Security programs must modernize at the breakneck pace at which code is now being delivered.
Forbes Technology Council is an invitation-only community for world-class CIOs, CTOs and technology executives. Do I qualify?
Orange background

Try Our AI Features

Explore what Daily8 AI can do for you:

Comments

No comments yet...

Related Articles

I ditched Claude for GPT-5 because of these 5 features — and I know I'll use them every day
I ditched Claude for GPT-5 because of these 5 features — and I know I'll use them every day

Tom's Guide

timea minute ago

  • Tom's Guide

I ditched Claude for GPT-5 because of these 5 features — and I know I'll use them every day

For the past few months, Claude has been my go-to chatbot. It used to be ChatGPT, and was for years, but with Anthropic's latest update, its service just felt better than the competition. However, OpenAI's answer in GPT-5 is now here. And it has received a mixed reception. Some loyal ChatGPT fans just can't get on with the update, and OpenAI at first didn't even let you use the old version once you had GPT-5. That has now changed, and OpenAI has stated that they are working hard to iron out any problems with GPT-5. However, I've since gone back to ChatGPT with this new update, and there are a few reasons why. Since GPT-5 came out, it has already gone through some of the major benchmark tests. These are examinations that AI models can be put through, testing their ability on mathematics, coding, writing, emotional intelligence, and more. So far, GPT-5 has managed to come out on top of a lot of these ranking systems. While it has fallen short on a few of them, including SimpleBench, a test comparing the model against human intelligence, for the most part it is now the leading option in the world of AI. One of the main tasks I use chatbots for is writing. Whether it is examining my own writing to check for errors or improve its quality, or helping me come up with inspiration for a given topic, it has quickly become my favorite part of AI. While ChatGPT was never bad at this, I always preferred the style that Claude would generate. It felt more assured and would take on the stylings that I requested in my prompts. However, one of the main improvements that came with GPT-5 was in the model's creativity and writing prowess. OpenAI claims to have made considerable changes to ChatGPT's ability to write creatively and understand more complicated writing prompts. Get instant access to breaking news, the hottest reviews, great deals and helpful tips. In my tests so far, GPT-5 seems more competent in this area, able to write from multiple perspectives in one piece of text, and truly understand more complicated writing styles. The ability to code through chatbots has become a big selling point in recent months. Each model is competing to be the best one at it, and with GPT-5, OpenAI appears to have, at least for me, taken back the crown. It is a very close race with Grok 4, with a split of rankings showing each of the two in the top spot. However, paired with the other features, like its writing abilities, GPT-5 just about takes the win for me. Coding through AI has become a feature I'm using more and more. While I have mainly been using Grok to do this, the GPT-5 update is making me reconsider this, and hopefully, can match the experience I've been having with xAI's tool. Surprisingly, Claude doesn't have the ability to create images. This seems surprising considering how common this feature has become across the different chatbots on the market, but it plays mainly the focus Anthropic wants Claude to have. However, ChatGPT continues to have one of the best AI image generators on the market. Having all of this in one place helps to make ChatGPT a more compelling sell for me. While I don't use image generation as much, I do find it can be really useful for creating graphics alongside reports generated in deep research. This is something GPT-5 should hopefully be able to do well.

Korea to Spend $150 Billion to Fund President Lee's Policy Goals
Korea to Spend $150 Billion to Fund President Lee's Policy Goals

Bloomberg

time2 minutes ago

  • Bloomberg

Korea to Spend $150 Billion to Fund President Lee's Policy Goals

South Korea unveiled a 210 trillion won ($152 billion) fiscal plan for 2026-2030 to support President Lee Jae Myung's core pledges, with funding to be drawn from enhanced revenue measures and greater spending efficiency. The plan, announced Wednesday by the presidential state affairs planning committee, will allocate funds across 123 projects in innovation, growth, social welfare and foreign policy, with major investments in artificial intelligence, where the government aims to become a top-three AI power. Other areas of priority include biotechnology, healthcare and defense, among others.

DOWNLOAD THE APP

Get Started Now: Download the App

Ready to dive into a world of global content with local flavor? Download Daily8 app today from your preferred app store and start exploring.
app-storeplay-store