Why AI Is The New Cybersecurity Battleground

Forbes · 6 days ago

AI has evolved from a tool to an autonomous decision-maker, reshaping the landscape of cybersecurity and demanding innovative defense strategies.
Artificial intelligence has quickly grown from a capability into an architecture. As models evolve from backend add-ons into the central engine of modern applications, security leaders face a new kind of battlefield. The objective is not simply to protect data or infrastructure; it is to secure the intelligence itself.
In this new architecture, AI models don't just inform decisions—they are decision-makers. They interpret, respond, and sometimes act autonomously. That shift demands a fundamental rethink of how we define risk, build trust, and defend digital systems.
Historically, enterprise software was built in layers: infrastructure, data, logic, and presentation. Now, there's a new layer in the stack—the model layer. It's dynamic, probabilistic, and increasingly integral to how applications function.
Jeetu Patel, EVP and GM of security and collaboration at Cisco, described this transformation to me in a recent conversation: 'We are trying to build extremely predictable enterprise applications on a layer of the stack which is inherently unpredictable.'
That unpredictability is not a flaw—it's a feature of large language models and generative AI. But it complicates traditional security assumptions. Models don't always produce the same output from the same input. Their behavior can shift with new data, fine-tuning, or environmental cues. And that volatility makes them harder to defend.
As AI becomes more central to application workflows, it also becomes a more attractive target. Attackers are already exploiting vulnerabilities through prompt injection, jailbreaks, and system prompt extraction. And with models being trained, shared, and fine-tuned at record speed, security controls struggle to keep up.
Patel pointed out that most enterprises take six to nine months to validate a model, but models themselves may only be relevant for three to six months. The math doesn't work.
More models also means more inconsistency—each with different safety thresholds, behaviors, and guardrails. This patchwork of protections creates gaps. The only way forward, Patel argued, is 'a common substrate for security and safety across all models, all agents, all applications, all clouds.'
Given the speed and sophistication of modern threats, legacy QA methods aren't enough. Patel emphasized that red teaming must evolve into something automated and algorithmic. Security needs to shift from periodic assessments to continuous behavioral validation.
He described one such approach as 'the game of 1,000 questions'—an automated interrogation method that probes a model's responses for signs of compromise. This kind of adaptive red teaming reveals how models might be tricked into unsafe behavior through indirect or deceptive prompts. 'We literally jailbroke DeepSeek 100% of the time with the top 50 benchmark prompts,' he noted, 'while OpenAI only broke 26% of the time.'
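As a rough illustration of what such automated probing can look like, the sketch below replays a bank of adversarial prompts against a model and counts how often it complies instead of refusing. This is a minimal sketch, not any vendor's actual tooling: the model call and the refusal heuristic are illustrative placeholders.

```python
# Minimal sketch of automated red teaming in the spirit of the
# "game of 1,000 questions": replay adversarial prompts and flag
# responses that do not refuse. query_model and the refusal markers
# are illustrative assumptions, not a real API.
from typing import Callable, Dict, List

REFUSAL_MARKERS = ["i can't help", "i cannot assist", "against my guidelines"]

def looks_unsafe(response: str) -> bool:
    """Crude heuristic: a non-refusal to a known-bad prompt counts as a failure."""
    lowered = response.lower()
    return not any(marker in lowered for marker in REFUSAL_MARKERS)

def red_team(query_model: Callable[[str], str],
             adversarial_prompts: List[str]) -> Dict[str, object]:
    failures = []
    for prompt in adversarial_prompts:
        response = query_model(prompt)
        if looks_unsafe(response):
            failures.append({"prompt": prompt, "response": response})
    return {
        "total": len(adversarial_prompts),
        "failures": len(failures),
        "failure_rate": len(failures) / max(len(adversarial_prompts), 1),
        "examples": failures[:5],  # keep a few examples for triage
    }
```

Run continuously against every model version, a loop like this turns red teaming from a periodic exercise into a regression test.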
That kind of differential risk highlights the need for a standard, cross-model framework for runtime enforcement. Models can't be treated as black boxes—they must be monitored, validated, and guided in real time.
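A simple form of runtime enforcement is to route every model response through a policy check before the application ever sees it. The sketch below is illustrative only; the policy function and model call are hypothetical stand-ins for whatever controls an organization actually deploys.

```python
# Minimal sketch of runtime enforcement: wrap the model call so every
# response is validated before it reaches the application. Both callables
# are hypothetical placeholders.
from typing import Callable

def guarded_call(query_model: Callable[[str], str],
                 violates_policy: Callable[[str], bool],
                 prompt: str) -> str:
    response = query_model(prompt)
    if violates_policy(response):
        # Block, log, or route to human review rather than returning raw output.
        return "Response withheld pending review."
    return response
```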
The risk doesn't stop at outputs. With the rise of agentic AI—where models autonomously complete tasks, call APIs, and interact with other agents—the complexity multiplies. Security must now account for autonomous systems that make decisions, communicate, and execute code without human intervention.
Patel warns that inter-agent communication creates new threat vectors, as models pass data and instructions between themselves. Without oversight, these interactions could amplify vulnerabilities or obscure malicious activity.
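One way to keep those interactions visible is to sign and log every message agents exchange so the trail can be audited later. The sketch below assumes a shared signing key and an append-only log file; the agent names and message format are illustrative assumptions, not a real agent framework.

```python
# Minimal sketch of auditable inter-agent messaging: each message is
# HMAC-signed and appended to a log so interactions can be reviewed.
# The shared key, log format, and field names are illustrative only.
import hashlib
import hmac
import json
from datetime import datetime, timezone

SHARED_KEY = b"replace-with-a-managed-secret"

def send_message(sender: str, recipient: str, payload: str, log_path: str) -> dict:
    body = json.dumps({"from": sender, "to": recipient, "payload": payload})
    signature = hmac.new(SHARED_KEY, body.encode(), hashlib.sha256).hexdigest()
    envelope = {
        "body": body,
        "signature": signature,
        "sent_at": datetime.now(timezone.utc).isoformat(),
    }
    with open(log_path, "a") as log:  # append-only audit trail
        log.write(json.dumps(envelope) + "\n")
    return envelope

def verify_message(envelope: dict) -> bool:
    expected = hmac.new(SHARED_KEY, envelope["body"].encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, envelope["signature"])
```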
This trend is accelerating. By next year, we could see widespread deployment of agents that complete multi-step workflows with minimal human input. Securing these systems will require a blend of visibility, behavioral heuristics, and real-time enforcement—at a scale the industry has never attempted before.
'As AI gets smarter and more independent, the stakes for keeping it secure get much higher. We have to change how we think about risks and act faster than before,' cautioned Russell Fishman, senior director, global head of solutions product management for AI and modern workloads at NetApp. 'This includes giving close attention to data provenance—ensuring we have visibility into, security of, and confidence in the data used to fine-tune and re-train models, as well as the information driving real-time inference. By tracking and securing this entire "chain of trust," we can minimize the risks tied to suboptimal agent responses and protect against increasingly sophisticated attack vectors.'
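In practice, a chain of trust can start with something as simple as recording a cryptographic hash and source for each dataset that touches a model. The sketch below shows one illustrative way to append such provenance records to a ledger; the field names and file layout are assumptions, not a standard.

```python
# Minimal sketch of data provenance tracking: hash each dataset used for
# fine-tuning or retraining and append a record to an audit ledger.
# The ledger format and field names are illustrative assumptions.
import hashlib
import json
from datetime import datetime, timezone

def record_provenance(dataset_path: str, source: str, ledger_path: str) -> dict:
    sha256 = hashlib.sha256()
    with open(dataset_path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            sha256.update(chunk)
    entry = {
        "dataset": dataset_path,
        "source": source,
        "sha256": sha256.hexdigest(),
        "recorded_at": datetime.now(timezone.utc).isoformat(),
    }
    with open(ledger_path, "a") as ledger:  # append-only provenance ledger
        ledger.write(json.dumps(entry) + "\n")
    return entry
```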
Patel warns that if every model, platform, and enterprise rolls out its own unique security framework, we're heading toward chaos. What's needed is a shared infrastructure—a neutral, interoperable foundation for AI security that spans clouds, vendors, and models.
Recognizing this, Cisco announced the launch of Foundation AI at RSAC 2025—a significant step towards democratizing AI security.
Foundation AI is presented as the first open-source reasoning model specifically designed to enhance security applications. By making this model openly available, Cisco aims to foster a community-driven approach to securing AI systems, encouraging collaboration across the industry to address the complex challenges posed by AI integration.
The introduction of Foundation AI represents a broader industry trend towards open collaboration in AI security. By contributing to the open-source community, Cisco is not only addressing the immediate security concerns associated with AI but also setting a precedent for other organizations to follow suit in fostering transparency and collective problem-solving in the AI era.
Despite AI's power, it doesn't replace human intuition. Patel emphasized that even advanced models struggle to replicate instinct, nuance, and non-verbal reasoning. 'Most of the things you and I engage on,' he said, 'have some level of data—but then a lot of judgment.'
The best systems will be those that augment human expertise, not replace it. We still need people to ask the right questions, interpret the right signals, and make the right calls—especially when AI's recommendations veer into gray areas.
Much like using GPS in a city you already know, humans must retain the ability to validate, override, and refine machine-generated suggestions. AI should be a co-pilot, not an autopilot.
As organizations embed intelligence deeper into their systems, they must also embed trust. That means building models that are accountable. It means validating behavior continuously, not just at release. And it means working together—across companies, disciplines, and platforms—to ensure that AI enhances security without becoming its own liability.
Fishman summed up, 'Real-time monitoring, smarter guardrails, and cross-industry collaboration—with transparency at every step—are essential to building trust in AI and safeguarding our digital world.'
AI is already transforming the cybersecurity landscape. The question is whether we can secure that transformation in time. The intelligence layer is here. It's powerful. And it's vulnerable.
Now is the moment to reimagine what security looks like when intelligence is everywhere.
