21-03-2025
What Executives Must Know When Harnessing Enterprise AI
Keren Katz is an AI & security leader with 10 years in management & hands-on roles, leads Security Detection at Apex Security.
getty
Today, almost every enterprise is impacted by generative AI (GenAI). For many, the initial focus was on leveraging GenAI to enhance daily business processes, accelerating content creation, analysis and communication. However, in 2025, the landscape evolved dramatically with the rise of GenAI-powered copilots, agents and enterprise applications—all fueled by organizational data sources.
Leading examples include Microsoft 365 Copilot, Gemini for Google Workspace, Slack AI and Notion AI, all designed to seamlessly integrate into business workflows.
Enterprise AI—the use of AI, fueled with enterprise data, to amplify business-critical processes and operations—is reshaping workplace efficiency, making access to internal data faster and more intuitive. Tasks that once took hours or even days-such as creating presentations, analyzing legal documents or making investments decisions—can now be completed in a fraction of the time, allowing employees to focus on high-value tasks that drive business impact.
At Apex—we see the tremendous value Enterprise AI users are getting, on a daily basis. And this trend is increasing by the day across all industries—from tech to finance and health—and across all company sizes. Yet, we also see the tremendous risks—with massive opportunities come even greater risks.
The same technology that enables faster, smarter decision making also presents significant security and regulatory challenges.
Here are four key risks executives need to address:
Managing and tracking permissions has always been complex, but with the rise of Enterprise AI this challenge multiples exponentially. AI copilots don't inherently distinguish between restricted and accessible data as permission controls are overlooked—which happens more often than expected. Without strong safeguards, sensitive information can be exposed, putting the organization at risk.
Enterprise AI democratizes access to data—but that means curious employees may unknowingly request sensitive information they shouldn't have. In one case that my company observed, engineers and marketers queried an AI copilot for company cash flow reports and financial forecasts—requests that, if granted, could result in catastrophic financial exposure if shared.
The risks extend beyond financial data. An employee could query the chat or copilot to get access to colleagues' email content, potentially exposing personal information, client communications or executive discussions. If such a request is approved, it could violate employee privacy, breach client agreements and jeopardize strategic plans.
If an attacker compromises even a low-level user's credentials, enterprise AI copilots and applications become an instant threat vector for data leakage.
Before enterprise AI, attackers had to move laterally across the network and escalate privileges before accessing sensitive data. With AI copilots, however, a low-level account can simply ask the AI for proprietary information such as financials, legal documents, intellectual property or even critical security credentials that could serve as initial access secrets.
Less of a forensic footprint makes detection far more difficult, and the lack of visibility makes it nearly impossible. This significantly lowers the barrier for cyberattacks and increases the speed and efficiency of data theft—sometimes in minutes, before security teams even detect an intrusion.
Attackers don't need to breach your network to manipulate AI-generated content. Instead, they can poison AI models or inject false data into the enterprise information that large language models (LLMs) use as context.
By compromising enterprise data sources that AI relies on—particularly through retrieval-augmented generation (RAG)—attackers can alter outputs even from outside the network. One method is indirect prompt injection attacks, where something as simple as an email or calendar invite can influence how the AI responds.
The real-world implications of these attacks are significant. Malicious actors can inject harmful links into AI-generated emails, enabling highly sophisticated phishing campaigns. AI can also be manipulated to misinform employees, tricking them into authorizing fraudulent financial transactions—such as in CEO injection attacks.
Even critical business documents, including financial models, legal agreements or engineering specifications, can be corrupted by manipulated AI suggestions. If AI-generated responses become untrustworthy, enterprise decision-making collapses, leading to reputational damage, financial losses and serious legal consequences.
According to Gartner, by 2028, "33% of enterprise software applications will incorporate agentic AI, a significant rise from less than 1% in 2024." As AI capabilities advance, autonomous decision-making will increase—and with it, the risk of unintended or harmful actions.
For example, AI agents could mistakenly share sensitive presentations with external recipients, leading to data leakage. In financial settings, an AI system might misinterpret a rule and automatically process an incorrect transaction. There is also the risk of rogue AI agents taking destructive actions due to unpredictable, non-deterministic behavior.
This growing 'AI autonomy dilemma' will likely be one of the biggest challenges enterprises face in 2025 and beyond.
To harness enterprise AI's power while minimizing risks, enterprises must adopt a proactive security-first approach.
Every enterprise AI transaction—whether through copilots, agents or enterprise applications—should be logged, monitored and auditable to ensure transparency and security.
It is essential to implement detection mechanisms that can identify and block malicious AI-generated content before it reaches users. Additionally, enterprises should use AI-specific security solutions to detect and prevent incidents of data exposure and leakage in AI-generated outputs.
AI agents should be closely monitored to ensure they cannot execute actions without human verification. For critical operational decisions, enterprises should require multilayered approvals before allowing AI to take action.
Enterprise AI is not just another trend—in fact, I believe it's the defining technological shift of this decade.
As an executive, your role is not just to drive AI adoption but to ensure it scales safely so that the rewards outweigh the risks. By embracing AI with strong security foundations, organizations can better position themselves to maximize AI's potential without compromising trust or compliance.
Forbes Technology Council is an invitation-only community for world-class CIOs, CTOs and technology executives. Do I qualify?