
Latest news with #ApexSecurity

Tenable to acquire Apex Security, bolstering AI risk control

Techday NZ

2 days ago



Tenable has announced its intent to acquire Apex Security to expand its exposure management capabilities across the artificial intelligence (AI) attack surface. The planned acquisition is aimed at incorporating Apex Security's technology into Tenable's exposure management platform as AI adoption accelerates and new cyber risks emerge.

Tenable has previously addressed AI-related security concerns through its Tenable AI Aware product, introduced in 2024, which assists organisations in identifying and assessing AI usage across their operations. Integrating Apex Security's capabilities would allow Tenable to move beyond detection and assessment, enabling organisations to govern AI usage, enforce policies, and control exposure risks for both off-the-shelf and in-house-developed AI systems.

Generative AI and autonomous systems are contributing to a broader and more complex attack surface, exposing organisations to risks such as shadow AI applications, AI-generated code, synthetic identities, and unregulated cloud services. The expansion of Tenable's exposure management offering comes at a time when cyber risk management is adapting to the pace and scale of AI-driven digital transformation.

Steve Vintz, Co-Chief Executive Officer and Chief Financial Officer at Tenable, said: "AI dramatically expands the attack surface, introducing dynamic, fast-moving risks most organisations aren't prepared for. Tenable's strategy has always been to stay ahead of attack surface expansion — not just managing exposures, but eliminating them before they can be exploited."

Mark Thurmond, Co-Chief Executive Officer at Tenable, spoke about the need to address AI risks proactively. He said: "As organisations move quickly to adopt AI, many recognise that now is the moment to get ahead of the risk — before large-scale attacks materialise. Apex delivers the visibility, context, and control security teams need to reduce AI-generated exposure proactively. It will be a powerful addition to the Tenable One platform and a perfect fit for our preemptive approach to cybersecurity."

Apex Security, founded in 2023, has attracted support from Chief Information Security Officers (CISOs) as well as prominent investors such as Sam Altman of OpenAI, Clem Delangue of Hugging Face, and venture capital firms Sequoia Capital and Index Ventures. The company's focus has been on securing AI usage among developers and general staff, helping address policy enforcement, usage management, and compliance challenges linked to AI adoption.

Matan Derman, Chief Executive Officer and Co-Founder of Apex Security, commented on the strategic fit with Tenable. He said: "The AI attack surface is deeply intertwined with everything else organisations are already securing. Treating it as part of exposure management is the most strategic approach. We're excited to join forces with Tenable to help customers manage AI risk in context — not as a silo, but as part of their broader environment."

Following the completion of the acquisition, Tenable expects to begin delivering integrated capabilities as part of the Tenable One platform during the second half of 2025. Tenable describes Tenable One as an exposure management platform that brings together visibility, context, and management for a range of attack surfaces, from IT infrastructure to cloud environments.

The financial terms of the deal have not been disclosed. The transaction is expected to close later this quarter, pending customary approvals and closing conditions.

Tenable Announces Intent to Acquire Apex Security

Channel Post MEA

3 days ago



Tenable has announced its intent to acquire Apex Security, an innovator in securing the rapidly expanding AI attack surface. Tenable believes the acquisition, once completed, will strengthen Tenable's ability to help organizations identify and reduce cyber risk in a world increasingly shaped by artificial intelligence.

Generative AI tools and autonomous systems are rapidly expanding the attack surface and introducing new risks — from shadow AI apps and AI-generated code to synthetic identities and ungoverned cloud services. In 2024, Tenable launched Tenable AI Aware, which already helps thousands of organizations detect and assess AI usage across their environments. Adding Apex capabilities will expand on that foundation — adding the ability to govern usage, enforce policy, and control exposure across both the AI that organizations use and the AI they build. This move reinforces Tenable's long-standing strategy of delivering scalable, unified exposure management as AI adoption accelerates.

'AI dramatically expands the attack surface, introducing dynamic, fast-moving risks most organizations aren't prepared for,' said Steve Vintz, Co-CEO and CFO, Tenable. 'Tenable's strategy has always been to stay ahead of attack surface expansion — not just managing exposures, but eliminating them before they can be exploited.'

'As organizations move quickly to adopt AI, many recognize that now is the moment to get ahead of the risk — before large-scale attacks materialize,' said Mark Thurmond, Co-CEO, Tenable. 'Apex delivers the visibility, context, and control security teams need to reduce AI-generated exposure proactively. It will be a powerful addition to the Tenable One platform and a perfect fit for our preemptive approach to cybersecurity.'

Founded in 2023, Apex attracted early interest from CISOs and top investors, including Sam Altman (OpenAI), Clem Delangue (Hugging Face), and venture capital firms Sequoia Capital and Index Ventures. The company quickly emerged as an innovator in securing the use of AI by developers and everyday employees alike — addressing the growing need to manage usage, enforce policy, and ensure compliance at scale.

'The AI attack surface is deeply intertwined with everything else organizations are already securing. Treating it as part of exposure management is the most strategic approach. We're excited to join forces with Tenable to help customers manage AI risk in context — not as a silo, but as part of their broader environment,' said Matan Derman, CEO and Co-Founder of Apex Security.

Following the acquisition close, Tenable expects to deliver integrated capabilities in the second half of 2025 as part of Tenable One — the industry's first and most comprehensive exposure management platform. The financial terms of the deal were not disclosed. The deal is expected to close later this quarter.

What Executives Must Know When Harnessing Enterprise AI

Forbes

21-03-2025



Keren Katz is an AI and security leader with 10 years in management and hands-on roles, and leads Security Detection at Apex Security.

Today, almost every enterprise is impacted by generative AI (GenAI). For many, the initial focus was on leveraging GenAI to enhance daily business processes, accelerating content creation, analysis and communication. However, in 2025, the landscape evolved dramatically with the rise of GenAI-powered copilots, agents and enterprise applications—all fueled by organizational data sources. Leading examples include Microsoft 365 Copilot, Gemini for Google Workspace, Slack AI and Notion AI, all designed to seamlessly integrate into business workflows.

Enterprise AI—the use of AI, fueled with enterprise data, to amplify business-critical processes and operations—is reshaping workplace efficiency, making access to internal data faster and more intuitive. Tasks that once took hours or even days—such as creating presentations, analyzing legal documents or making investment decisions—can now be completed in a fraction of the time, allowing employees to focus on high-value tasks that drive business impact.

At Apex, we see the tremendous value Enterprise AI users are getting on a daily basis, and this trend is increasing by the day across all industries—from tech to finance and health—and across all company sizes. Yet we also see the tremendous risks: with massive opportunities come even greater risks. The same technology that enables faster, smarter decision making also presents significant security and regulatory challenges. Here are four key risks executives need to address:

Managing and tracking permissions has always been complex, but with the rise of Enterprise AI this challenge multiplies exponentially. AI copilots don't inherently distinguish between restricted and accessible data when permission controls are overlooked—which happens more often than expected. Without strong safeguards, sensitive information can be exposed, putting the organization at risk.

Enterprise AI democratizes access to data—but that means curious employees may unknowingly request sensitive information they shouldn't have. In one case that my company observed, engineers and marketers queried an AI copilot for company cash flow reports and financial forecasts—requests that, if granted, could have resulted in catastrophic financial exposure. The risks extend beyond financial data. An employee could query the chat or copilot to get access to colleagues' email content, potentially exposing personal information, client communications or executive discussions. If such a request is approved, it could violate employee privacy, breach client agreements and jeopardize strategic plans.

If an attacker compromises even a low-level user's credentials, enterprise AI copilots and applications become an instant threat vector for data leakage. Before enterprise AI, attackers had to move laterally across the network and escalate privileges before accessing sensitive data. With AI copilots, however, a low-level account can simply ask the AI for proprietary information such as financials, legal documents, intellectual property or even critical security credentials that could serve as initial access secrets. A smaller forensic footprint makes detection far more difficult, and the lack of visibility can make it nearly impossible. This significantly lowers the barrier for cyberattacks and increases the speed and efficiency of data theft—sometimes in minutes, before security teams even detect an intrusion.
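
Both the permission risk and the credential-abuse risk above come down to the same enforcement point: whatever a copilot retrieves on a user's behalf should be filtered against that user's own entitlements before it reaches the model. The snippet below is a minimal Python sketch of that idea under simplified assumptions; the class names, group model and audit call are hypothetical illustrations, not part of any Apex Security or Tenable product.

```python
from dataclasses import dataclass, field


@dataclass
class Document:
    """A retrieved source document and the groups entitled to read it."""
    doc_id: str
    content: str
    allowed_groups: set[str] = field(default_factory=set)


@dataclass
class User:
    """The person (or service account) whose query triggered retrieval."""
    user_id: str
    groups: set[str] = field(default_factory=set)


def filter_retrieved_documents(user: User, retrieved: list[Document]) -> list[Document]:
    """Return only documents the requesting user may see, so the copilot
    never receives restricted context for this query."""
    permitted, denied = [], []
    for doc in retrieved:
        (permitted if doc.allowed_groups & user.groups else denied).append(doc)
    if denied:
        # Audit denials so security teams can spot probing for restricted data.
        print(f"AUDIT: user={user.user_id} denied: {[d.doc_id for d in denied]}")
    return permitted


# Example: a marketer's query pulls back a finance-only document,
# which is dropped before the copilot ever sees it.
docs = [
    Document("cashflow-q3", "Q3 cash flow forecast ...", {"finance"}),
    Document("brand-guide", "Brand guidelines ...", {"marketing", "finance"}),
]
print([d.doc_id for d in filter_retrieved_documents(User("jdoe", {"marketing"}), docs)])
```

Filtering at retrieval time, rather than relying on the model to withhold restricted content, also limits what a compromised low-level account can extract through the copilot.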
Attackers don't need to breach your network to manipulate AI-generated content. Instead, they can poison AI models or inject false data into the enterprise information that large language models (LLMs) use as context. By compromising enterprise data sources that AI relies on—particularly through retrieval-augmented generation (RAG)—attackers can alter outputs even from outside the network. One method is indirect prompt injection, where something as simple as an email or calendar invite can influence how the AI responds.

The real-world implications of these attacks are significant. Malicious actors can inject harmful links into AI-generated emails, enabling highly sophisticated phishing campaigns. AI can also be manipulated to misinform employees, tricking them into authorizing fraudulent financial transactions—such as in CEO injection attacks. Even critical business documents, including financial models, legal agreements or engineering specifications, can be corrupted by manipulated AI suggestions. If AI-generated responses become untrustworthy, enterprise decision-making collapses, leading to reputational damage, financial losses and serious legal consequences.

According to Gartner, by 2028, "33% of enterprise software applications will incorporate agentic AI, a significant rise from less than 1% in 2024." As AI capabilities advance, autonomous decision-making will increase—and with it, the risk of unintended or harmful actions. For example, AI agents could mistakenly share sensitive presentations with external recipients, leading to data leakage. In financial settings, an AI system might misinterpret a rule and automatically process an incorrect transaction. There is also the risk of rogue AI agents taking destructive actions due to unpredictable, non-deterministic behavior. This growing "AI autonomy dilemma" will likely be one of the biggest challenges enterprises face in 2025 and beyond.

To harness enterprise AI's power while minimizing risks, enterprises must adopt a proactive, security-first approach. Every enterprise AI transaction—whether through copilots, agents or enterprise applications—should be logged, monitored and auditable to ensure transparency and security. It is essential to implement detection mechanisms that can identify and block malicious AI-generated content before it reaches users. Additionally, enterprises should use AI-specific security solutions to detect and prevent incidents of data exposure and leakage in AI-generated outputs. AI agents should be closely monitored to ensure they cannot execute actions without human verification, and for critical operational decisions, enterprises should require multilayered approvals before allowing AI to take action (a simplified sketch of such an approval gate follows below).

Enterprise AI is not just another trend—in fact, I believe it's the defining technological shift of this decade. As an executive, your role is not just to drive AI adoption but to ensure it scales safely so that the rewards outweigh the risks. By embracing AI with strong security foundations, organizations can better position themselves to maximize AI's potential without compromising trust or compliance.
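
To make the logging and human-verification recommendations concrete, here is a minimal Python sketch of an audit-logged approval gate for agent-proposed actions. The risk threshold, function names and approver stub are assumptions for illustration only; they do not describe Tenable One, Apex Security or any specific product.

```python
import json
import time
from typing import Callable, Optional

# Hypothetical risk threshold above which an agent-proposed action needs human sign-off.
APPROVAL_THRESHOLD = 0.7


def audit_log(event: dict) -> None:
    """Record every AI transaction; printed here, but in practice sent to an append-only store."""
    event["timestamp"] = time.time()
    print(json.dumps(event))


def gated_execute(action_name: str,
                  risk_score: float,
                  action: Callable[[], str],
                  approver: Callable[[str], bool]) -> Optional[str]:
    """Execute an agent-proposed action only if it is low risk or a human approves it."""
    audit_log({"stage": "proposed", "action": action_name, "risk": risk_score})
    if risk_score >= APPROVAL_THRESHOLD and not approver(action_name):
        audit_log({"stage": "rejected", "action": action_name})
        return None
    result = action()
    audit_log({"stage": "executed", "action": action_name, "result": result})
    return result


# Example: a high-risk payment the agent wants to make; the approver stub stands in
# for a real chat-ops or ticketing approval workflow.
if __name__ == "__main__":
    gated_execute(
        "transfer_funds(amount=25000, recipient='vendor-x')",
        risk_score=0.9,
        action=lambda: "transaction queued",
        approver=lambda name: input(f"Approve '{name}'? [y/N] ").strip().lower() == "y",
    )
```

In a production setting the audit events would feed a SIEM or immutable log, and the approver callback would be backed by an existing approval workflow rather than an interactive prompt.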
