Latest news with #ItamarGolan


TECHx
06-08-2025
- Business
- TECHx
AI Security Enhanced as SentinelOne Buys Prompt Security
Home » Tech Value Chain » Global Brands » AI Security Enhanced as SentinelOne Buys Prompt Security SentinelOne, an AI security, announced it has signed a definitive agreement to acquire Prompt Security. Prompt is a pioneer in securing AI in runtime, preventing AI-related data leakage, and protecting intelligent agents. The acquisition supports SentinelOne's strategy to expand its AI-native Singularity Platform. It aims to secure the fast-growing use of generative (GenAI) and agentic AI in workplaces. The platform offers real-time visibility into AI tool access and data usage. It also enables automated enforcement to stop prompt injection, data leakage, and misuse without slowing down innovation. With Prompt Security's capabilities, SentinelOne plans to give CISOs and IT leaders greater control to enable secure AI adoption at scale. This move also opens new growth and platform expansion opportunities for SentinelOne and its partners. Prompt Security provides organizations with immediate visibility into enterprise-wide GenAI usage. It helps secure employee interactions with AI tools such as ChatGPT, Gemini, Claude, Cursor, and custom LLMs. Its technology eliminates shadow AI risks without sacrificing security or visibility. The combined technologies will enhance SentinelOne's endpoint, cloud, data, and SecOps offerings with advanced AI defense. According to SentinelOne, the acquisition offers: Real-time visibility into AI usage across the enterprise. Policy-based controls to block high-risk prompts and prevent data leaks. AI attack prevention, including prompt injection and model abuse. Prompt's model-agnostic design supports OpenAI, Anthropic, Google, and self-hosted models. Its platform integrates across browsers, desktop apps, and APIs to offer observability and enforcement. It also protects at the point of interaction, unlike traditional security tools. This acquisition cements SentinelOne's leadership in securing AI, from infrastructure to usage. The company was an early adopter of agentic and GenAI in cybersecurity operations. It aims to simplify threat investigation and empower analysts in the SOC. Tomer Weingarten, CEO of SentinelOne, said the partnership will enable organizations to embrace GenAI and agentic AI without compromising security. Itamar Golan, CEO and co-founder of Prompt Security, said AI security is becoming an operational concern. He noted that combining Prompt's platform with SentinelOne's scale can make AI security accessible to all enterprises. SentinelOne will acquire Prompt for a mix of cash and stock. The transaction is expected to close in the company's third fiscal quarter of 2026. It is subject to regulatory approvals and customary conditions. The companies warned that forward-looking statements involve risks and uncertainties. These include integration challenges, competitive pressures, and the broader macroeconomic environment.


Forbes
04-04-2025
- Business
- Forbes
How Agentic AI Is Revolutionizing Security—And How To Keep It Safe
It's important to empower the future of automation with agentic AI, while safeguarding against ... More emerging security risks. One of the most promising developments in technology today is agentic AI—the evolution of AI tools that can perform complex, multi-step tasks autonomously and make contextual decisions with minimal human intervention. Unlike the standard generative AI models that have been the primary focus since ChatGPT came onto the scene, agentic AI is designed to operate independently, executing high-level commands and learning from its experiences. This capability holds immense potential across industries, from automating software development to revolutionizing cybersecurity operations. However, as AI systems begin to take on more autonomy, the security challenges they present must be addressed proactively. AI agents are no longer limited to simple, reactive tasks like text generation or code completion. They now possess the ability to execute complex workflows, adapt to new situations and make decisions on the fly. Itamar Golan, CEO and co-founder of Prompt Security, noted, 'Agentic AI differs from traditional GenAI tools in their ability to independently perform multi-step tasks and make contextual decisions.' This ability to autonomously complete tasks is not just a time-saver; it can fundamentally transform how organizations approach operations, particularly in IT and security. A prime example of agentic AI in action comes from Amazon Web Services, where AI agents were used to automate the transition of Java applications from older versions to Java 17. Chris Betz, CISO of AWS, explained, 'It's not just a recompile. You actually have to go through and rewrite the code to make it Java 17 compatible.' This process, which would traditionally require weeks of effort from developers for each application, was completed in a fraction of the time by leveraging agentic AI. These tools allow developers to focus on more innovative tasks, while AI handles the heavy lifting of routine updates and transitions. Betz estimated that AWS saved about 4500 years of developer work by going through building this tool. That said, the rise of agentic AI also introduces new risks, particularly around security and control. As Patrick Xu, co-founder and CTO at Aurascape AI, notes, 'With the advent of agentic AI, these technologies naturally become attractive targets for malicious actors. We can expect attackers to continuously innovate and devise novel ways to exploit AI-driven systems.' This new attack surface requires robust safeguards to ensure that AI agents operate securely within their designated tasks. While agentic AI promises significant operational efficiency, it also brings security risks that cannot be ignored. These risks stem from the AI's ability to execute actions without human oversight, its broad system access and its real-time decision-making loops. To mitigate these challenges, organizations must implement a comprehensive security framework. 1. Authentication and Authorization As agentic AI agents gain more responsibilities, ensuring strict control over what they can access is crucial. This means implementing proper authentication and authorization protocols to prevent unauthorized access to critical systems. According to Ariful Huq, co-founder at Exaforce, 'A critical enabler for secure, agentic AI is robust identity and permission management that establishes clear provenance for every action an AI agent takes on a user's behalf.' Ensuring that agents can only access the resources they need is key to minimizing potential security risks. 2. Output Validation One of the most critical components of AI security is output validation. Just as user input is considered untrusted until validated, AI-generated output must undergo rigorous scrutiny before being acted upon. AI systems, like any software, are prone to errors, and their autonomous nature means these errors can have widespread impacts if left unchecked. Proper validation ensures that AI outputs are reliable and aligned with organizational standards. 3. Sandboxing AI agents should never be allowed to execute code or perform tasks in a live environment without first being tested in a controlled, isolated sandbox. Sandboxing allows organizations to catch any errors or unexpected behaviors before they affect production systems. By implementing this practice, organizations can ensure that AI-generated actions are safe and do not pose a threat to the larger system. 4. Transparent Logging Transparency is essential for maintaining control over AI actions. Detailed logging of every step an AI agent takes allows security teams to understand how decisions are made and track any potential issues. This is particularly important for accountability and troubleshooting. 'When you have an AI agent, you want to know what it did and how it got there,' says Chris Betz. Detailed logs provide the insight needed to diagnose problems and improve security practices over time. 5. Continuous Testing and Monitoring Given the evolving nature of AI, continuous security testing is essential. Organizations should implement red-teaming and penetration testing to assess vulnerabilities within their AI systems and ensure they are resistant to new threats. As Ori Bendet, VP of product management at Checkmarx, highlights, 'With agentic AI, automated security is easy, securing the automation process is harder.' Ongoing testing and monitoring help ensure that AI systems remain secure as they evolve. As with all AI technologies, agentic AI raises important ethical concerns. One of the most pressing issues is the potential for AI to inherit biases from its training data. AI agents, when trained on biased or incomplete data, can make flawed or discriminatory decisions. In cybersecurity, for example, AI systems used to monitor network traffic or respond to incidents could potentially introduce new risks if they misinterpret their tasks or make biased decisions. Nicole Carignan, SVP at Darktrace, warns, 'Without proper oversight, agentic AI may misinterpret their tasks, leading to unintended behaviors that could introduce new security risks.' Organizations must remain vigilant in ensuring that AI agents are trained on high-quality, unbiased data and are regularly audited for fairness and accuracy. The autonomous nature of agentic AI means that these systems can be manipulated, much like human employees. Just as attackers use social engineering to trick people, AI agents can be tricked into executing malicious actions. Guy Feinberg, growth product manager at Oasis Security, points out, 'The real risk isn't AI itself, but the fact that organizations don't manage these non-human identities (NHIs) with the same security controls as human users.' Organizations must treat AI agents like human identities, assigning them appropriate permissions, monitoring their activity and implementing clear policies to prevent abuse. Despite their growing autonomy, agentic AI systems should be seen as tools that augment human capabilities, not as replacements for human oversight. While AI agents can handle repetitive and time-consuming tasks, human judgment is still required to ensure that outputs align with organizational goals and ethical standards. As Chris Betz notes, 'AI is here to make people go better and faster, not to replace them. It's about augmentation, not replacement.' For businesses to fully realize the potential of agentic AI, they must maintain a balance between automation and human oversight. By leveraging AI to handle routine tasks, organizations can free up human employees to focus on more strategic, creative and high-value work. Brian Murphy, CEO of ReliaQuest also stressed that agentic AI can automate many tasks, but human judgment will remain crucial. 'I personally do not believe we are ever going to separate a trained and skilled human in the last mile decision making.' The future of agentic AI holds tremendous promise. As these intelligent systems continue to evolve, they will drive innovation, improve efficiency and create new opportunities for organizations across industries. However, with this power comes significant responsibility. To fully harness the potential of agentic AI, businesses must implement robust security practices, maintain human oversight and ensure that ethical concerns are addressed. By doing so, they can unlock the transformative power of AI while safeguarding their systems against emerging threats. 'Agentic AI represents the next step in automating generative AI. As soon as humans become less prevalent, the risk of failure increases, but with proper safeguards in place, the benefits far outweigh the risks,' says David Benas, principal security consultant at Black Duck. With thoughtful security frameworks and responsible oversight, agentic AI has the potential to transform industries and redefine the way businesses operate—empowering a future where automation and human creativity work hand in hand.