logo
#

Latest news with #PillarSecurity

CyberArk Warns About Cybersecurity Threats To AI Agents And LLM
CyberArk Warns About Cybersecurity Threats To AI Agents And LLM

Forbes

time14-04-2025

  • Business
  • Forbes

CyberArk Warns About Cybersecurity Threats To AI Agents And LLM

Are digital entities looking for vulnerabilities in your systems and workflows? The 2025 Forbes AI 50 list highlights a turning point: 'AI graduated from an answer engine to an action engine in the workplace.' Like other graduates entering the workforce, AI agents meet fresh opportunities, face unfamiliar responsibilities, and must overcome new challenges. As more AI agents are deployed in enterprises worldwide, the scale and scope of cyberattacks escalate, as many organizations embed them into critical systems without proper safeguards. AI agents represent new classes of cybersecurity vulnerabilities, providing new venues for infiltrating and manipulating enterprise systems. At CyberArk IMPACT 2025 last week, Lavi Lazarovitz, VP of cyber research at CyberArk, presented an initial analysis of the range of threats posed by the brand-new agentic systems. 'Agents are distinguished by autonomy and proactivity,' said Lazarovitz. As such, they are turning into the most privileged digital identities enterprises have ever seen. The cybersecurity landscape for AI agents will continue to evolve, and at present, there is no silver bullet that can fully mitigate all security risks they pose, according to CyberArk researchers. The best approach is what they call 'defense in depth,' or implementing multiple layers of protection at different stages of the workflow and across various security measures. More broadly, CyberArk warns enterprises to 'never trust an LLM.' Attackers will always find ways to exploit and manipulate these models, so security must be built around them, not within them. At IMPACT 2025, Retsef Levi, Professor of Operations Management at the MIT Sloan School of Management, spoke about the 'Very real risk of creating complex systems with opaque operational boundaries and eroded human capabilities that are prone to major disasters and are not resilient.' Using an LLM is like taking a drug without knowing what's in it, says Levi. The mystery is three-dimensional: The humongous number of parameters obscuring what the model can do; the open data, internet data, on which the model is based (as opposed to in-house, clean data); and the source, the origin of the model's development. The key challenge in implementing AI agents, says Levi, is making sure they 'don't degenerate and erode critical human capabilities,' especially in the areas where humans are superior to AI: Identifying nuance; sensitivity to changing conditions, exceptions, and anomalies; and sensing a new context. 'Don't confuse performance with capability,' advises Levi. As generative AI and LLMs enhance cyberattack capabilities by using machines to manipulate humans or other machines, Levi recommends developing 'measurements for understanding your digital supply chain,' identifying potential vulnerabilities. The research effort to uncover the new 'attack surface' created by generative AI is growing fast. Startup Pillar Security, for example, released a report analyzing over 2,000 real-world LLM-powered applications. Pillar found that 90% of successful attacks resulted in the leakage of sensitive data and that adversaries require only 42 seconds on average to complete an attack, highlighting the speed at which vulnerabilities can be exploited. This present state of attacks on generative AI will get worse in the near future. By 2028, according to Gartner, '25% of enterprise breaches will be traced back to AI agent abuse, from both external and malicious internal actors.' The interest in investing or acquiring related cyber defense skills and solutions is also growing. For example, Palo Alto Networks is set to buy AI cybersecurity company Protect AI for an estimated $650-700 million, sources informed Globes last week. 'Protect AI might end up being the second acquisition [after Cisco's acquisition of Robust Intelligence for a reported $400 million] in the nascent AI security market, but it certainly won't be the last,' reports Information Security Media Group. AI agents' autonomous nature and complex decision-making capabilities introduce various threats and vulnerabilities that span security, privacy, ethical, operational, legal, and technological domains. These real-world challenges will probably not slow down the widespread deployment of AI agents. According to CB Insights, mentions of 'agent' and 'agentic' on earnings calls surged in the first quarter of 2025, with both hitting all-time highs. For the second year in a row, Amazon CEO Andy Jassy used his annual letter to shareholders to stress the contribution of generative AI applications to Amazon's continuing success. He reported that 'there are more than 1,000 GenAI applications being built across Amazon, aiming to meaningfully change customer experiences in shopping, coding, personal assistants, streaming video and music, advertising, healthcare, reading, and home devices, to name a few.' Jassy also highlighted the importance of generative AI to the future of all enterprises: 'If your customer experiences aren't planning to leverage these intelligent models… and their future agentic capabilities, you will not be competitive.'

New Vulnerability in GitHub Copilot and Cursor: How Hackers Can Weaponize Code Agents Through Compromised Rule Files
New Vulnerability in GitHub Copilot and Cursor: How Hackers Can Weaponize Code Agents Through Compromised Rule Files

Associated Press

time18-03-2025

  • Associated Press

New Vulnerability in GitHub Copilot and Cursor: How Hackers Can Weaponize Code Agents Through Compromised Rule Files

TEL AVIV, Israel, March 18, 2025 (GLOBE NEWSWIRE) -- Pillar Security, a pioneering company in AI security, discovered a significant vulnerability affecting GitHub Copilot and Cursor - the world's leading AI-powered code editors. This new attack vector, dubbed the 'Rule Files Backdoor,' allows attackers to covertly manipulate these trusted AI platforms into generating malicious code that appears legitimate to developers. This newly discovered attack vector exploits hidden configuration mechanisms within these tools, enabling attackers to inject malicious code suggestions that blend seamlessly into legitimate AI-generated recommendations and bypass human scrutiny and conventional security checks. Unlike traditional code injection attacks that target specific vulnerabilities, 'Rule Files Backdoor' represents a significant risk by weaponizing the AI itself as an attack vector, effectively turning the developer's most trusted assistant into an unwitting accomplice. 'This new attack vector demonstrates that rule files can instruct AI assistants to subtly modify generated code in ways that introduce security vulnerabilities while appearing completely legitimate to developers,' said Ziv Karliner, CTO & Co-Founder of Pillar Security. 'Developers have no reason to suspect their AI assistant is compromised, as the malicious code blends seamlessly with legitimate suggestions. This represents a fundamental shift in how we must think about supply chain security.' Key Findings and Implications: Widespread Industry Exposure: The vulnerability affects Cursor and GitHub Copilot, which collectively serve millions of developers and are integrated into countless enterprise development workflows worldwide. Minimal Attack Requirements: Execution requires no special privileges, administrative access, or sophisticated tools--attackers need only manipulate configuration files within targeted repositories. Undetectable Infiltration: Malicious code suggestions blend seamlessly with legitimate AI-generated code, bypassing both manual code reviews and automated security scanning tools. Data Exfiltration Capabilities: Well-crafted malicious rules can direct AI tools to add code that leaks sensitive information while appearing legitimate, including environment variables, database credentials, API keys, and user data--all under the guise of 'following best practices.' Long-Term Persistence & Supply Chain Risk: Once a compromised rule file is incorporated into a project repository, it affects all future code generation, with poisoned rules often surviving project forking, creating vectors for supply chain attacks that affect downstream dependencies. Who is Affected? A 2024 GitHub survey found that nearly all enterprise developers (97%) are using Generative AI coding tools. According to Pillar, because these rule files are shared and reused across multiple projects, one compromised file can lead to widespread vulnerabilities. The research identified several propagation vectors: Developer Forums and Communities: Malicious actors sharing 'helpful' rule files that unwitting developers incorporate Open-Source Contributions: Pull requests to popular repositories that include poisoned rule files Project Templates: Starter kits containing poisoned rules that spread to new projects Corporate Knowledge Bases: Internal rule repositories that, once compromised, affect all company projects Mitigation To mitigate this risk, we recommend the following technical countermeasures: Audit Existing Rules: Review all rule files in your repositories for potential malicious instructions, focusing on invisible Unicode characters and unusual formatting Implement Validation Processes: Establish review procedures specifically for AI configuration files, treating them with the same scrutiny as executable code Deploy Detection Tools: Implement tools that can identify suspicious patterns in rule files and monitor AI-generated code for indicators of compromise Review AI-Generated Code: Pay special attention to unexpected additions like external resource references, unusual imports, or complex expressions Following responsible disclosure practices, Pillar alerted both Cursor (February 26) and GitHub (March 12), who responded that users bear responsibility for reviewing AI-generated code suggestions. 'Given the growing reliance on AI coding assistants within development workflows, we believe it's essential to raise public awareness about potential security implications. We have reached an era where AI coding assistants must be regarded as critical infrastructure,' said Karliner. About Pillar Security Pillar is a unified, end-to-end AI security platform that accelerates AI initiatives by establishing robust security foundations across the entire AI lifecycle. By embedding security from development through runtime, Pillar enables organizations to ship AI-powered applications and agents with confidence while managing critical business risks. The platform's comprehensive capabilities—including AI fingerprinting, asset inventory, and deep integration with development and data platforms—create a secure foundation that prevents data breaches and ensures compliance. Through tailored adversarial AI testing and adaptive guardrails aligned with industry standards, Pillar removes security bottlenecks, allowing teams to innovate and deploy AI faster without compromising on security.

DOWNLOAD THE APP

Get Started Now: Download the App

Ready to dive into the world of global news and events? Download our app today from your preferred app store and start exploring.
app-storeplay-store