Latest news with #OliverFriedrichs
Yahoo
6 days ago
- Business
- Yahoo
Pangea Named to Rising in Cyber 2025 List of Top Cybersecurity Startups
Selected by over 150 CISOs, recipients representing the most innovative cybersecurity startups to be recognized on NYSE trading floor tonight PALO ALTO, Calif., June 4, 2025 /PRNewswire/ -- Pangea, a leading provider of AI security guardrails, today announced its inclusion in Rising in Cyber 2025, launched by Notable Capital to spotlight the 30 most promising cybersecurity startups shaping the future of security. Unlike traditional rankings, Rising in Cyber 2025 honorees were selected through a multi-stage process grounded in real-world validation. Leading cybersecurity venture firms submitted nominations, and nearly 150 Chief Information Security Officers (CISOs) and senior security executives voted on the final list, highlighting the companies solving the most urgent challenges facing today's security teams. Pangea was selected for delivering the industry's most comprehensive AI guardrails, enabling organizations to secure employee AI use and ship secure AI applications faster. The company's AI Guardrail Platform delivers measurable value to security teams navigating today's complex threat landscape. The company joins a cohort that has collectively raised over $7.8 billion according to Pitchbook as of May 2025, and is defining the next era of cybersecurity across key areas like identity, application security, agentic AI, and security operations. "The demand for cybersecurity innovation has never been greater. As the underlying technologies evolve and agentic AI reshapes everything from threat detection to team workflows, we're witnessing a shift from reactive defense to proactive, intelligence-driven operations," said Oren Yunger, Managing Partner at Notable Capital. "What makes this list special is that it reflects real-world validation—honorees were chosen by CISOs who face these challenges every day. Congratulations to this year's Rising in Cyber companies for building the solutions that modern security leaders truly want and need." In celebration, honorees will be recognized today at the New York Stock Exchange (NYSE) alongside top security leaders and investors. "We are thrilled to receive this recognition from Notable Capital and its esteemed community of security leaders," said Oliver Friedrichs, Co-founder & CEO of Pangea. "This validates Pangea's commitment to pioneering the AI security market through continuous innovation. As AI continues to create new attack vectors and vulnerabilities, we remain dedicated to staying at the forefront, safeguarding our clients' digital assets while driving industry-wide transformation." Pangea's recognition follows new AI security product launches earlier this year designed to help customers defend against threats like prompt injection and sensitive information disclosure to large language models. Pangea serves a wide range of customers, from Fortune 100 companies to AI-native technology startups. To learn more about Rising in Cyber 2025, visit About PangeaPangea's AI Guardrail Platform empowers security teams to ship secure AI applications quickly and protect workforce AI use with the industry's most comprehensive set of AI guardrails, easily deployed via gateways or into applications with just a few lines of code. Pangea stops LLM security threats ranging from prompt injection to sensitive data leakage, covering 8 out of 10 OWASP Top Ten Risks for LLM apps, while accelerating engineering velocity and unlocking AI runtime visibility and control for security teams. For more information, visit or contact: press@ Media Contact: Growth Stack Media | 415-574-0738 View original content to download multimedia: SOURCE Pangea Cyber


Techday NZ
16-05-2025
- Business
- Techday NZ
Emerging AI security risks exposed in Pangea's global study
A global study by Pangea has highlighted emerging security weaknesses associated with the fast-paced deployment of AI systems in corporate environments. The research, which involved Pangea's USD $10,000 Prompt Injection Challenge, analysed almost 330,000 real-world attack attempts submitted by more than 800 participants from 85 countries. The challenge involved participants attempting to bypass AI security guardrails in three virtual rooms with increasing levels of difficulty in March 2025, generating extensive data on current AI security practices. The study was prompted by a sharp increase in the adoption of generative AI across numerous sectors, with enterprises using AI-powered applications for interactions involving customers, employees, and sensitive internal systems. The researchers observed that, despite this rapid uptake, specific AI-focused security measures have not kept pace in many organisations, which often rely primarily on default protections provided by AI models themselves. Pangea's dataset from the challenge revealed several vulnerabilities. A significant finding was the non-deterministic nature of large language model (LLM) security. Prompt injection attacks, a method where attackers manipulate input to provoke undesired responses from AI systems, were found to succeed unpredictably. An attack that fails 99 times could succeed on the 100th attempt with identical input, due to the underlying randomness in LLM processing. The study also revealed substantial risks of data leakage and adversarial reconnaissance. Attackers using prompt injection can manipulate AI models to disclose sensitive information or contextual details about the environment in which the system operates, such as server types and network access configurations. 'This challenge has given us unprecedented visibility into real-world tactics attackers are using against AI applications today,' said Oliver Friedrichs, Co-Founder and Chief Executive Officer of Pangea. 'The scale and sophistication of attacks we observed reveal the vast and rapidly evolving nature of AI security threats. Defending against these threats must be a core consideration for security teams, not a checkbox or afterthought.' Findings indicated that basic defences, such as native LLM guardrails, left organisations particularly exposed. The research showed that roughly 1 in 10 prompt injection attempts succeeded against these default protections, while multi-layered defences reduced the rate of successful attacks by significant margins. Agentic AI, where systems have greater autonomy and direct access to databases or tools, was found to amplify organisational risk. When compromised, such systems could potentially allow attackers to move laterally across networks, increasing the scope for harm. Joey Melo, a professional penetration tester and the only individual to successfully bypass all three virtual security rooms, spent two days developing a multi-layered strategy that ultimately defeated the single level of defence in room three. Joe Sullivan, former Chief Security Officer at Cloudflare, Uber and Facebook, commented on the risks highlighted by Pangea's research. 'Prompt injection is especially concerning when attackers can manipulate prompts to extract sensitive or proprietary information from an LLM, especially if the model has access to confidential data via RAG, plugins, or system instructions,' said Sullivan. 'Worse, in autonomous agents or tools connected to APIs, prompt injection can result in the LLM executing unauthorised actions—such as sending emails, modifying files, or initiating financial transactions.' In response to these findings, Pangea recommended a set of security measures for enterprises deploying AI applications. These include multi-layered guardrails to prevent prompt injection and data leakage, restriction of input languages and permitted operations in high-security environments, continuous red team testing specific to AI vulnerabilities, management of model randomness settings, and allocation of personnel or partners dedicated to tracking prompt injection threats. Friedrichs emphasised the urgency of the issue in his remarks. 'The industry is not paying enough attention to this risk and is underestimating its impact in many cases, playing a dangerous wait-and-see game. The rate of change and adoption in AI is astounding—moving faster than any technology transformation in the past few decades. With organisations rapidly deploying new AI capabilities and increasing their dependence on these systems for critical operations, the security gap is widening daily. The time to get ahead of these concerns is now.' Pangea's full research report, 'Defending Against Prompt Injection: Insights from 300K attacks in 30 days,' is publicly available.