
Orca Security Extends AI Innovation Leadership With Integrated Chatbot and New Detections for AI Security Posture Management (AI-SPM)
PORTLAND, Ore.--(BUSINESS WIRE)-- Orca Security, the leading innovator in agentless cloud security, today launched the latest innovations for Orca AI, along with new detections to secure sensitive AI training data in cloud-native environments. Orca AI now provides instant access to deep cloud telemetry from the Orca Unified Data Model through a simple, intuitive, chat-like experience, enabling teams to quickly and easily understand cloud security risks and compliance gaps.
"We've been the frontrunner in leveraging generative AI to help simplify, accelerate, and enhance cloud security for our customers. The latest Orca AI innovations announced today further deliver on that promise." -- Orca CEO Gil Geron
Share
Orca's approach is built on a vision of making cloud security easy for everyone who uses the Orca Cloud Security Platform. The foundation is Orca's patented SideScanning technology and Unified Data Model, which together provide the broadest and deepest cloud telemetry available in any platform. Orca AI builds on that foundation with a collection of GenAI-powered capabilities, including natural language search and code remediation recommendations, to help simplify, accelerate, and enhance cloud security for security and DevOps teams.
Through the addition of an in-app chatbot, Orca AI now makes all of this intelligence more accessible and actionable. For example, security analysts can ask Orca AI simple questions, such as, 'What are my most critical alerts?' and receive quick summaries, reducing time to insight. They can then collaborate with Orca AI within the context of an alert to assess its potential impact and identify appropriate mitigating controls. Application security teams can also explore more complex problems, such as, 'Why is this code commit producing a critical alert, and how do I fix it?' Orca AI will explain the issue in plain English and then recommend code snippets to close the security gap, extending Orca's AI-driven remediation capabilities into the chatbot experience.
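For illustration, the alert-triage interaction described above can be sketched in generic terms. The snippet below is a minimal, hypothetical Python example and is not Orca's API; the ask_about_alert function and the llm_client object are placeholders that simply show how an alert's context and a natural-language question might be combined into a single prompt for a language model.

    import json

    def ask_about_alert(llm_client, alert: dict, question: str) -> str:
        # Combine the alert's context and the analyst's question into one prompt.
        prompt = (
            "You are a cloud security assistant.\n"
            "Alert context:\n" + json.dumps(alert, indent=2) + "\n"
            "Question: " + question + "\n"
            "Explain the issue in plain English and suggest mitigating controls."
        )
        # llm_client is a hypothetical wrapper around any LLM completion API.
        return llm_client.complete(prompt)

In a real product, the model would also be grounded in the platform's own telemetry; the point of the sketch is only the shape of the request and response.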
'We've been the frontrunner in leveraging generative AI to help simplify, accelerate, and enhance cloud security for our customers,' said Gil Geron, CEO and co-founder at Orca Security. 'At the end of the day, they value solutions that make them more efficient while ensuring their clouds – and the AI applications running on them – are secure. The latest Orca AI innovations announced today further deliver on that promise.'
New Sensitive Data Detections in AI-SPM
As organizations accelerate their AI adoption, Orca research shows that many are overlooking basic security measures. Orca's 2025 State of Cloud Security report found that 84% of organizations now use AI in the cloud, and 62% of organizations have at least one vulnerable AI package. Through AI-SPM, Orca leverages its agentless SideScanning technology to provide the same visibility, risk insight, and deep data for AI models that it does for other cloud resources. It also addresses use cases unique to AI security, including detecting sensitive data in training sets.
Orca has enhanced its AI-SPM capabilities with new detections for sensitive data in AI training models, the risk of data poisoning due to editable or replaceable AI training data, and advanced AI misconfigurations mapped to the OWASP LLM Top 10 and OWASP ML Top 10. New graph visualizations make it easy for security teams to understand and quickly remediate risks to AI training models. These new training model detections will first be made available for Azure OpenAI and AWS SageMaker, with support for Google Cloud's Vertex AI to follow.
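As a simple illustration of the data poisoning risk mentioned above (and not a description of how Orca's detections work), the sketch below checks whether an AWS S3 bucket assumed to hold training data grants write access to all users; publicly writable training data is exactly the kind of editable or replaceable input that enables poisoning. The function name and bucket name are hypothetical.

    import boto3

    def bucket_allows_public_write(bucket_name: str) -> bool:
        # Fetch the bucket's access control list.
        s3 = boto3.client("s3")
        acl = s3.get_bucket_acl(Bucket=bucket_name)
        all_users = "http://acs.amazonaws.com/groups/global/AllUsers"
        # A WRITE or FULL_CONTROL grant to AllUsers means anyone can replace
        # objects in the bucket, including stored training data.
        return any(
            grant["Permission"] in ("WRITE", "FULL_CONTROL")
            and grant.get("Grantee", {}).get("URI") == all_users
            for grant in acl.get("Grants", [])
        )

    # Example (hypothetical bucket name):
    # if bucket_allows_public_write("example-training-data-bucket"):
    #     print("Training data can be overwritten by anyone: data poisoning risk")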
Please read Orca's blog to learn more. You can schedule a personalized demo at https://orca.security/demo/.
About Orca Security
Orca enables organizations to make cloud security a strategic advantage. With the most comprehensive coverage and visibility across multi-cloud environments, the agentless-first Orca Platform unites teams to eliminate complexities, vulnerabilities and risks. Backed by Temasek, CapitalG, ICONIQ Capital, Redpoint Ventures and others, Orca is trusted by hundreds of organizations, including SAP, Gannett, Autodesk, Unity, Lemonade and Digital Turbine. Connect your first account in minutes: https://orca.security or book a personalized demo.