Main Line Health Secures CIO 100 Honors Through Deployment of the Elisity-Armis Integration


Business Wire, 23-04-2025

SAN FRANCISCO & SAN JOSE, Calif.--(BUSINESS WIRE)--Armis, the cyber exposure management & security company, and Elisity, a pioneer in identity-based microsegmentation, today announced the results of their strategic integration, which delivers frictionless, centrally managed Zero Trust access across enterprise networks. Both companies also congratulated their joint customer Main Line Health, which recently earned the prestigious CIO 100 Award for 2025, following its CSO50 Award in 2024, for an innovative cybersecurity implementation leveraging both platforms.
'The synergy between Armis and Elisity has fortified defenses against targeted cyber threats, improving overall operational efficiency with added layers of security and visibility,' said Aaron Weismann, Chief Information Security Officer for Main Line Health, whose team earned both awards for their innovative cybersecurity work. 'Microsegmentation is a key strategy for accelerating our Zero Trust program.'
Aaron Weismann will be sharing Main Line Health's innovative approaches to cybersecurity at the upcoming RSA Conference (RSAC 2025) in San Francisco. His session, "Dr. Darkness or: How We Learned to Stop Worrying and Love Downtimes" (Thursday, May 1, 8:30 AM - 9:20 AM PDT), will explore the organization's innovative approach to maintaining clinical operations during cyber disruptions. The session, which also features Main Line Health Program Manager Anthony Fiore, will highlight how solutions like the Armis and Elisity integration help healthcare organizations enhance cyber resilience.
The integration provides visibility and control over the entire attack surface, enabling organizations to rapidly implement microsegmentation and prevent lateral movement attacks—the technique used in over 70% of successful breaches, according to recent industry reports.
This powerful combination addresses the critical challenges faced by manufacturing, healthcare, and industrial organizations as cyber attackers increasingly target unprotected east-west traffic across IT, IoT, OT, and IoMT devices. According to Forrester Research, microsegmentation has entered its 'Golden Age' as a crucial strategy for preventing lateral movement, with research showing organizations implementing comprehensive microsegmentation solutions achieve $3.50 in value for every dollar invested.
'By integrating the extensive cyber exposure management capabilities of Armis Centrix™ with Elisity's dynamic policy engine, we're advancing network-segmentation architecture for enterprises pursuing Zero Trust maturity,' said James Winebrenner, CEO of Elisity. 'Organizations can now implement comprehensive microsegmentation in weeks instead of years, rapidly discovering every user, workload, and device on their networks and automating policies that persist wherever those devices appear.'
'Supporting our customers in solving their toughest cybersecurity challenges is our top priority,' said Nadir Izrael, CTO and Co-Founder of Armis. 'The opportunity to come together with industry peers, like Elisity, to integrate our solutions to work better together on behalf of customers makes that mission even more impactful and rewarding. We're proud of our collaboration on behalf of Main Line Health and we will continue to support them as they advance patient safety and drive operational resilience.'
Teddie Wardi, Managing Director at Insight Partners, which holds investments in both companies, added, 'We've seen firsthand how microsegmentation projects often falter due to implementation challenges. This partnership between Armis and Elisity addresses this market gap by delivering a solution that can be quickly and effectively implemented and scaled across thousands of IT and OT environments. The combination of comprehensive cyber exposure management with identity-based microsegmentation creates immediate value for enterprises seeking to enhance their security posture.'
The successful deployment at Main Line Health underscores the deepening strategic partnership between Armis and Elisity and their shared focus on closing critical enterprise security gaps. The integration represents a significant joint investment, uniting Armis' leadership in cyber exposure management with Elisity's identity-based microsegmentation to deliver a powerful framework for organizations pursuing Zero Trust maturity. Both companies are committed to continued collaboration to enhance security postures for joint customers facing increasingly complex threats.
This bidirectional integration synchronizes Armis' comprehensive asset intelligence and risk quantification with Elisity's identity-based microsegmentation and dynamic policy engine, providing enhanced situational awareness and enabling the enforcement of least-privilege access policies crucial for protecting complex environments. This collaboration strengthens defenses against evolving cyber threats. Learn more about the integration, including the bidirectional data flow and its impact on healthcare security, on the Elisity blog: https://www.elisity.com/blog/strengthening-healthcare-security-the-elisity-armis-integration-for-medical-device-microsegmentation.
To schedule a demonstration or book a meeting at RSAC 2025, visit https://www.elisity.com/contact-us
About Elisity
Elisity is a leap forward in network segmentation architecture, leading the enterprise effort to achieve Zero Trust maturity, proactively prevent security risks, and reduce network complexity. Designed to be implemented in weeks and without downtime, the platform rapidly discovers every user, workload, and device on an enterprise network and correlates comprehensive insights into the Elisity IdentityGraph™. This empowers teams with the context needed to automate classification and apply dynamic security policies to any device wherever and whenever it appears on the network. These granular, identity-based microsegmentation policies are managed in the cloud and enforced in real time using existing network switching infrastructure, even on ephemeral IT/IoT/OT devices. Founded in 2019, Elisity has a global employee footprint and a growing number of customers in the Fortune 500.


Related Articles

Gigamon Showcases the Power of Deep Observability at Cisco Live 2025

Business Wire, 6 days ago

SANTA CLARA, Calif.--(BUSINESS WIRE)-- Gigamon, a leading deep observability company, today announced it will showcase its Gigamon Deep Observability Pipeline as a sponsor at Cisco Live 2025, June 8-10, at the San Diego Convention Center. Gigamon will host hands-on demonstrations, consultations, contests, and presentations, featuring technology and channel partners such as Armis, Cribl, Corelight, Endace, Forescout, LiveAction, Nutanix, Sumo Logic, WWT, and more for the 20,000 attendees expected at the conference.

Amid the rise of AI, threat actors have proven their ability to breach traditional perimeter security, capitalizing on blind spots and waiting for opportune moments to attack. For organizations to better secure and manage hybrid cloud infrastructure, complete visibility into all data in motion is essential. In the newly published 2025 Hybrid Cloud Security Survey of more than 1,000 security and IT leaders, respondents ranked real-time threat monitoring and visibility across all data in motion as the top priority for optimizing defense-in-depth strategies. Attendees will see how the Gigamon Deep Observability Pipeline efficiently delivers network-derived telemetry to cloud, security, and observability tools. This enables organizations to maintain uptime and availability, strengthen security across environments, accelerate cloud migration, and reduce the cost and complexity of hybrid cloud operations through tool rationalization.
Attendees visiting Gigamon booth #2227 will experience:

• Interactive Demos: Experience how deep observability can help eliminate blind spots, including lateral, East-West visibility, and save up to 80 percent in cloud operations costs, with experts sharing the latest solution features and best practices.

• One-on-One Expert Consultations: Book a personalized meeting with a Gigamon expert to discuss specific challenges, ask questions, and receive tailored advice on securing and managing hybrid cloud infrastructure.

• In-Booth Presentations: Attend the in-booth theater, where the Gigamon team, technology alliance partners, and channel partners will share how to leverage the Gigamon Deep Observability Pipeline for East-West and container visibility, increased NDR effectiveness, TLS decryption, and more.

• Giveaways/Raffles: Enter daily raffles for the chance to win one of several Apple Watches and a host of other giveaways.

Gigamon will also present a Content Corner session, 'Because 'It's the Network's Fault' Shouldn't Be Your Default Answer,' discussing how deep observability can empower application owners and security teams with the network-derived telemetry and insights they need to proactively secure and manage hybrid cloud infrastructure. The session will take place at the World of Solutions, Content Corner 1, on Wednesday, June 11, from 11:10 a.m. PT.

'AI will be a key focus at this year's Cisco Live, as organizations are re-evaluating the way their infrastructure and data connect for both security and efficiency,' said Michael Hakkert, vice president of corporate marketing at Gigamon. 'As one of the industry's largest networking conferences, Cisco Live brings together thousands of customers and partners to share best practices as together we work toward more secure and efficient networks.
We look forward to sharing how the Gigamon Deep Observability Pipeline can support organizations as they secure and manage hybrid cloud infrastructure, enabling greater agility, resilience, and performance across a range of workloads, including those driven by AI.'

To learn more, visit the Gigamon Cisco Live page, and to stay up to date on all Gigamon Cisco Live activities, visit the Gigamon booth, #2227, and follow #GigamonAtCiscoLive on X, LinkedIn, and Facebook.

About Gigamon
Gigamon® offers a deep observability pipeline that efficiently delivers network-derived telemetry to cloud, security, and observability tools. This helps eliminate security blind spots and reduce tool costs, enabling organizations to better secure and manage hybrid cloud infrastructure. Gigamon serves more than 4,000 customers worldwide, including over 80 percent of Fortune 100 enterprises, 9 of the 10 largest mobile network providers, and hundreds of government and educational organizations.

© 2025 Gigamon. All rights reserved. Gigamon and the Gigamon logo are trademarks of Gigamon in the United States and/or other countries. All other trademarks are the property of their respective owners.

AI cybersecurity risks and deepfake scams on the rise

Fox News, 27-05-2025

Imagine your phone rings and the voice on the other end sounds just like your boss, a close friend, or even a government official. They urgently ask for sensitive information, except it's not really them. It's a deepfake, powered by AI, and you're the target of a sophisticated scam. These kinds of attacks are happening right now, and they're getting more convincing every day.

That's the warning sounded by the 2025 AI Security Report, unveiled at the RSA Conference (RSAC), one of the world's biggest gatherings for cybersecurity experts, companies, and law enforcement. The report details how criminals are harnessing artificial intelligence to impersonate people, automate scams, and attack security systems on a massive scale. From hijacked AI accounts and manipulated models to live video scams and data poisoning, the report paints a picture of a rapidly evolving threat landscape, one that's touching more lives than ever before.

One of the biggest risks of using AI tools is what users accidentally share with them. A recent analysis by cybersecurity firm Check Point found that 1 in every 80 AI prompts includes high-risk data, and about 1 in 13 contains sensitive information that could expose users or organizations to security or compliance risks. This data can include passwords, internal business plans, client information, or proprietary code. When shared with AI tools that are not secured, this information can be logged, intercepted, or even leaked later.

AI-powered impersonation is getting more advanced every month. Criminals can now fake voices and faces convincingly in real time. In early 2024, a British engineering firm lost 20 million pounds after scammers used live deepfake video to impersonate company executives during a Zoom call. The attackers looked and sounded like trusted leaders and convinced an employee to transfer funds. Real-time video manipulation tools are now being sold on criminal forums.
These tools can swap faces and mimic speech during video calls in multiple languages, making it easier for attackers to run scams across borders.

Social engineering has always been a part of cybercrime. Now, AI is automating it. Attackers no longer need to speak a victim's language, stay online constantly, or manually write convincing messages. Tools like GoMailPro use ChatGPT to create phishing and spam emails with perfect grammar and native-sounding tone. These messages are far more convincing than the sloppy scams of the past. GoMailPro can generate thousands of unique emails, each slightly different in language and urgency, which helps them slip past spam filters. It is actively marketed on underground forums for around $500 per month, making it widely accessible to bad actors.

Another tool, the X137 Telegram Console, leverages Gemini AI to monitor and respond to chat messages automatically. It can impersonate customer support agents or known contacts, carrying out real-time conversations with multiple targets at once. The replies are uncensored, fast, and customized based on the victim's responses, giving the illusion of a human behind the screen.

AI is also powering large-scale sextortion scams. These are emails that falsely claim to have compromising videos or photos and demand payment to prevent them from being shared. Instead of using the same message repeatedly, scammers now rely on AI to rewrite the threat in dozens of ways. For example, a basic line like 'Time is running out' might be reworded as 'The hourglass is nearly empty for you,' making the message feel more personal and urgent while also avoiding detection.

By removing the need for language fluency and manual effort, these AI tools allow attackers to scale their phishing operations dramatically. Even inexperienced scammers can now run large, personalized campaigns with almost no effort. With AI tools becoming more popular, criminals are now targeting the accounts that use them.
Hackers are stealing ChatGPT logins, OpenAI API keys, and other platform credentials to bypass usage limits and hide their identity. These accounts are often stolen through malware, phishing, or credential stuffing attacks. The stolen credentials are then sold in bulk on Telegram channels and underground forums. Some attackers are even using tools that can bypass multi-factor authentication and session-based security protections. These stolen accounts allow criminals to access powerful AI tools and use them for phishing, malware generation, and scam automation.

Criminals are also finding ways to bypass the safety rules built into AI models. On the dark web, attackers share jailbreaking techniques so the models will respond to requests that would normally be blocked. Some AI models can even be tricked into jailbreaking themselves: attackers prompt the model to create input that causes it to override its own restrictions. This shows how AI systems can be manipulated in unexpected and dangerous ways.

AI is now being used to build malware, phishing kits, ransomware scripts, and more. Recently, a group called FunkSec was identified as the leading ransomware gang using AI. Its leader admitted that at least 20% of their attacks are powered by AI. FunkSec has also used AI to help launch attacks that flood websites or services with fake traffic, making them crash or go offline. These are known as denial-of-service attacks. The group even created its own AI-powered chatbot to promote its activities and communicate with victims on its public website.

Some cybercriminals are even using AI to help with marketing and data analysis after an attack. One tool called Rhadamanthys Stealer 0.7 claimed to use AI for 'text recognition' to sound more advanced, but researchers later found it was using older technology instead. This shows how attackers use AI buzzwords to make their tools seem more advanced or trustworthy to buyers. Other tools are more advanced.
One example is DarkGPT, a chatbot built specifically to sort through huge databases of stolen information. After a successful attack, scammers often end up with logs full of usernames, passwords, and other private details. Instead of sifting through this data manually, they use AI to quickly find valuable accounts they can break into, sell, or use for more targeted attacks like ransomware.

Sometimes, attackers do not need to hack an AI system. Instead, they trick it by feeding it false or misleading information. This tactic is called AI poisoning, and it can cause the AI to give biased, harmful, or completely inaccurate answers. There are two main ways this happens: tampering with the models themselves, or seeding the content those models consume. In 2024, attackers uploaded 100 tampered AI models to the open-source platform Hugging Face. These poisoned models looked like helpful tools, but when people used them, they could spread false information or output malicious code. A large-scale example came from a Russian propaganda group called Pravda, which published more than 3.6 million fake articles online. These articles were designed to trick AI chatbots into repeating their messages. In tests, researchers found that major AI systems echoed these false claims about 33% of the time.

AI-powered cybercrime blends realism, speed, and scale. These scams are not just harder to detect; they are also easier to launch. Here's how to stay protected:

1) Avoid entering sensitive data into public AI tools: Never share passwords, personal details, or confidential business information in any AI chat, even if it seems private. These inputs can sometimes be logged or misused.

2) Use strong antivirus software: AI-generated phishing emails and malware can slip past outdated security tools. The best way to safeguard yourself from malicious links that install malware, potentially accessing your private information, is to have strong antivirus software installed on all your devices. This protection can also alert you to phishing emails and ransomware scams, keeping your personal information and digital assets safe. Get my picks for the best 2025 antivirus protection winners for your Windows, Mac, Android & iOS devices.

3) Turn on two-factor authentication (2FA): 2FA adds an extra layer of protection to your accounts, including AI platforms. It makes it much harder for attackers to break in using stolen passwords.

4) Be extra cautious with unexpected video calls or voice messages: If something feels off, even if the person seems familiar, verify before taking action. Deepfake audio and video can sound and look very real.

5) Use a personal data removal service: With AI-powered scams and deepfake attacks on the rise, criminals are increasingly relying on publicly available personal information to craft convincing impersonations or target victims with personalized phishing. By using a reputable personal data removal service, you can reduce your digital footprint on data broker sites and public databases, making it much harder for scammers to gather the details they need to convincingly mimic your identity or launch targeted AI-driven attacks. While no service can guarantee the complete removal of your data from the internet, a data removal service is a smart choice. They aren't cheap, and neither is your privacy. These services do all the work for you by actively monitoring and systematically erasing your personal information from hundreds of websites. By limiting the information available, you reduce the risk of scammers cross-referencing data from breaches with information they might find on the dark web, making it harder for them to target you. Check out my top picks for data removal services here.

6) Consider identity theft protection: If your data is leaked through a scam, early detection is key. Identity protection services can monitor personal information like your Social Security number (SSN), phone number, and email address, and alert you if it is being sold on the dark web or used to open an account. They can also assist you in freezing your bank and credit card accounts to prevent further unauthorized use by criminals. See my tips and best picks on how to protect yourself from identity theft.

7) Regularly monitor your financial accounts: AI-generated phishing, malware, and account takeover attacks are now more sophisticated and widespread than ever, as highlighted in the 2025 AI Security Report. By frequently reviewing your bank and credit card statements for suspicious activity, you can catch unauthorized transactions early, often before major damage is done. Quick detection is crucial, especially since stolen credentials and financial information are now traded and exploited at scale by cybercriminals using AI.

8) Use a secure password manager: Stolen AI accounts and credential stuffing attacks are a growing threat, with hackers using automated tools to break into accounts and sell access on the dark web. A secure password manager helps you create and store strong, unique passwords for every account, making it far more difficult for attackers to compromise your logins, even if some of your information is leaked or targeted by AI-driven attacks. Get more details about my best expert-reviewed password managers of 2025 here.

9) Keep your software updated: AI-generated malware and advanced phishing kits are designed to exploit vulnerabilities in outdated software. To stay ahead of these evolving threats, ensure all your devices, browsers, and applications are updated with the latest security patches. Regular updates close security gaps that AI-powered malware and cybercriminals are actively seeking to exploit.
Cybercriminals are now using AI to power some of the most convincing and scalable attacks we've ever seen. From deepfake video calls and AI-generated phishing emails to stolen AI accounts and malware written by chatbots, these scams are becoming harder to detect and easier to launch. Attackers are even poisoning AI models with false information and creating fake tools that look legitimate but are designed to do harm. To stay safe, it's more important than ever to use strong antivirus protection, enable multi-factor authentication, and avoid sharing sensitive data with AI tools you do not fully trust.

Have you noticed AI scams getting more convincing? Let us know your experience or questions by writing to us. Your story could help someone else stay safe. For more of my tech tips & security alerts, subscribe to my free CyberGuy Report Newsletter. Follow Kurt on his social channels. Copyright 2025. All rights reserved.

Deluxe Corporation (DLX) Launches AI Chat-bot, DAX

Yahoo, 21-05-2025

Deluxe Corporation (NYSE:DLX) announced the launch of a new AI-powered assistant, DAX. The assistant blends artificial intelligence with human expertise, all grounded in privacy, compliance, and responsible AI practices. Its features include a Merchant Partner Chatbot, Customer Service Agent Assist, and an AI-Powered Website Assistant. DAX has been launched to empower users to improve their daily performance.

Yogaraj 'Yogs' Jayaprakasam, SVP and Chief Technology and Digital Officer at Deluxe, spoke about the strategic importance of DAX: 'DAX is more than a chatbot, it's a building block for how we scale AI to enhance service and simplify [operations]... we're investing in practical tools that deliver value to our customers while preserving the personal service Deluxe is known for.'

Deluxe Corporation (NYSE:DLX) has more than a century of experience providing its customers technology-enabled solutions for merchant services, payments, data solutions, and print. It has a long-standing relationship with banks, built largely on printing checks, and leverages this network in its new era of digitalization, such as payment platforms and data-driven insights. Deluxe was recently recognized with a 2025 CIO 100 award, reflecting the company's enterprise-wide approach to innovation.

While we acknowledge the potential of DLX to grow, our conviction lies in the belief that some AI stocks hold greater promise for delivering higher returns and have limited downside risk. If you are looking for an AI stock that is more promising than DLX and that has 100x upside potential, check out our report about this cheapest AI stock.

Disclosure: None.
