
25. Abnormal AI
Abnormal AI uses artificial intelligence to help large companies counter an old but dangerous problem: socially engineered attacks delivered via email that exploit human vulnerability, which remain among the largest cybersecurity threats.
Unlike traditional cybersecurity tools that look for known indicators of compromise, Abnormal uses artificial intelligence to build a baseline of each user's normal behavior, then filters out activity that deviates from it: emails sent at unusual times of day, wording that is out of character for that user, or an uncommon contact reaching out in a suspicious manner. Human analysts backstop the AI filter and respond to threats. The company can also detect attacks that bypass multi-factor authentication and guard against account takeovers.
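The mechanics are easiest to see in miniature. The sketch below is illustrative only, not Abnormal's actual models or features: it builds a per-user baseline of sending hours and known contacts, then scores an incoming message by how far it deviates from that baseline.

```python
from dataclasses import dataclass, field

# Minimal sketch of per-user behavioral baselining (hypothetical, for
# illustration). The baseline records the hours a user normally sends mail
# and the contacts they normally correspond with; new messages are scored
# by how much they deviate from that history.

@dataclass
class UserBaseline:
    usual_hours: set = field(default_factory=set)      # hours of day seen historically
    known_contacts: set = field(default_factory=set)   # senders seen historically

    def learn(self, sender: str, hour: int) -> None:
        self.usual_hours.add(hour)
        self.known_contacts.add(sender)

    def anomaly_score(self, sender: str, hour: int, urgent_language: bool) -> float:
        score = 0.0
        if hour not in self.usual_hours:
            score += 0.4          # message at an unusual time of day
        if sender not in self.known_contacts:
            score += 0.4          # uncommon contact reaching out
        if urgent_language:
            score += 0.2          # wording out of character, pressure tactics
        return score

baseline = UserBaseline()
baseline.learn("colleague@example.com", hour=10)

# A 2 a.m. request from an unknown sender with urgent wording scores high.
if baseline.anomaly_score("stranger@example.net", hour=2, urgent_language=True) >= 0.6:
    print("flag for human review")   # humans backstop the AI filter
```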
In 2024, Abnormal hit several key milestones: 100% year-over-year growth, $200 million in annual recurring revenue, and more than 3,000 customers across 35 countries.
The company has been expanding across Europe and into Japan, and has grown its headcount by 70% to more than 1,000 employees.
Abnormal has continued to roll out new products, including an AI system that works across cloud applications such as email and collaboration apps, and a security mailbox that automates the handling of user-reported phishing attacks.
This year, the company launched autonomous AI agents that give employees personalized security training in place of generic compliance modules, using real past attacks to generate the simulations.
With systems designed for companies with thousands of employees, Abnormal AI says more than 20% of the Fortune 500 use its services.
In April, the company secured a key authorization from the federal government to compete for contracts in the public sector.
Last August, Abnormal announced a $250 million Series D funding round led by Wellington Management. Other investors include CrowdStrike Falcon Fund and Greylock Partners, where CEO and serial entrepreneur Evan Reiser incubated the company.

Related Articles


Forbes
11-08-2025
How 2FA Must Evolve For A Future Powered By AI Agents
Michael DeCesare is the President at Abnormal AI, a leader in AI-native human behavior security.

As AI agents become more capable and context-aware, they're starting to take on tasks that were once strictly human-driven. AI is no longer merely a tool to assist users; it's increasingly operating with limited autonomy on our behalf—triaging security incidents, drafting reports, initiating support workflows or even responding to predefined threats in real time. While most implementations still involve human oversight or decision gates, the trajectory toward more independent operation is clear.

This evolving capability is powerful, but it also raises important questions. Chief among them: how do we preserve trust and integrity in these systems—particularly when it comes to authentication?

Two-factor authentication (2FA) has become table stakes for modern cybersecurity, widely recognized as a baseline defense against account compromise. The premise is simple: Even if an attacker obtains your username and password, they still need a separate piece of evidence—like a code from your phone, a push notification approval or a biometric scan—to gain access. But what happens when AI agents are the ones logging in, performing transactions or taking remediation steps that traditionally required a human in the loop? How do you add an extra security layer for these nonhuman identities?

When AI Becomes The User

Consider a scenario where an AI agent is tasked with investigating suspicious email activity. It may need to access inboxes, quarantine messages or disable accounts to stop an attack in progress. In the past, each of these steps would require a human analyst to authenticate, approve the action and confirm it through a second factor. In newer architectures, an agent might be delegated limited authority to act—within tightly scoped workflows and often under monitoring—without human intervention at each step.

This shift challenges our assumptions about how trust is established. If an AI accesses systems and takes action under delegated credentials, how do we ensure it doesn't exceed its intended scope or become an unwitting accomplice to a compromised process? After all, if an attacker hijacks the agent's credentials or manipulates its input data, the agent could take harmful actions faster and more persistently than any human user.

These scenarios highlight why traditional, static approaches to 2FA won't be enough. We can no longer rely solely on point-in-time authentication or assume that an initial verification is sufficient for an agent operating across systems over long time horizons. The increasing reliance on machine identities—whether for task automation or AI-assisted decision-making—demands a rethink of how trust and access are managed.

Evolving 2FA For Autonomous Systems

To build trust in machine-driven workflows, organizations need to rethink how authentication is applied—not as a static checkpoint, but as a dynamic, ongoing process. That starts with limiting the scope and duration of what AI agents are allowed to do. Just as you wouldn't grant an employee unrestricted access to every system, AI agents should be given only the permissions necessary to complete specific tasks, and those permissions should expire automatically. Ephemeral credentials reduce long-term exposure and limit the blast radius if something goes wrong.
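As a rough illustration of what scoped, expiring delegation might look like, the sketch below issues an AI agent a short-lived credential limited to a single task and checks it before each action. The class, scope names, and TTL are hypothetical, not a real Abnormal or identity-provider API.

```python
import time
import secrets
from dataclasses import dataclass

# Hypothetical sketch of ephemeral, narrowly scoped credentials for an AI agent.
# Names and scopes are illustrative assumptions, not a specific vendor's API.

@dataclass(frozen=True)
class AgentCredential:
    token: str
    allowed_scopes: frozenset      # e.g. {"email:quarantine"}
    expires_at: float              # epoch seconds

def issue_credential(scopes: set, ttl_seconds: int = 900) -> AgentCredential:
    """Grant only the scopes needed for one task, expiring automatically."""
    return AgentCredential(
        token=secrets.token_urlsafe(32),
        allowed_scopes=frozenset(scopes),
        expires_at=time.time() + ttl_seconds,
    )

def authorize(cred: AgentCredential, scope: str) -> bool:
    """Check scope and expiry before every action, not just at 'login'."""
    return time.time() < cred.expires_at and scope in cred.allowed_scopes

cred = issue_credential({"email:quarantine"}, ttl_seconds=600)
assert authorize(cred, "email:quarantine")      # within delegated scope and TTL
assert not authorize(cred, "accounts:disable")  # outside delegated scope, denied
```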
Equally important is enforcing least-privilege access by default. If an AI agent's role is to quarantine suspicious emails, it shouldn't also have the ability to reset passwords or disable user accounts. Tightly constrained privileges ensure that even if an agent is compromised or manipulated, its ability to cause harm is minimal.

Because AI agents can operate continuously, they also require continuous validation. Organizations should implement real-time monitoring to detect anomalies—whether that's access from an unusual location, actions at abnormal volumes or interactions beyond the agent's defined scope. These signals can trigger stepped-up authentication, alert administrators or revoke access automatically.

Transparency and auditability are just as critical. Every action an AI agent takes should be logged, along with the context behind it—what triggered the action, which systems were involved and under whose authority it was executed. These records support both compliance and accountability if an agent behaves unexpectedly.

Finally, 2FA itself must become more adaptive. Rather than a one-time check at login, authentication will need to evolve into a continuous process—where AI helps assess context and adjusts access dynamically. In high-risk situations, the system might ask for an extra factor of authentication, notify a human or block the action altogether. By adopting this more flexible model, organizations can retain the strengths of 2FA while adapting it to a world where software agents operate across environments and around the clock. The goal is not to remove human oversight, but to ensure AI operates securely, transparently and within clearly defined limits.

A New Foundation For Trust In The AI Era

As organizations build next-generation workflows, 2FA must evolve in both purpose and design. It's not going away—it's becoming more contextual and intelligent. Rather than simply verifying that a human is who they claim to be, it will increasingly verify that an AI agent is operating within the scope of its assigned role—and that its actions remain auditable and attributable to a responsible human owner. In a landscape where attackers and defenders are both using AI, trust will depend on robust controls, clear accountability and adaptive security layers. Getting 2FA right in this new context is a foundational step in ensuring that trust remains intact.
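The continuous-validation and audit ideas above can be sketched in a few lines: each agent action is logged with its context, checked against a handful of anomaly signals, and either allowed, escalated to step-up authentication, or revoked. Signal names and thresholds here are assumptions for illustration, not a specific product's behavior.

```python
import json
import time

# Illustrative sketch of continuous validation plus audit logging for an
# agent action. Signal names and thresholds are hypothetical.

def risk_signals(action: dict) -> list:
    signals = []
    if action["source_location"] not in action["expected_locations"]:
        signals.append("unusual_location")
    if action["volume_last_hour"] > action["normal_hourly_volume"] * 5:
        signals.append("abnormal_volume")
    if action["scope"] not in action["delegated_scopes"]:
        signals.append("out_of_scope")
    return signals

def evaluate(action: dict, audit_log: list) -> str:
    signals = risk_signals(action)
    # Every action is recorded with its context and responsible owner,
    # supporting accountability if the agent behaves unexpectedly.
    audit_log.append({"ts": time.time(), "action": action["scope"],
                      "owner": action["responsible_owner"], "signals": signals})
    if "out_of_scope" in signals:
        return "revoke_access"
    if signals:
        return "step_up_authentication"   # extra factor, or notify a human
    return "allow"

log = []
decision = evaluate({
    "scope": "email:quarantine", "delegated_scopes": ["email:quarantine"],
    "source_location": "us-east", "expected_locations": ["us-east"],
    "volume_last_hour": 3, "normal_hourly_volume": 2,
    "responsible_owner": "analyst@example.com",
}, log)
print(decision, json.dumps(log[-1]))
```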


Business Wire
05-08-2025
Abnormal AI Launches Continuous Security Posture Management to Safeguard Microsoft 365 Environments
LAS VEGAS--(BUSINESS WIRE)-- Abnormal AI, the leader in AI-native human behavior security, today announced its updated Security Posture Management product, bringing AI-driven protection, automated prioritization, and remediation guidance to customers' Microsoft 365 environments.

As Microsoft 365 environments become more complex, accidental misconfigurations are now a leading cause of cloud email vulnerabilities. The growing number of applications, layered settings, and fragmented ownership create blind spots and accidental openings that threat actors like Midnight Blizzard have exploited in the past. With deep Microsoft 365 integration and a proven ability to stop advanced email threats, Abnormal is ideally positioned to uncover these configuration risks. The new Security Posture Management add-on continuously detects misconfigurations across users, apps, and tenants, giving security teams the visibility and control they need to stay ahead of attackers.

'Thousands of organizations rely on Abnormal to stop email-based attacks like phishing and account compromise. But attackers are also exploiting misconfigurations to bypass phishing defenses,' said Evan Reiser, CEO of Abnormal AI. 'Because we already integrate deeply with Microsoft 365 to protect inbound email, we can extend our API-based architecture to detect these hidden risks. Security Posture Management gives security teams continuous visibility into misconfiguration risks across their entire Microsoft 365 environment.'

Key capabilities include:
Comprehensive Visibility: Continuously uncovers risky Microsoft 365 misconfigurations using CIS benchmarks and Abnormal threat intelligence.
Automated Prioritization: Surfaces the most dangerous risks first by factoring in impact, prevalence, and environment.
Remediation Guidance: Provides clear, actionable fixes with no manual audits or scripting.

Additional Resources:
Visit Abnormal at Black Hat 2025: Abnormal will be showcasing new Security Posture Management capabilities throughout the week at the CyBRR Cafe, located in front of the Expo Hall at Mandalay Bay. Demos are available upon request.
Discover More: Learn more about this product release in this blog post from CEO Evan Reiser.

About Abnormal AI: Abnormal AI is the leading AI-native human behavior security platform, leveraging machine learning to stop sophisticated inbound attacks and detect compromised accounts across email and connected applications. The anomaly detection engine leverages contextual signals to analyze the risk of every cloud email event—detecting and blocking sophisticated, socially-engineered attacks that target human vulnerability. Abnormal is designed to be deployed in minutes via an API integration with Microsoft 365 or Google Workspace, unlocking the full value of the platform instantly. Additional protection is available for Slack, Workday, ServiceNow, Zoom, and multiple other cloud applications. Abnormal is currently trusted by more than 3,200 organizations, including 25% of the Fortune 500, as it continues to redefine how cybersecurity works in the age of AI. Learn more at
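To make the idea concrete, here is a small, hypothetical sketch of benchmark-style posture checking: tenant settings are evaluated against a handful of rules and the findings are ranked by impact so the riskiest misconfigurations surface first. The rules, weights, and fixes are illustrative assumptions, not Abnormal's product logic or actual CIS benchmark content.

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical posture-check sketch: evaluate a tenant configuration against
# benchmark-style rules and prioritize findings by impact.

@dataclass
class Rule:
    name: str
    impact: int                      # 1 (low) .. 5 (critical)
    check: Callable[[dict], bool]    # returns True when misconfigured
    fix: str

RULES = [
    Rule("legacy_auth_enabled", 5,
         lambda cfg: cfg.get("legacy_auth", False),
         "Disable legacy authentication protocols."),
    Rule("external_forwarding_allowed", 4,
         lambda cfg: cfg.get("auto_forwarding", "off") != "off",
         "Block automatic forwarding to external domains."),
    Rule("mfa_not_enforced", 5,
         lambda cfg: not cfg.get("mfa_required", True),
         "Require multi-factor authentication for all users."),
]

def assess(tenant_config: dict) -> list:
    """Return misconfiguration findings, most dangerous first."""
    findings = [{"rule": r.name, "impact": r.impact, "fix": r.fix}
                for r in RULES if r.check(tenant_config)]
    return sorted(findings, key=lambda f: f["impact"], reverse=True)

sample_config = {"legacy_auth": True, "auto_forwarding": "external", "mfa_required": True}
for finding in assess(sample_config):
    print(finding["impact"], finding["rule"], "->", finding["fix"])
```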


Business Wire
22-07-2025
New Report from Abnormal AI Shows Universal Alignment on AI as the Future of the SOC
LAS VEGAS--(BUSINESS WIRE)-- Abnormal AI, the leader in AI-native human behavior security, today released a new research report, Human-Centered AI: Redefining the Modern SOC, revealing a rare consensus among cybersecurity leaders and frontline analysts: AI is no longer optional—it's becoming the foundation of the modern security operations center (SOC).

Based on a survey of nearly 500 security leaders and SOC analysts across the United States and United Kingdom, the report uncovers widespread alignment on both the urgency and optimism surrounding AI adoption. Rather than fearing disruption, most see AI as a critical partner in scaling defenses while keeping people empowered and engaged.

Several key findings underscore this shift:
96% of leaders say they have no plans to reduce headcount as AI adoption accelerates. Instead, they are reallocating talent to higher-value work such as threat hunting, proactive security initiatives, and analyst mentorship.
75% of analysts report that AI tools are already improving their job satisfaction by reducing alert fatigue and automating repetitive triage.
63% of analysts say AI is improving the accuracy of investigations, rising to 69% among daily AI users.
Over the next 3–5 years, both leaders and analysts expect autonomous SOC operations to become the norm, as AI matures from supportive automation to intelligent collaboration.

'The findings show that the old narrative of AI replacing security professionals is falling away,' said Mick Leach, Field CISO at Abnormal AI. 'Today's leaders and analysts universally see AI as a force multiplier that empowers teams to do their best work—more accurately, more efficiently, and with greater satisfaction.'

While cost savings and operational efficiencies have long been the primary drivers behind AI adoption, the study reveals that AI's benefits extend far beyond these traditional objectives. As AI takes over time-consuming, repetitive tasks, security teams are increasingly able to redirect their focus toward proactive defense, deeper investigations, and more strategic initiatives. Analysts themselves are among the most optimistic: those using AI daily report higher job satisfaction and greater confidence in their SOC's overall effectiveness.

'This is the first time we've seen such universal alignment between CISOs and frontline analysts about where AI fits,' continued Leach. 'The consensus is clear: human-centered AI isn't just inevitable—it's foundational to the future of security. Getting there requires redefining the roles of human analysts alongside AI and shifting resources toward proactive, risk-based operations. Those that succeed will be the ones who embrace AI not just as a tool, but as a strategic partner in both technology and talent evolution.'

Download the full report to explore the complete findings.

About Abnormal AI
Abnormal AI is the leading AI-native human behavior security platform, leveraging machine learning to stop sophisticated inbound attacks and detect compromised accounts across email and connected applications. The anomaly detection engine leverages identity and context to understand human behavior and analyze the risk of every cloud email event—detecting and stopping sophisticated, socially-engineered attacks that target the human vulnerability. You can deploy Abnormal in minutes with an API integration for Microsoft 365 or Google Workspace and experience the full value of the platform instantly.
Additional protection is available for Slack, Workday, ServiceNow, Zoom, and multiple other cloud applications. Abnormal is currently trusted by more than 3,200 organizations, including over 20% of the Fortune 500, as it continues to redefine how cybersecurity works in the age of AI. Learn more at