Latest news with #MichaelDeCesare


Forbes
11-08-2025
- Forbes
How 2FA Must Evolve For A Future Powered By AI Agents
Michael DeCesare is the President at Abnormal AI, a leader in AI-native human behavior security.

As AI agents become more capable and context-aware, they're starting to take on tasks that were once strictly human-driven. AI is no longer merely a tool to assist users; it's increasingly operating with limited autonomy on our behalf—triaging security incidents, drafting reports, initiating support workflows or even responding to predefined threats in real time. While most implementations still involve human oversight or decision gates, the trajectory toward more independent operation is clear.

This evolving capability is powerful, but it also raises important questions. Chief among them: How do we preserve trust and integrity in these systems, particularly when it comes to authentication?

Two-factor authentication (2FA) has become table stakes for modern cybersecurity, widely recognized as a baseline defense against account compromise. The premise is simple: Even if an attacker obtains your username and password, they still need a separate piece of evidence—like a code from your phone, a push notification approval or a biometric scan—to gain access. But what happens when AI agents are the ones logging in, performing transactions or taking remediation steps that traditionally required a human in the loop? How do you add an extra security layer for these nonhuman identities?

When AI Becomes The User

Consider a scenario where an AI agent is tasked with investigating suspicious email activity. It may need to access inboxes, quarantine messages or disable accounts to stop an attack in progress. In the past, each of these steps would require a human analyst to authenticate, approve the action and confirm it through a second factor. In newer architectures, an agent might be delegated limited authority to act—within tightly scoped workflows and often under monitoring—without human intervention at each step.

This shift challenges our assumptions about how trust is established. If an AI accesses systems and takes action under delegated credentials, how do we ensure it doesn't exceed its intended scope or become an unwitting accomplice to a compromised process? After all, if an attacker hijacks the agent's credentials or manipulates its input data, the agent could take harmful actions faster and more persistently than any human user.

These scenarios highlight why traditional, static approaches to 2FA won't be enough. We can no longer rely solely on point-in-time authentication or assume that an initial verification is sufficient for an agent operating across systems over long time horizons. The increasing reliance on machine identities—whether for task automation or AI-assisted decision-making—demands a rethink of how trust and access are managed.

Evolving 2FA For Autonomous Systems

To build trust in machine-driven workflows, organizations need to rethink how authentication is applied—not as a static checkpoint, but as a dynamic, ongoing process. That starts with limiting the scope and duration of what AI agents are allowed to do. Just as you wouldn't grant an employee unrestricted access to every system, AI agents should be given only the permissions necessary to complete specific tasks, and those permissions should expire automatically. Ephemeral credentials reduce long-term exposure and limit the blast radius if something goes wrong.
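To make the ephemeral-credential idea concrete, here is a minimal Python sketch of how a short-lived, narrowly scoped grant might work. The names (AgentGrant, issue_grant, authorize) and the token store are hypothetical illustrations for this article, not a description of any vendor's actual API:

```python
# Minimal sketch of ephemeral, least-privilege credentials for an AI agent.
# All names here are hypothetical illustrations, not a real product API.
import secrets
import time
from dataclasses import dataclass

@dataclass
class AgentGrant:
    token: str
    scopes: frozenset      # e.g. {"email:quarantine"} and nothing more
    expires_at: float      # epoch seconds; the grant expires automatically

_grants: dict[str, AgentGrant] = {}

def issue_grant(scopes: set[str], ttl_seconds: int = 900) -> str:
    """Mint a short-lived token limited to exactly the scopes requested."""
    token = secrets.token_urlsafe(32)
    _grants[token] = AgentGrant(token, frozenset(scopes), time.time() + ttl_seconds)
    return token

def authorize(token: str, action: str) -> bool:
    """Allow the action only if the grant exists, is unexpired, and covers it."""
    grant = _grants.get(token)
    if grant is None or time.time() > grant.expires_at:
        _grants.pop(token, None)   # expired grants are purged, limiting blast radius
        return False
    return action in grant.scopes

# An agent scoped to quarantining email cannot reset passwords:
t = issue_grant({"email:quarantine"}, ttl_seconds=600)
assert authorize(t, "email:quarantine")
assert not authorize(t, "accounts:reset_password")
```

The design choice the sketch illustrates is that authorization is attached to a specific task and a short time window, so a stolen token is worth little once either runs out.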
Equally important is enforcing least-privilege access by default. If an AI agent's role is to quarantine suspicious emails, it shouldn't also have the ability to reset passwords or disable user accounts. Tightly constrained privileges ensure that even if an agent is compromised or manipulated, its ability to cause harm is minimal.

Because AI agents can operate continuously, they also require continuous validation. Organizations should implement real-time monitoring to detect anomalies—whether that's access from an unusual location, actions at abnormal volumes or interactions beyond the agent's defined scope. These signals can trigger stepped-up authentication, alert administrators or revoke access automatically.

Transparency and auditability are just as critical. Every action an AI agent takes should be logged, along with the context behind it—what triggered the action, which systems were involved and under whose authority it was executed. These records support both compliance and accountability if an agent behaves unexpectedly.

Finally, 2FA itself must become more adaptive. Rather than a one-time check at login, authentication will need to evolve into a continuous process—where AI helps assess context and adjusts access dynamically. In high-risk situations, the system might ask for an extra factor of authentication, notify a human or block the action altogether. (A code sketch of how these monitoring, step-up and audit pieces might fit together follows at the end of this article.)

By adopting this more flexible model, organizations can retain the strengths of 2FA while adapting it to a world where software agents operate across environments and around the clock. The goal is not to remove human oversight, but to ensure AI operates securely, transparently and within clearly defined limits.

A New Foundation For Trust In The AI Era

As organizations build next-generation workflows, 2FA must evolve in both purpose and design. It's not going away; it's becoming more contextual and intelligent. Rather than simply verifying that a human is who they claim to be, it will increasingly verify that an AI agent is operating within the scope of its assigned role—and that its actions remain auditable and attributable to a responsible human owner. In a landscape where attackers and defenders are both using AI, trust will depend on robust controls, clear accountability and adaptive security layers. Getting 2FA right in this new context is a foundational step in ensuring that trust remains intact.
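As referenced above, here is a minimal Python sketch of continuous validation with adaptive step-up authentication and contextual audit logging. Every identifier, signal, and threshold below is an assumption made for illustration; it is not how any particular product, including Abnormal AI's, is implemented:

```python
# Minimal sketch: per-action risk scoring, adaptive step-up, and audit logging.
# Signal names, thresholds, and handlers are hypothetical assumptions.
import json
import time

def risk_score(event: dict) -> int:
    """Score an agent action from simple contextual signals."""
    score = 0
    if event["source_location"] not in event["usual_locations"]:
        score += 2                    # access from an unusual location
    if event["actions_last_minute"] > event["normal_rate"] * 10:
        score += 2                    # actions at abnormal volume
    if event["action"] not in event["allowed_scopes"]:
        score += 5                    # attempt beyond the agent's defined scope
    return score

def enforce(event: dict) -> str:
    """Decide: allow, require an extra factor, or block, and log the context."""
    score = risk_score(event)
    if score >= 5:
        decision = "block_and_alert"  # revoke access and notify administrators
    elif score >= 2:
        decision = "step_up_auth"     # ask the responsible human owner to approve
    else:
        decision = "allow"
    # Every action is recorded with its context to support auditability.
    audit_record = {
        "timestamp": time.time(),
        "agent_id": event["agent_id"],
        "action": event["action"],
        "trigger": event.get("trigger"),       # what prompted the action
        "owner": event["responsible_owner"],   # under whose authority it ran
        "decision": decision,
        "risk_score": score,
    }
    print(json.dumps(audit_record))            # stand-in for a real audit sink
    return decision

event = {
    "agent_id": "email-triage-agent",
    "action": "email:quarantine",
    "allowed_scopes": {"email:quarantine"},
    "trigger": "suspicious-inbound-message",
    "responsible_owner": "soc-analyst@example.com",
    "source_location": "us-east-1",
    "usual_locations": {"us-east-1"},
    "actions_last_minute": 3,
    "normal_rate": 2,
}
assert enforce(event) == "allow"
```

In practice the signals, thresholds, and step-up mechanism would come from an organization's own identity and monitoring stack; the point of the sketch is that authorization becomes a per-action decision with an audit trail, rather than a one-time login event.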


Business Wire
10-06-2025
- Business
- Business Wire
Abnormal AI Named to CNBC Disruptor 50 List for Second Consecutive Year, Showcasing Continued Innovation and Leadership in AI-Powered Cybersecurity
LAS VEGAS--(BUSINESS WIRE)-- Abnormal AI, the leader in AI-native human behavior security, has been named to the prestigious CNBC Disruptor 50—a list spotlighting the most innovative and forward-thinking private companies transforming the economy. Recognized for the second consecutive year, Abnormal moved up to No. 25 on the list, reflecting the company's sustained momentum, scalability, and impact across the AI and cybersecurity landscape.

Abnormal AI was selected for its significant business growth and transformative product ecosystem. The annual list is curated by CNBC's editorial team with input from data partners PitchBook and IBISWorld, and the Disruptor 50 Advisory Council, composed of experts in innovation and entrepreneurship who evaluate companies based on a blend of quantitative and qualitative insights. The award acknowledges how Abnormal is leveraging bold ideas, cutting-edge technologies, and scalable models to challenge incumbents and drive real-world impact.

'It's an incredible honor to be recognized as a CNBC Disruptor once again,' said Evan Reiser, CEO and founder of Abnormal AI. 'Earning this accolade for the second consecutive year validates our relentless focus on innovation, our commitment to our customers, and our continued creation of breakthrough technology that's reshaping the future of cybersecurity through the power of behavioral AI.'

Since its founding in 2018, Abnormal AI has emerged as a category leader, protecting over 3,200 organizations, including more than 20% of the Fortune 500, with unparalleled speed and accuracy. Its behavioral AI platform has mitigated over $10 billion in annual risk, with adoption accelerating globally. The company has continued its strong trajectory by hitting landmark milestones—launching breakthrough AI agents that reimagine security awareness training, achieving FedRAMP authorization in just 256 days, and announcing plans to expand into new countries across Europe and Asia.

In addition to this back-to-back CNBC Disruptor 50 recognition, Abnormal has been honored with several other accolades in recent months. These distinctions include placement on Fortune's Most Innovative Companies of 2025 list, making the CRN AI 100 as a top 20 hottest AI cybersecurity company for the second consecutive year, winning two 2025 Cyber Defense Magazine Global Infosec Awards for Cutting Edge Cybersecurity AI and Pioneering Email Security and Management, and securing spots on Rising in Cyber 2025, the SC Awards Europe (for Best Email Security Solution and Best Behavior Analytics/Enterprise Threat Detection), and the 2025 InfraRed 100.

Michael DeCesare, president at Abnormal AI, added, 'This recognition on the CNBC Disruptor 50 reinforces the traction that we are seeing in the market as appetite grows for AI-native solutions. Our go-to-market strategy is accelerating alongside this rising demand, especially as organizations across industries face escalating threats—including those powered by AI. We're turning the tables: using good AI to fight malicious AI.'

For the full CNBC Disruptor 50 list, visit here.

About Abnormal AI

Abnormal AI is the leading AI-native human behavior security platform, leveraging machine learning to stop sophisticated inbound attacks and detect compromised accounts across email and connected applications.
The anomaly detection engine leverages identity and context to understand human behavior and analyze the risk of every cloud email event—detecting and stopping sophisticated, socially engineered attacks that target the human vulnerability. You can deploy Abnormal in minutes with an API integration for Microsoft 365 or Google Workspace and experience the full value of the platform instantly. Additional protection is available for Slack, Workday, ServiceNow, Zoom, and multiple other cloud applications. Abnormal is currently trusted by more than 3,200 organizations, including over 20% of the Fortune 500, as it continues to redefine how cybersecurity works in the age of AI. Learn more at