Latest news with #HumanRiskManagement

Zawya
6 days ago
- Business
- Zawya
From perception to protection: What Africa's Chief Information Security Officers (CISOs) don't know about employees could cost them
Cybersecurity in Africa is entering a new phase. As organisations mature their defences and invest in security awareness training (SAT), a difficult-to-spot but critical gap is emerging – not between tools and cyber threats, but between what leaders believe about their employees and what those employees actually experience. The KnowBe4 Africa Human Risk Management Report 2025 provides a glimpse into this mismatch. The results show that many leaders are overestimating their employees' preparedness and underestimating the gaps in trust, training, and action. Says Anna Collard, SVP of Content Strategy and Evangelist at KnowBe4 Africa: 'It's not just that awareness alone isn't enough – it's that the level of employees' awareness is being misunderstood by the organisational leaders responsible for it.'

The perception gap is growing, but measurable

While 50% of decision-makers in 2025 rate employee cyber threat-reporting confidence at 4 out of 5, only 43% of employees in 2024 said that they felt confident recognising a threat, while one-third disagreed that their training was sufficient. 68% of decision-makers believe that SAT within their organisations is tailored by role, but only 33% of employees in 2024 felt that to be true – with 16% actively disagreeing. The implications are serious, because a workforce that appears trained and aware on paper may in fact be uncertain, unsupported, and vulnerable. 'This discrepancy between perception and experience is exactly where human risk thrives,' says Collard. 'If leaders don't correct course, they're building security strategies on false confidence.'

Why measuring awareness is no longer enough

One of the most frequently cited challenges in the report is deceptively simple: measuring whether SAT works. More than four in ten respondents said that they struggle to track whether their security awareness programmes translate into safer behaviours.
A key contributing factor, identified in the report, is that many organisations still rely on one-size-fits-all SAT, often delivered only annually or biannually, without role-specific customisation or behavioural feedback loops. While 68% say they offer role-based training, this claim is undermined by the fact that 'lack of role alignment' remains one of the top challenges respondents report. The discrepancy is clearest in sectors like manufacturing and healthcare, where generic SAT is most common. Size, it seems, also matters: larger organisations are consistently less confident in employee readiness, train less frequently, and struggle more to measure outcomes. Collard says: 'Awareness without action is like an alarm that no one responds to. Organisations are investing in security awareness training, but without the structure, tailoring, and follow-through to translate that into secure behaviour.'

Beyond BYOD: The new blind spot is AI

One of the most urgent themes to emerge is the rapid rise of 'shadow AI' use. With nearly half of all organisations still developing formal AI policies, yet up to 80% of employees using personal devices for work, the risk of unmonitored, unsanctioned AI usage is rising fast. East Africa is leading the way with more proactive AI governance, while Southern Africa, despite topping training frequency, lags behind on AI policy implementation. 'Technology has moved faster than policy,' Collard explains. 'And unless AI tools are properly governed, they become as much a risk vector as they are an asset.'

The road ahead: Action, alongside awareness

The report outlines five imperatives for African organisations:
- Customise SAT by role and risk exposure.
- Track what matters – not just participation, but behavioural outcomes.
- Formalise reporting structures employees trust and understand.
- Close the AI policy gap before misuse becomes systemic.
- Contextualise strategies based on region and sector – because resilience is not one-size-fits-all.
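The 'track behavioural outcomes, not participation' imperative can be made concrete with a small aggregation over simulation events. This is a minimal illustrative sketch, not taken from the report; the event names and roles are invented for the example:

```python
from collections import defaultdict

# Hypothetical event log: (employee_role, event_type), where event_type is
# "sim_sent", "sim_clicked" or "sim_reported" for phishing simulations.
events = [
    ("finance", "sim_sent"), ("finance", "sim_clicked"),
    ("finance", "sim_sent"), ("finance", "sim_reported"),
    ("engineering", "sim_sent"), ("engineering", "sim_reported"),
    ("engineering", "sim_sent"), ("engineering", "sim_reported"),
]

def behavioural_outcomes(events):
    """Summarise simulation outcomes per role, so a programme is judged by
    behaviour (click rate, report rate) rather than training completion."""
    counts = defaultdict(lambda: defaultdict(int))
    for role, event in events:
        counts[role][event] += 1
    summary = {}
    for role, c in counts.items():
        sent = c["sim_sent"] or 1  # guard against division by zero
        summary[role] = {
            "click_rate": c["sim_clicked"] / sent,
            "report_rate": c["sim_reported"] / sent,
        }
    return summary

print(behavioural_outcomes(events))
```

Per-role rates like these are the kind of behavioural signal the report argues should replace participation counts as the measure of a programme.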
'The human element is often spoken about, but rarely measured in ways that lead to action that acknowledges context. Our goal is to help organisations stop guessing and start structuring their defences around real, contextual insights,' says Collard. 'This is a moment to move from compliance-driven box-ticking to culture-driven resilience. We have the data. Now we need the will.'

The full report is now available for download here:

Distributed by APO Group on behalf of KnowBe4. Contact details: KnowBe4: Anne Dolinschek anned@ Red Ribbon: TJ Coenraad tayla@


Forbes
10-07-2025
- Business
- Forbes
How Agentic AI Can Transform HRM From Reactive To Proactive
Uzair Ahmed is an entrepreneur and startup enthusiast currently serving as the Co-Founder and CTO of Right-Hand Cybersecurity.

Cybersecurity threats are evolving faster than traditional defense practices can adapt. As a result, the security awareness space is reaching a breaking point. Compliance-specific annual training, static phishing simulations and content-heavy platforms are no longer enough to address the behavior-driven nature of today's threats. Many in the human risk management (HRM) space believe the future lies in agentic AI—systems that don't just analyze but act autonomously. This marks a shift from passive education to intelligent, real-time defense coaching. However, while agentic AI offers promise, it must be implemented with care and strategy.

What Is Agentic AI?

Agentic AI refers to systems that operate with a degree of autonomy—capable of perceiving, deciding and acting in pursuit of a defined goal. In HRM, that goal is to reduce human-induced cybersecurity risk by monitoring and learning from individual behaviors and contextual data in real time. These agents don't wait for a human to assign training. They detect risk signals, interpret user intent and take timely action—coaching employees, flagging anomalies or triggering interventions automatically.

Why Traditional Approaches Fall Short

Conventional security awareness solutions often rely on pre-scheduled training regardless of user behavior, static phishing templates recycled across the company and a belief that more content equals better preparedness. But today's risks are not static—they're behavioral, contextual and moment-driven. A user who passed last month's phishing test might still fall for a novel attack today. A developer with elevated privileges might become risky after installing unknown packages or working at odd hours. Situational awareness and adaptability are now essential, and agentic AI has the potential to meet this demand.
Challenges Of Deploying Agentic AI In HRM

Despite its potential, agentic AI is not a plug-and-play solution. One of the most significant challenges is building trust—both with leadership and employees. Concerns about privacy, autonomy and errors can stall adoption. A poorly timed or incorrect AI action can damage credibility and reduce engagement. Data integration is another barrier. Agentic AI relies on inputs from identity systems, communications tools, developer environments and more. Many organizations operate with fragmented data, making it hard for AI to form accurate insights.

To overcome these challenges, organizations should:

• Start with assistive mode. Let AI suggest actions before granting autonomy.
• Ensure transparency. Make decisions explainable to admins and, where possible, to users.
• Design for context. Adapt interventions to user roles and activity history to avoid false positives.
• Use human-in-the-loop models. For sensitive actions, combine AI guidance with human approval.

These strategies lay the foundation for trust and operational success.

How Agentic AI Can Transform Human Risk Management

When these steps are reflected in an organization's strategy, agentic AI can help reimagine cyber awareness and HRM across four key dimensions: AI agents continuously analyze behavioral signals from systems like email, browsers, simulated exercises, training assignments and developer tools. This allows them to identify anomalies and evolving threats beyond what manual reviews or periodic training can detect. Instead of generic e-learning, agents deliver micro-interventions tailored to the user's context, behavior and role. A salesperson clicking suspicious links might receive a quick deepfake vishing call simulating a real threat scenario. A developer committing secrets to GitHub might get an immediate Slack nudge with secure coding tips.
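The recommendations above (assistive mode, transparency, human-in-the-loop approval for sensitive actions) could be combined along these lines. Everything here is a hypothetical sketch: the signal names, actions and helper functions are invented for illustration, not any real product's API:

```python
from dataclasses import dataclass

@dataclass
class Intervention:
    user: str
    signal: str     # observed risk signal, e.g. "clicked_suspicious_link"
    action: str     # suggested response, e.g. "assign_micro_training"
    rationale: str  # explanation surfaced to admins (transparency)

# Actions that always require a human decision (human-in-the-loop)
SENSITIVE_ACTIONS = {"lock_account", "revoke_token"}

def propose(user, signal):
    """Assistive mode: the agent only *suggests* an intervention."""
    if signal == "clicked_suspicious_link":
        return Intervention(user, signal, "assign_micro_training",
                            "User clicked a simulated phishing link")
    if signal == "committed_secret":
        return Intervention(user, signal, "revoke_token",
                            "Credential pushed to a shared repository")
    return None

def execute(intervention, admin_approved=False):
    """Sensitive actions stay pending until an admin approves them."""
    if intervention.action in SENSITIVE_ACTIONS and not admin_approved:
        return f"PENDING approval: {intervention.action} for {intervention.user}"
    return f"EXECUTED: {intervention.action} for {intervention.user}"
```

In this toy flow, a low-stakes nudge executes immediately, while a token revocation waits for approval—one way to phase in autonomy gradually, as the article suggests.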
Agents can initiate nudges, recommend remediation or enroll users into adaptive coaching paths without needing admin intervention—saving time and scaling response. With every interaction, agents can learn what works—refining nudges, timing, content and delivery channels to increase engagement and behavior change, adapting with every user response.

From Awareness To Action: Why This Matters

The goal of human risk management has always been to reduce human cyber risk, but awareness alone isn't enough. Behavior is what matters. The strategic use of agentic AI can help bridge the gap between awareness and action by reducing response time from detection to intervention, scaling personalized experiences across thousands of employees and ensuring interventions are timely, relevant and effective. This elevates HRM from a compliance tool to a behavior-change engine.

Final Thought: Agentic AI Isn't A Silver Bullet—It's A Catalyst

Cybersecurity is entering an era where systems don't just alert—they act. Agentic AI introduces new ways to guide users, reduce risk and scale defensive behavior across the organization. However, these benefits only emerge when agentic AI is deployed strategically. Success depends on:

• Clear goals and boundaries
• Transparent communication
• Ethical considerations and user trust
• Strong integration with organizational systems

The organizations that thrive in this new era will be those that use agentic AI not as a magic solution, but as a catalyst within a thoughtful, holistic human risk strategy—one that evolves alongside the threats it aims to defeat.

Forbes Technology Council is an invitation-only community for world-class CIOs, CTOs and technology executives.
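The "learn what works" loop the article describes—refining delivery channels based on engagement—can be approximated with a simple epsilon-greedy selector. This is a toy sketch under the assumption that engagement is a binary signal per nudge; it is not how any vendor necessarily implements this:

```python
import random

class NudgeSelector:
    """Epsilon-greedy choice among delivery channels, learning from
    engagement feedback: mostly pick the channel with the best observed
    engagement rate, occasionally explore the others."""

    def __init__(self, channels, epsilon=0.1, seed=None):
        self.channels = list(channels)
        self.epsilon = epsilon
        self.rng = random.Random(seed)
        self.stats = {c: {"tries": 0, "engaged": 0} for c in self.channels}

    def _rate(self, channel):
        s = self.stats[channel]
        return s["engaged"] / s["tries"] if s["tries"] else 0.0

    def choose(self):
        if self.rng.random() < self.epsilon:
            return self.rng.choice(self.channels)  # explore
        return max(self.channels, key=self._rate)  # exploit best so far

    def feedback(self, channel, engaged):
        """Record whether the user engaged with a nudge on this channel."""
        self.stats[channel]["tries"] += 1
        self.stats[channel]["engaged"] += int(engaged)
```

For example, if Slack nudges get engagement and email nudges are ignored, the selector drifts toward Slack while still sampling email occasionally—one plausible mechanism behind "refining delivery channels with every user response."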


CNN
18-05-2025
- CNN
CNN correspondent walks through aftermath of deadly tornado
Deepfake detectors fooled by expert

With AI technology creating ever more realistic deepfakes, detectors are not up to the challenge of telling what is real from what is fake, according to an industry expert. CNN's Isabel Rosales looks at how this technology can be bypassed and what you can do to protect yourself. An earlier version of this video gave the incorrect title for Perry Carpenter. He is the Chief Human Risk Management Strategist at KnowBe4.


CNN
18-05-2025
- CNN
Rare dust storm blankets Chicago

