
Latest news with #KnowBe4Africa

Generative AI Tools Expose Corporate Secrets Through User Prompts

Arabian Post

4 days ago

A significant portion of employee interactions with generative AI tools is inadvertently leaking sensitive corporate data, posing serious security and compliance risks for organisations worldwide. A comprehensive analysis by Harmonic Security, covering tens of thousands of prompts submitted to platforms such as ChatGPT, Copilot, Claude, Gemini, and Perplexity, revealed that 8.5% of these interactions contained sensitive information. Notably, 45.77% of the compromised data pertained to customer information, including billing details and authentication credentials. Employee-related data, such as payroll records and personal identifiers, constituted 26.68%, while legal and financial documents accounted for 14.95%. Security-related information, including access keys and internal protocols, made up 6.88%, and proprietary source code comprised 5.64% of the sensitive data identified.

The prevalence of free-tier usage among employees exacerbates the risk. In 2024, 63.8% of ChatGPT users operated on the free tier, with 53.5% of sensitive prompts entered through these accounts. Similar patterns were observed across other platforms, with 58.62% of Gemini users, 75% of Claude users, and 50.48% of Perplexity users relying on free versions. These free tiers often lack robust security features, increasing the likelihood of data exposure.

Anna Collard, Senior Vice President of Content Strategy & Evangelist at KnowBe4 Africa, highlighted the unintentional nature of these data leaks. She noted that users often underestimate the sensitivity of the information they input into AI platforms, leading to inadvertent disclosures. Collard emphasised that the casual, conversational nature of generative AI tools can lower users' guard, resulting in the sharing of confidential information that, when aggregated, can be exploited by malicious actors for targeted attacks.

The issue is compounded by the lack of comprehensive governance policies within organisations. A study by Dimensional Research and SailPoint found that while 96% of IT professionals acknowledge the security threats posed by autonomous AI agents, only 54% have full visibility into AI agent activities, and a mere 44% have established governance policies. Furthermore, 23% of IT professionals reported instances where AI agents were manipulated into revealing access credentials, and 80% observed unintended actions by these agents, such as accessing unauthorised systems or sharing inappropriate data.

The rapid adoption of generative AI tools, driven by their potential to enhance productivity and innovation, has outpaced the development of adequate security measures. Organisations are now grappling with the challenge of balancing the benefits of AI integration with the imperative to protect sensitive data. Experts advocate stringent oversight mechanisms, including robust access controls and comprehensive user education programmes, to mitigate the risks associated with generative AI usage.
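One form such oversight can take is screening prompts for obviously sensitive patterns before they ever leave the organisation. The sketch below is a minimal illustration of that idea in Python; the pattern list, category names and blocking behaviour are assumptions for the example, not a description of any vendor's product.

```python
import re

# Illustrative patterns only; a real data-loss-prevention policy would be far broader.
SENSITIVE_PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "payment card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "credential or API key": re.compile(r"\b(?:api[_-]?key|password|secret|token)\s*[:=]\s*\S+", re.IGNORECASE),
}

def scan_prompt(prompt: str) -> list[str]:
    """Return the categories of sensitive data detected in a prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(prompt)]

def submit_prompt(prompt: str) -> None:
    """Block or forward a prompt based on the scan result."""
    findings = scan_prompt(prompt)
    if findings:
        # Block, warn, or route for review instead of sending the prompt externally.
        print(f"Blocked: prompt appears to contain {', '.join(findings)}.")
        return
    print("Prompt passed the basic check; forwarding to the AI service.")

if __name__ == "__main__":
    submit_prompt("Draft a reminder to jane.doe@example.com about invoice 4411, card 4111 1111 1111 1111.")
```

In practice such checks would sit alongside access controls and user education rather than replace them; the point is simply that the prompt itself is now part of the organisation's data flow and can be governed like any other channel.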

Why Empowered People Are the Real Cyber Superpower

Zawya

05-05-2025

It's time to retire the tired narrative that employees are the 'weakest link' in cybersecurity. They're not. They're simply the most frequently targeted. And that makes sense: if you're a cybercriminal, why brute-force your way into secure systems when you can just trick a human? That is why over-relying on technical controls alone goes wrong, and so does treating users like liabilities to be controlled rather than assets to be empowered.

Human Risk Management (HRM) is not about shifting blame, but about enabling better decisions at every level. It's a layered, pragmatic strategy that combines technology, culture, and behaviour design to reduce human cyber risk in a sustainable way. And it recognises a critical truth: your people can be your greatest defence, if you equip them well. The essence of HRM is empowering individuals to make better risk decisions, but it's even more than that. 'With the right combination of tools, culture and security practices, employees become an extension of your security programme, rather than just an increased attack surface,' asserts Anna Collard, SVP Content Strategy & Evangelist at KnowBe4 Africa.

A recent IBM study revealed that more than 90% of all cybersecurity breaches can be traced back to human error, with employees successfully exploited through phishing scams, weak passwords or poor handling of sensitive data. Companies have long seen the upward trend in this threat, thanks to numerous studies, and consequently employees are often judged to be the biggest risk companies need to manage. This perspective, though, denies businesses the opportunity to develop the best defence they could have: empowered, proactive employees at the frontline, not behind it.

Shield users – but also train them through exposure

Of course, the first thing companies should do is protect and shield employees from real threats. Prevention and detection technologies – email gateway filters, endpoint protection, AI-driven analysis – are essential to keeping malicious content from ever reaching users' inboxes or devices. But here's the catch: if users are never exposed to threats, they don't build the muscle to recognise them when they do get through.

Enter the prevalence effect, a cognitive bias which shows that the less frequently someone sees a threat (like a phishing email), the less likely they are to spot it when it finally appears. It's a fascinating and slightly counterintuitive insight: in trying to protect users too much, we may be making them more vulnerable. That's why simulated phishing campaigns and realistic training scenarios are so critical. They provide safe, controlled exposure to common attack tactics, so people can develop the reflexes, pattern recognition, and critical thinking needed to respond wisely in real situations.

Many of today's threats don't just rely on technical vulnerabilities – they exploit human attention. Attackers leverage stress, urgency, and distraction to bypass logic and trigger impulsive actions. Whether it's phishing, smishing, deepfakes, or voice impersonation scams, the aim is the same: manipulate humans to bypass scrutiny. That's why a foundational part of HRM is building what I call digital mindfulness – the ability to pause, observe, and evaluate before acting. This isn't abstract wellness talk; it's a practical skill that helps people notice deception tactics in real time and stay in critical-thinking mode instead of reacting on autopilot.
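One concrete way to create that pause is a 'second chance' style prompt that interrupts a risky click. The sketch below is a minimal, hypothetical illustration in Python: the allowlist of familiar domains and the wording of the nudge are assumptions for the example, not a description of any particular product.

```python
from urllib.parse import urlparse

# Hypothetical allowlist; in practice this would come from policy or threat intelligence.
KNOWN_GOOD_DOMAINS = {"intranet.example.com", "sharepoint.example.com"}

def needs_second_chance(url: str) -> bool:
    """True when a link points at an unfamiliar domain and the user should be nudged to pause."""
    domain = urlparse(url).netloc.lower().removeprefix("www.")
    return domain not in KNOWN_GOOD_DOMAINS

def open_link(url: str, confirm) -> bool:
    """Open the link directly, or first show a 'second chance' prompt for unfamiliar domains."""
    if needs_second_chance(url):
        destination = urlparse(url).netloc
        return confirm(f"This link leads to an unfamiliar site ({destination}). Open it anyway?")
    return True

if __name__ == "__main__":
    # A console stand-in for the dialog a mail client or browser plug-in would show.
    proceed = open_link(
        "http://examp1e-payments.com/password-reset",
        lambda message: input(message + " [y/N] ").strip().lower() == "y",
    )
    print("Link opened." if proceed else "Paused: link not opened.")
```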
Tools such as system-based interventions, prompts, nudges or second-chance reminders are ways to introduce this friction and encourage pausing when it matters. 'Every day, employees face a growing wave of sophisticated, AI-powered attacks designed to exploit human vulnerabilities, not just technical ones. As attackers leverage automation, AI and social engineering at scale, traditional training just isn't effective enough.'

Protection requires layered defence

'Just as businesses manage technical vulnerabilities, they need to manage human risk – through a blend of policy, technology, culture, ongoing education, and personalised interventions,' says Collard. This layered approach extends beyond traditional training. System-based interventions – such as smart prompts, real-time nudges, and in-the-moment coaching – can slow users down at critical decision points, helping them make safer choices. Personalised micro-learning, tailored to an individual's role, risk profile, and behavioural patterns, adds another important layer of defence.

Crucially, Collard emphasises that zero trust shouldn't apply only to systems. 'We need to adopt the same principle with human behaviour,' she explains. 'Never assume awareness. Always verify understanding, and continuously reinforce it.' To make this concept more accessible, the acronym D.E.E.P. offers a framework for human-centric defence:

Defend: Use technology and policy to block as many threats as possible before they reach the user.
Educate: Deliver relevant, continuous training, simulations, and real-time coaching to build awareness and decision-making skills.
Empower: Foster a culture where employees feel confident to report incidents without fear of blame or repercussions.
Protect: Share threat intelligence transparently, and treat mistakes as learning opportunities, not grounds for shame.

'Fear-based security doesn't empower people,' she explains. 'It reinforces the idea that employees are weak points who need to be kept behind the frontline. But with the right support, they can be active defenders, and even your first line of defence.'

Empowered users are part of your security fabric

When people are trained, supported, and mentally prepared, not just lectured at once a year, they become a dynamic extension of your cybersecurity posture. They're not hiding behind the firewall; they are part of it. With attacks growing in scale and sophistication, it's not enough to rely on software alone. Businesses need a human layer that is just as adaptive, resilient, and alert. That means replacing blame culture with a learning culture. It means seeing people not as the problem, but as part of the solution. Because the truth is: the best defence isn't a perfect system. It's a well-prepared person who knows how to respond when something slips through.

'Human behaviour is beautifully complex,' Collard concludes. 'That's why a layered approach to HRM – integrating training, technology, processes and cognitive readiness – is essential. With the right support, employees can shift from being targets to becoming trusted defenders.'

Distributed by APO Group on behalf of KnowBe4.

A New Era of Manipulation: How Deepfakes and Disinformation Threaten Business (By Anna Collard)

Zawya

14-04-2025

By Anna Collard, SVP Content Strategy & Evangelist, KnowBe4 Africa

Last weekend, at a typical South African braai (barbecue), I found myself in a heated conversation with someone highly educated, yet passionately defending a piece of Russian propaganda that had already been widely debunked. It was unsettling. The conversation quickly became irrational, emotional, and very uncomfortable. That moment crystallised something for me: we're no longer just approaching an era where truth is under threat, we're already living in it. A reality where falsity feels familiar, and information is weaponised to polarise societies and manipulate our belief systems. And now, with the democratisation of AI tools like deepfakes, anyone with enough intent can impersonate authority, generate convincing narratives, and erode trust at scale.

The Evolution of Disinformation: From Election Interference to Enterprise Exploitation

The 2024 KnowBe4 Political Disinformation in Africa Survey revealed a striking contradiction: while 84% of respondents use social media as their main news source, 80% admit that most fake news originates there. Despite this, 58% have never received any training on identifying misinformation. This confidence gap echoes findings in the Africa Cybersecurity & Awareness 2025 Report, where 83% of respondents said they'd recognise a security threat if they saw one, yet 37% had fallen for fake news or disinformation, and 35% had lost money due to a scam. What's going wrong? It's not a lack of intelligence; it's psychology.

The Psychology of Believing the Untrue

Humans are not rational processors of information; we're emotional, biased, and wired to believe things that feel easy and familiar. Disinformation campaigns, whether political or criminal, exploit this.

The Illusory Truth Effect: The easier something is to process, the more likely we are to believe it, even if it's false (Unkelbach et al., 2019). Fake content often uses bold headlines, simple language, and dramatic visuals that 'feel' true.
The Mere Exposure Effect: The more often we see something, the more we tend to like or accept it, regardless of its accuracy (Zajonc, 1968). Repetition breeds believability.
Confirmation Bias: We're more likely to believe and even share false information when it aligns with our values or beliefs.

A recent example is the viral deepfake image of Hurricane Helene shared across social media. Despite fact-checkers clearly identifying it as fake, the post continued to spread. Why? Because it resonated with users' frustration and emotional frame of mind.

Deepfakes and State-Sponsored Deception

According to the Africa Centre for Strategic Studies, disinformation campaigns on the continent have nearly quadrupled since 2022. Even more troubling: nearly 60% are state-sponsored, often aiming to destabilise democracies and economies. The rise of AI-assisted manipulation adds fuel to this fire. Deepfakes now allow anyone to fabricate video or audio that's nearly indistinguishable from the real thing.

Why This Matters for Business

This isn't just about national security or political manipulation; it's about corporate survival too. Today's attackers don't need to breach your firewall. They can trick your people. This has already led to corporate-level losses, like the Hong Kong finance employee tricked into transferring over $25 million during a fake video call with deepfaked 'executives'. These corporate disinformation or narrative-based attacks can also take other forms: fake press releases can tank your stock, deepfaked CEOs can authorise wire transfers, and viral falsehoods can ruin reputations before PR even logs in.

The WEF's 2024 Global Risks Report named misinformation and disinformation as the top global risk, surpassing even climate and geopolitical instability. That's a red flag businesses cannot ignore. The convergence of state-sponsored disinformation, AI-enabled fraud, and employee overconfidence creates a perfect storm. Combating this new frontier of cyber risk requires more than just better firewalls. It demands informed minds, digital humility, and resilient cultures.

Building Cognitive Resilience

What can be done? While AI-empowered defences can help improve detection capabilities, technology alone won't save us. Organisations must also build cognitive immunity: the ability for employees to discern, verify, and challenge what they see and hear.

Adopt a Zero Trust Mindset – Everywhere: Just as systems don't trust a device or user by default, people should treat information the same way, with a healthy dose of scepticism. Encourage employees to verify headlines, validate sources, and challenge urgency or emotional manipulation, even when it looks or sounds familiar.
Introduce Digital Mindfulness Training: Train employees to pause, reflect, and evaluate before they click, share, or respond. This awareness helps build cognitive resilience, especially against emotionally manipulative or repetitive content designed to bypass critical thinking. Educate on deepfakes, synthetic media, AI impersonation, and narrative manipulation, and build understanding of how human psychology is exploited, not just technology.
Treat Disinformation Like a Threat Vector: Monitor for fake press releases, viral social media posts, or impersonation attempts targeting your brand, leaders, or employees. Include reputational risk in your incident response plans.

The battle against disinformation isn't just a technical one; it's psychological. In a world where anything can be faked, the ability to pause, think clearly, and question intelligently is a vital layer of security. Truth has become a moving target. In this new era, clarity is a skill that we need to hone.

Distributed by APO Group on behalf of KnowBe4.

Artificial Intelligence (AI) and AI-agents: A Game-Changer for Both Cybersecurity and Cybercrime (By Anna Collard)

Zawya

03-03-2025

By Anna Collard, SVP Content Strategy & Evangelist, KnowBe4 Africa

Artificial Intelligence is no longer just a tool: it is a game-changer in our lives and our work, as well as in both cybersecurity and cybercrime. While businesses leverage AI to enhance defences, cybercriminals are weaponising AI to make their attacks more scalable and convincing. Researchers forecast that in 2025, AI agents – autonomous AI-driven systems capable of performing complex tasks with minimal human input – will revolutionise both cyberattacks and cybersecurity defences. While AI-powered chatbots have been around for a while, AI agents go beyond simple assistants, functioning as self-learning digital operatives that plan, execute, and adapt in real time. These advancements don't just enhance cybercriminal tactics; they may fundamentally change the cybersecurity battlefield.

How Cybercriminals Are Weaponising AI: The New Threat Landscape

AI is transforming cybercrime, making attacks more scalable, efficient, and accessible. The WEF Artificial Intelligence and Cybersecurity Report (2025) highlights how AI has democratised cyber threats, enabling attackers to automate social engineering, expand phishing campaigns, and develop AI-driven malware. Similarly, the Orange Cyberdefense Security Navigator 2025 warns of AI-powered cyber extortion, deepfake fraud, and adversarial AI techniques. And the 2025 State of Malware Report by Malwarebytes notes that while GenAI has enhanced cybercrime efficiency, it hasn't yet introduced entirely new attack methods: attackers still rely on phishing, social engineering, and cyber extortion, now amplified by AI. However, this is set to change with the rise of AI agents – autonomous AI systems capable of planning, acting, and executing complex tasks – with major implications for the future of cybercrime. Here are some common (ab)use cases of AI by cybercriminals:

AI-Generated Phishing & Social Engineering: Generative AI and large language models (LLMs) enable cybercriminals to craft more believable and sophisticated phishing emails in multiple languages, without the usual red flags like poor grammar or spelling mistakes. AI-driven spear phishing now allows criminals to personalise scams at scale, automatically adjusting messages based on a target's online activity. AI-powered Business Email Compromise (BEC) scams are increasing, as attackers use AI-generated phishing emails sent from compromised internal accounts to enhance credibility. AI also automates the creation of fake phishing websites, watering-hole attacks and chatbot scams, which are sold as AI-powered 'crimeware-as-a-service' offerings, further lowering the barrier to entry for cybercrime.

Deepfake-Enhanced Fraud & Impersonation: Deepfake audio and video scams are being used to impersonate business executives, co-workers or family members to manipulate victims into transferring money or revealing sensitive data. The most famous 2024 incident involved UK-based engineering firm Arup, which lost $25 million after one of its Hong Kong-based employees was tricked by deepfaked executives in a video call. Attackers are also using deepfake voice technology to impersonate distressed relatives or executives, demanding urgent financial transactions.

Cognitive Attacks: Online manipulation, as defined by Susser et al. (2018), is 'at its core, hidden influence — the covert subversion of another person's decision-making power'.
AI-driven cognitive attacks are rapidly expanding the scope of online manipulation. Leveraging digital platforms, state-sponsored actors increasingly use generative AI to craft hyper-realistic fake content, subtly shaping public perception while evading detection. These tactics are deployed to influence elections, spread disinformation, and erode trust in democratic institutions. Unlike conventional cyberattacks, cognitive attacks don't just compromise systems; they manipulate minds, subtly steering behaviours and beliefs over time without the target's awareness. The integration of AI into disinformation campaigns dramatically increases the scale and precision of these threats, making them harder to detect and counter.

The Security Risks of LLM Adoption

Beyond misuse by threat actors, business adoption of AI chatbots and LLMs introduces its own significant security risks, especially when untested AI interfaces connect the open internet to critical backend systems or sensitive data. Poorly integrated AI systems can be exploited by adversaries and enable new attack vectors, including prompt injection, content evasion, and denial-of-service attacks. Multimodal AI expands these risks further, allowing hidden malicious commands in images or audio to manipulate outputs. Additionally, bias within LLMs poses another challenge, as these models learn from vast datasets that may contain skewed, outdated, or harmful biases. This can lead to misleading outputs, discriminatory decision-making, or security misjudgements, potentially exacerbating vulnerabilities rather than mitigating them. As LLM adoption grows, rigorous security testing, bias auditing, and risk assessment are essential to prevent exploitation and ensure trustworthy, unbiased AI-driven decision-making.

When AI Goes Rogue: The Dangers of Autonomous Agents

With AI systems now capable of self-replication, as demonstrated in a recent study, the risk of uncontrolled AI propagation or rogue AI – AI systems that act against the interests of their creators, users, or humanity at large – is growing. Security and AI researchers have raised concerns that these rogue systems can arise either accidentally or maliciously, particularly when autonomous AI agents are granted access to data, APIs, and external integrations. The broader an AI's reach through integrations and automation, the greater the potential threat of it going rogue, making robust oversight, security measures, and ethical AI governance essential in mitigating these risks.

The Future of AI Agents for Automation in Cybercrime

A more disruptive shift in cybercrime can and will come from AI agents, which transform AI from a passive assistant into an autonomous actor capable of planning and executing complex attacks. Google, Amazon, Meta, Microsoft, and Salesforce are already developing agentic AI for business use, but in the hands of cybercriminals its implications are alarming. These AI agents can be used to autonomously scan for vulnerabilities, exploit security weaknesses, and execute cyberattacks at scale. They can also allow attackers to scrape massive amounts of personal data from social media platforms, automatically compose and send fake executive requests to employees, or analyse divorce records across multiple countries to identify targets for AI-driven romance scams. These AI-driven fraud tactics don't just scale attacks; they make them more personalised and harder to detect.
Unlike current GenAI threats, agentic AI has the potential to automate entire cybercrime operations, significantly amplifying the risk.

How Defenders Can Use AI & AI Agents

Organisations cannot afford to remain passive in the face of AI-driven threats, and security professionals need to stay abreast of the latest developments. Here are some of the opportunities in using AI to defend against AI:

AI-Powered Threat Detection and Response: Security teams can deploy AI and AI agents to monitor networks in real time, identify anomalies, and respond to threats faster than human analysts can. AI-driven security platforms can automatically correlate vast amounts of data to detect subtle attack patterns that might otherwise go unnoticed, and support dynamic threat modelling, real-time network behaviour analysis, and deep anomaly detection. For example, as outlined by researchers at Orange Cyberdefense, AI-assisted threat detection is crucial as attackers increasingly use 'Living off the Land' (LOL) techniques that mimic normal user behaviour, making it harder for detection teams to separate real threats from benign activity. By analysing repetitive requests and unusual traffic patterns, AI-driven systems can quickly identify anomalies and trigger real-time alerts, allowing for faster defensive responses (a minimal sketch of this idea follows this list). However, despite the potential of AI agents, human analysts remain critical: their intuition and adaptability are essential for recognising nuanced attack patterns and for using real incident and organisational insights to prioritise resources effectively.

Automated Phishing and Fraud Prevention: AI-powered email security solutions can analyse linguistic patterns, metadata, and behavioural anomalies to identify AI-generated phishing attempts before they reach employees. AI can also flag unusual sender behaviour and improve detection of BEC attacks. Similarly, detection algorithms can help verify the authenticity of communications and prevent impersonation scams. AI-powered biometric and audio analysis tools detect deepfake media by identifying voice and video inconsistencies. However, real-time deepfake detection remains a challenge as the technology continues to evolve.

User Education & AI-Powered Security Awareness Training: AI-powered platforms (e.g., KnowBe4's AIDA) deliver personalised security awareness training, simulating AI-generated attacks to educate users on evolving threats, helping employees recognise deceptive AI-generated content and address their individual susceptibility factors and vulnerabilities.

Adversarial AI Countermeasures: Just as cybercriminals use AI to bypass security, defenders can employ adversarial AI techniques, for example deploying deception technologies – such as AI-generated honeypots – to mislead and track attackers, as well as continuously training defensive AI models to recognise and counteract evolving attack patterns.

Using AI to Fight AI-Driven Misinformation and Scams: AI-powered tools can detect synthetic text and deepfake misinformation, assisting fact-checking and source validation. Fraud detection models can analyse news sources, financial transactions, and AI-generated media to flag manipulation attempts.
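To make the anomaly-detection idea above concrete, the sketch below applies a simple statistical baseline to hourly request counts from a single account and flags hours that deviate sharply from normal behaviour. It is only a minimal illustration of the principle; the sample data, the three-standard-deviation threshold, and the hypothetical 'raise an alert' response are assumptions, and real platforms use far richer behavioural models.

```python
from statistics import mean, stdev

def flag_anomalies(counts: list[int], threshold: float = 3.0) -> list[int]:
    """Return the indices of hours whose request counts deviate from the
    baseline by more than `threshold` standard deviations."""
    baseline, spread = mean(counts), stdev(counts)
    if spread == 0:
        return []  # perfectly uniform traffic; nothing to flag
    return [hour for hour, count in enumerate(counts) if abs(count - baseline) / spread > threshold]

if __name__ == "__main__":
    # Hypothetical requests per hour from one internal account; hour 18 contains a burst
    # of repetitive requests of the kind automated tooling tends to produce.
    hourly_requests = [40, 38, 45, 41, 39, 44, 42, 37, 43, 40, 41, 39,
                       42, 44, 38, 40, 41, 43, 260, 42, 39, 40, 44, 41]
    for hour in flag_anomalies(hourly_requests):
        print(f"Hour {hour:02d}: {hourly_requests[hour]} requests looks anomalous; raise an alert.")
```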
Counter-attacks, as demonstrated by the research project Countercloud or O2 Telecom's AI agent 'Daisy', show how AI-based bots and real-time deepfake voice chatbots can be used against disinformation campaigns as well as scammers, engaging them in endless conversations to waste their time and reduce their ability to target real victims.

In a future where both attackers and defenders use AI, defenders need to be aware of how adversarial AI operates and how AI can be used to defend against such attacks. In this fast-paced environment, organisations need to guard against their greatest enemy, their own complacency, while at the same time considering AI-driven security solutions thoughtfully and deliberately. Rather than rushing to adopt the next shiny AI security tool, decision makers should carefully evaluate AI-powered defences to ensure they match the sophistication of emerging AI threats. Hastily deploying AI without strategic risk assessment could introduce new vulnerabilities, making a mindful, measured approach essential in securing the future of cybersecurity. To stay ahead in this AI-powered digital arms race, organisations should:

✅ Monitor both the threat and AI landscapes to stay abreast of the latest developments on both sides.
✅ Train employees frequently on the latest AI-driven threats, including deepfakes and AI-generated phishing.
✅ Deploy AI for proactive cyber defence, including threat intelligence and incident response.
✅ Continuously test your own AI models against adversarial attacks to ensure resilience.

Distributed by APO Group on behalf of KnowBe4.

Cybersecurity threats are mounting, but it's Gen Zs and Alphas who are introducing risks - not 'older folks'

Zawya

05-02-2025

By Anna Collard, SVP Content Strategy & Evangelist at KnowBe4 Africa

With growing cybersecurity concerns top of mind for many organisations this year, recognising the varying approaches that different generations take to digital safety is an important component of effective security cultures. Even though younger generations grew up in a hyperconnected world, their overconfidence and lax approach to cybersecurity precautions are potentially putting organisations at great risk. According to a 2022 survey by Ernst & Young (EY), almost half of Gen Z respondents (48%) say they take cybersecurity protection on their personal devices more seriously than on their work devices. The same survey found that Gen Z workers are far more likely than older employees to use the same password for professional and personal accounts and to ignore important IT updates.

Even though Gen Z (born between 1997 and 2012) and Gen Alpha (born after 2013) have grown up on a steady diet of tablets, smartphones, and social media, their vast exposure to the digital world – and the confidence it has brought – makes them increasingly susceptible to cyber threats, particularly in the face of AI-powered attacks. This vulnerability is evident from the fact that 72% admit to clicking on suspicious links at work, a figure far higher than among older generations.

Gen Z's elevated risk profile

Unlike millennials and older generations, Gen Z and Gen Alpha have grown up in a fully connected world. Their awareness of technology is instinctive rather than learned, and this has both negative and positive side effects. On the plus side, they may instinctively understand certain risks, but paradoxically they are therefore less concerned about them, such as when it comes to sharing personal information. These younger adults exhibit a classic case of the Dunning-Kruger effect: they overestimate their cybersecurity knowledge while lacking the competence needed to recognise that they are, in fact, not proficient. This may make them resistant to training from older generations, whom they feel know less about technology than they do.

Because they're more comfortable sending messages via social media, Gen Z and Gen Alpha are, for instance, more vulnerable to phishing emails. The EY survey found that despite being digital natives, only 31% of Gen Z respondents actually feel confident in identifying phishing emails. In addition, their love of media multitasking makes them more distracted and therefore more susceptible to social engineering threats. Another risk is that younger employees tend to mix personal and work devices, increasing organisations' exposure to security vulnerabilities. Moreover, digital-first employees may resist traditional security systems at work, viewing them as inefficient or unnecessary.

The key cybersecurity differences to be aware of among the various generations in the workplace are:

Millennials: More cautious, as they witnessed the rise of the internet and early cybercrime. They tend to follow traditional cybersecurity protocols, like password rotation and antivirus usage.
Gen Z/Alpha: Exhibit more trust in tech solutions like password managers, but are less vigilant with manual precautions. They are more reliant on AI-based protections and quick fixes, leading to assumptions that systems are inherently secure.

Building an intergenerational cybersecurity culture

Knowing younger generations' different approaches to learning and technology can make it easier for cybersecurity training programmes to really work. Forget old-school compliance training: standardised cybersecurity training might not connect well with Gen Z employees. If you want to grab their attention, use gamified learning platforms to make training interactive and fun. Not only will they be more engaged, but you'll be aligning the training with their tech-savvy nature and familiarity with social media, making it more impactful. Gen Z and Gen Alpha thrive on bite-sized content, being far more likely to consult TikTok to learn something new than to ask their parents. Organisations can take advantage of this by creating short, engaging, and mobile-friendly lessons that resonate with younger generations.

Another way to make cybersecurity risks hit home is by incorporating real-life examples into training sessions. Because younger employees may not fully understand the consequences of cyber risks, case studies are useful in pointing out the impact that cyberattacks can have on individuals and organisations, such as losing your job or costing the organisation millions of rands in damage.

Bridging this awareness gap can also be done by encouraging intergenerational collaboration at work. Younger employees can learn from the experience and insights of older workers while sharing valuable perspectives of their own. Mentorship and knowledge-exchange programmes, where experienced employees guide but also listen to and learn from Gen Z workers, will solidify your organisation's cybersecurity culture. This gap can also be bridged by encouraging collaborative learning: younger employees are far more likely to embrace cybersecurity initiatives when they feel involved and their input is actively welcomed.

By tailoring cybersecurity training to the unique characteristics and preferences of each generation, organisations can create more effective and engaging programmes. In this way, workplaces can cultivate a culture of shared responsibility and ongoing improvement by empowering Gen Z with a sense of ownership and autonomy.

Distributed by APO Group on behalf of KnowBe4.
