Latest news with #AnnaCollard


Arabian Post
3 days ago
- Business
Generative AI Tools Expose Corporate Secrets Through User Prompts
A significant portion of employee interactions with generative AI tools is inadvertently leaking sensitive corporate data, posing serious security and compliance risks for organisations worldwide. A comprehensive analysis by Harmonic Security, involving tens of thousands of prompts submitted to platforms such as ChatGPT, Copilot, Claude, Gemini, and Perplexity, revealed that 8.5% of these interactions contained sensitive information.

Notably, 45.77% of the compromised data pertained to customer information, including billing details and authentication credentials. Employee-related data, such as payroll records and personal identifiers, constituted 26.68%, while legal and financial documents accounted for 14.95%. Security-related information, including access keys and internal protocols, made up 6.88%, and proprietary source code comprised 5.64% of the sensitive data identified.

The prevalence of free-tier usage among employees exacerbates the risk. In 2024, 63.8% of ChatGPT users operated on the free tier, with 53.5% of sensitive prompts entered through these accounts. Similar patterns were observed across other platforms, with 58.62% of Gemini users, 75% of Claude users, and 50.48% of Perplexity users using free versions. These free tiers often lack robust security features, increasing the likelihood of data exposure.

Anna Collard, Senior Vice President of Content Strategy & Evangelist at KnowBe4 Africa, highlighted the unintentional nature of these data leaks. She noted that users often underestimate the sensitivity of the information they input into AI platforms, leading to inadvertent disclosures. Collard emphasised that the casual and conversational nature of generative AI tools can lower users' guards, resulting in the sharing of confidential information that, when aggregated, can be exploited by malicious actors for targeted attacks.

The issue is compounded by the lack of comprehensive governance policies within organisations. A study by Dimensional Research and SailPoint found that while 96% of IT professionals acknowledge the security threats posed by autonomous AI agents, only 54% have full visibility into AI agent activities, and a mere 44% have established governance policies. Furthermore, 23% of IT professionals reported instances where AI agents were manipulated into revealing access credentials, and 80% observed unintended actions by these agents, such as accessing unauthorised systems or sharing inappropriate data.

The rapid adoption of generative AI tools, driven by their potential to enhance productivity and innovation, has outpaced the development of adequate security measures. Organisations are now grappling with the challenge of balancing the benefits of AI integration with the imperative to protect sensitive data. Experts advocate for the implementation of stringent oversight mechanisms, including robust access controls and comprehensive user education programmes, to mitigate the risks associated with generative AI usage.
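The Harmonic analysis described above classifies prompts into sensitive-data categories. As a rough illustration of how such prompt scanning might work, here is a minimal sketch in Python; the regular-expression patterns, category names, and sample prompts are illustrative assumptions, not Harmonic Security's actual methodology.

```python
import re
from collections import Counter

# Illustrative patterns only; a production DLP scanner would use far more
# sophisticated detection (named-entity recognition, checksums, context).
PATTERNS = {
    "customer_data": re.compile(r"\b(invoice|billing|account number|customer id)\b", re.I),
    "credentials": re.compile(r"\b(password|api[_ ]?key|secret|token)\s*[:=]\s*\S+", re.I),
    "employee_data": re.compile(r"\b(payroll|salary|national id|passport number)\b", re.I),
    "legal_financial": re.compile(r"\b(nda|contract|term sheet|forecast|p&l)\b", re.I),
    "source_code": re.compile(r"\b(def |class |import |function\s*\()", re.I),
}

def classify_prompt(prompt: str) -> list[str]:
    """Return the sensitive-data categories detected in a single prompt."""
    return [name for name, pattern in PATTERNS.items() if pattern.search(prompt)]

def summarise(prompts: list[str]) -> None:
    """Print the share of prompts containing sensitive data, by category."""
    hits = Counter()
    flagged = 0
    for p in prompts:
        categories = classify_prompt(p)
        if categories:
            flagged += 1
            hits.update(categories)
    total = len(prompts)
    print(f"{flagged}/{total} prompts ({100 * flagged / total:.1f}%) contained sensitive data")
    for category, count in hits.most_common():
        print(f"  {category}: {count}")

if __name__ == "__main__":
    sample = [
        "Rewrite this proposal for client X, invoice 4491 attached",
        "Here is my api_key=sk-12345, why does the request fail?",
        "Summarise this article about cloud computing",
    ]
    summarise(sample)
```

Run against a log of submitted prompts, a classifier along these lines is what produces the per-category percentages the analysis reports.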

Zawya
3 days ago
- Business
Perilous prompts: How generative Artificial Intelligence (AI) is leaking companies' secrets
Beneath the surface of GenAI's outputs lies a massive, mostly unregulated engine powered by data – your data. And whether it's through innocent prompts or habitual oversharing, users are feeding these machines with information that, in the wrong hands, becomes a security time bomb.

A recent Harmonic report found that 8.5% of employee prompts to generative AI tools like ChatGPT and Copilot included sensitive data – most notably customer billing and authentication information – raising serious security, compliance, and privacy risks. Since ChatGPT's 2022 debut, generative AI has exploded in popularity and value – surpassing $25 billion in 2024 – but its rapid rise brings risks many users and organisations still overlook. 'One of the privacy risks when using AI platforms is unintentional data leakage,' warns Anna Collard, SVP Content Strategy & Evangelist at KnowBe4 Africa. 'Many people don't realise just how much sensitive information they're inputting.'

Your data is the new prompt
It's not just names or email addresses that get hoovered up. When an employee asks a GenAI assistant to 'rewrite this proposal for client X' or 'suggest improvements to our internal performance plan,' they may be sharing proprietary data, customer records, or even internal forecasts. If done via platforms with vague privacy policies or poor security controls, that data may be stored, processed, or – worst-case scenario – exposed.

And the risk doesn't end there. 'Because GenAI feels casual and friendly, people let their guard down,' says Collard. 'They might reveal far more than they would in a traditional work setting – interests, frustrations, company tools, even team dynamics.' In aggregate, these seemingly benign details can be stitched into detailed profiles by cybercriminals or data brokers – fuelling targeted phishing, identity theft, and sophisticated social engineering.

A surge of niche platforms, a bunch of new risks
Adding fuel to the fire is the rapid proliferation of niche AI platforms. Tools for generating product mock-ups, social posts, songs, resumes, or legalese are sprouting up at speed – many of them developed by small teams using open-source foundation models. While these platforms may be brilliant at what they do, they may not offer the hardened security architecture of enterprise-grade tools. 'Smaller apps are less likely to have been tested for edge-case privacy violations or undergone rigorous penetration tests and security audits,' says Collard. 'And many have opaque or permissive data usage policies.'

Even if an app's creators have no malicious intent, weak oversight can lead to major leaks. Collard warns that user data could end up in:
● Third-party data broker databases
● AI training sets without consent
● Cybercriminal marketplaces following a breach
In some cases, the apps might themselves be fronts for data-harvesting operations.

From individual oversights to corporate exposure
The consequences of oversharing aren't limited to the person typing the prompt. 'When employees feed confidential information into public GenAI tools, they can inadvertently expose their entire company,' explains Collard. 'That includes client data, internal operations, product strategies – things that competitors, attackers, or regulators would care deeply about.'
While unauthorised shadow AI remains a major concern, the rise of semi-shadow AI – paid tools adopted by business units without IT oversight – is increasingly risky, with free-tier generative AI apps like ChatGPT responsible for 54% of sensitive data leaks due to permissive licensing and lack of controls, according to the Harmonic report.

So, what's the solution? Responsible adoption starts with understanding the risk – and reining in the hype. 'Businesses must train their employees on which tools are OK to use, and what's safe to input and what isn't,' says Collard. 'And they should implement real safeguards – not just policies on paper. Cyber hygiene now includes AI hygiene. This should include restricting access to generative AI tools without oversight, or only allowing those approved by the company.'

'Organisations need to adopt a privacy-by-design approach when it comes to AI adoption,' she says. 'This includes only using AI platforms with enterprise-level data controls and deploying browser extensions that detect and block sensitive data from being entered.'

As a further safeguard, she believes internal compliance programmes should align AI use with both data protection laws and ethical standards. 'I would strongly recommend companies adopt ISO/IEC 42001, an international standard that specifies requirements for establishing, implementing, maintaining and continually improving an Artificial Intelligence Management System (AIMS),' she urges.

Ultimately, by balancing productivity gains with the need for data privacy and maintaining customer trust, companies can succeed in adopting AI responsibly. As businesses race to adopt these tools to drive productivity, that balance – between 'wow' and 'whoa' – has never been more crucial. Distributed by APO Group on behalf of KnowBe4.
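Collard's recommendation to deploy browser extensions that detect and block sensitive data before it is entered can be sketched as a simple pre-submission gate. The detection patterns, block-versus-redact policy, and example prompts below are illustrative assumptions rather than a description of any specific product; a real extension would hook into the browser and use far richer detection.

```python
import re

# Illustrative detection rules; a real DLP extension would use broader,
# vendor-maintained patterns and contextual analysis.
BLOCK_PATTERNS = [
    re.compile(r"\b(?:api[_-]?key|secret|password|token)\s*[:=]\s*\S+", re.I),  # credentials
    re.compile(r"\b\d{13,19}\b"),  # long digit runs that could be card numbers
]
REDACT_PATTERNS = [
    re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),  # email addresses
    re.compile(r"\b\+?\d[\d\s-]{8,}\d\b"),   # phone-like numbers
]

def gate_prompt(prompt: str) -> tuple[bool, str]:
    """Return (allowed, text). Credentials or card data block the prompt
    outright; personal identifiers are redacted before it is allowed through."""
    for pattern in BLOCK_PATTERNS:
        if pattern.search(prompt):
            return False, "Blocked: prompt appears to contain credentials or card data."
    cleaned = prompt
    for pattern in REDACT_PATTERNS:
        cleaned = pattern.sub("[REDACTED]", cleaned)
    return True, cleaned

if __name__ == "__main__":
    allowed, text = gate_prompt("Draft a reply to jane.doe@example.com about invoice 4491")
    print(allowed, text)   # True, with the email address redacted
    allowed, text = gate_prompt("Why does this fail? api_key=sk-live-12345")
    print(allowed, text)   # False, blocked before it reaches the AI tool
```

The design choice here mirrors the article's point: blocking only the clearly dangerous cases while redacting lower-risk identifiers keeps the tool usable, rather than forcing employees back into unmonitored shadow AI.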

Zawya
19-05-2025
The Digital Divide's Dark Side: Cybersecurity in African Higher Education (By Anna Collard)
By Anna Collard, SVP Content Strategy & Evangelist, KnowBe4 Africa

The digital revolution is transforming African education, with universities embracing online learning and digital systems. However, this progress brings a crucial challenge: cybersecurity. Are African higher education institutions (HEIs) prepared for the escalating cyber threats?

The Growing Threat Landscape
African HEIs are increasingly targeted by cybercriminals. Microsoft's Cyber Signals report highlights education as the third most targeted sector globally, with Africa being a particularly vulnerable region. Incidents like the theft of sensitive data at Tshwane University of Technology (TUT) and the hacking of a master's degree platform at Abdelmalek Essaadi University in Morocco demonstrate the reality of these threats.

Several factors contribute to HEI vulnerability. Universities hold vast amounts of sensitive data, including student records, research, and intellectual property. Their open nature, with diverse users and international collaborations, creates weaknesses, especially in email systems. Limited resources, legacy systems, and a lack of awareness further exacerbate these issues.

Examples of Cyber Threats in African Education
Educational institutions have fallen prey to social engineering and spoofing attacks. For example, universities in Mpumalanga and schools in the Eastern Cape have been notably victimised by cybercriminals using link-based ransomware attacks, with some institutions being locked out of their data for over a year. Earlier this year, the KwaZulu-Natal Department of Education warned against a cybercriminal scamming job seekers by falsely promising teaching posts in exchange for money and using photos with officials to appear legitimate.

Strategies for Strengthening Cybersecurity
African HEIs can take actionable steps to strengthen their cyber defences:
Establish Clear Policies: Define roles, responsibilities, and data security protocols
Provide Regular Training: Educate educators, administrators, and students to improve cyber hygiene and security culture
Implement Secure Access Management: Enforce multi-factor authentication (MFA) and secure login practices (see the sketch below)
Invest in Secure Technology Infrastructure: Include encrypted data storage, secure internet connections, and reliable software updates
Leverage AI and Advanced Technologies: AI can be utilised to enhance threat detection and enable real-time responses. Consider centralising tech setups for better monitoring
Adopt Comprehensive Cybersecurity Frameworks: Follow guidelines like those from the National Institute of Standards and Technology (NIST) and encourage phishing-resistant MFA, reducing hacking risks by over 99.9%
Human Risk Management as a Priority: Focus on security awareness training that includes simulated phishing and real-time interventions to change behaviour and mitigate human risk

Moving Forward
The cybersecurity challenges facing African HEIs are significant but not insurmountable. By adopting a human risk approach and acknowledging threats, implementing strong security measures, and fostering a positive security culture, we can protect institutions and ensure a secure digital learning environment. A collective effort involving institutions, governments, cybersecurity experts, and technology providers is crucial to safeguard the future of education in Africa.
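As a rough illustration of the Secure Access Management recommendation above, the following sketch verifies a time-based one-time password (TOTP) using the pyotp library; the enrolment flow and user handling are simplified assumptions. Phishing-resistant MFA, as encouraged above, would typically go further and rely on FIDO2/WebAuthn hardware-backed credentials rather than one-time codes.

```python
# Minimal TOTP-based MFA check (install with: pip install pyotp)
import pyotp

def enrol_user() -> str:
    """Generate a per-user secret; in practice this is stored server-side
    and shared with the user's authenticator app via a QR code."""
    return pyotp.random_base32()

def verify_login(secret: str, submitted_code: str) -> bool:
    """Verify the six-digit code the user submitted at login.
    valid_window=1 tolerates small clock drift between devices."""
    totp = pyotp.TOTP(secret)
    return totp.verify(submitted_code, valid_window=1)

if __name__ == "__main__":
    secret = enrol_user()
    # Simulate the authenticator app producing the current code.
    current_code = pyotp.TOTP(secret).now()
    print("Login accepted:", verify_login(secret, current_code))   # True
    print("Login accepted:", verify_login(secret, "000000"))       # almost certainly False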
As part of efforts to strengthen cybersecurity awareness in the education sector, KnowBe4 offers a Student Edition—a version of its platform tailored to the unique needs of educational institutions, providing age-appropriate, relevant security content and training solutions. This initiative is guided by an Advisory Council of global universities, including Nelson Mandela University in South Africa, ensuring the content remains practical, culturally relevant, and aligned with the realities of student life. Distributed by APO Group on behalf of KnowBe4.

Zawya
05-05-2025
- Business
Why Empowered People Are the Real Cyber Superpower
It's time to retire the tired narrative that employees are the 'weakest link' in cybersecurity. They're not. They're simply the most frequently targeted. And that makes sense – if you're a cybercriminal, why brute-force your way into secure systems when you can just trick a human? That is why over-relying on technical controls alone goes wrong. So does treating users like liabilities to be controlled, rather than assets to be empowered.

One of the core principles of Human Risk Management (HRM) is that it's not about shifting blame, but about enabling better decisions at every level. It's a layered, pragmatic strategy that combines technology, culture, and behaviour design to reduce human cyber risk in a sustainable way. And it recognises this critical truth: your people can be your greatest defence – if you equip them well. The essence of HRM is empowering individuals to make better risk decisions, but it's even more than that. 'With the right combination of tools, culture and security practices, employees become an extension of your security programme, rather than just an increased attack surface,' asserts Anna Collard, SVP Content Strategy & Evangelist at KnowBe4 Africa.

A recent IBM study revealed that more than 90% of all cybersecurity breaches can be traced back to human error, due to employees being successfully exploited through phishing scams, their use of weak passwords or non-optimal handling of sensitive data. Companies have long seen the upward trend in this threat, thanks to numerous studies, and consequently employees are often judged to be the biggest risk companies need to manage. This perspective, though, is denying businesses the opportunity to develop the best defence they could have: empowered, proactive employees at the frontline, not behind it.

Shield users – but also train them through exposure
Of course, the first thing companies should do is protect and shield employees from real threats. Prevention and detection technologies – email gateway filters, endpoint protection, AI-driven analysis – are essential to keeping malicious content from ever reaching users' inboxes or devices. But here's the catch: if users are never exposed to threats, they don't build the muscle to recognise them when they do get through.

Enter the prevalence effect – a cognitive bias which shows that the less frequently someone sees a threat (like a phishing email), the less likely they are to spot it when it finally appears. It's a fascinating and slightly counterintuitive insight: in trying to protect users too much, we may be making them more vulnerable. That's why simulated phishing campaigns and realistic training scenarios are so critical. They provide safe, controlled exposure to common attack tactics – so people can develop the reflexes, pattern recognition, and critical thinking needed to respond wisely in real situations.

Many of today's threats don't just rely on tech vulnerabilities – they exploit human attention. Attackers leverage stress, urgency, and distraction to bypass logic and trigger impulsive actions. Whether it's phishing, smishing, deepfakes, or voice impersonation scams, the aim is the same: manipulate humans to bypass scrutiny. That's why a foundational part of HRM is building what Collard calls digital mindfulness – the ability to pause, observe, and evaluate before acting. This isn't abstract wellness talk; it's a practical skill that helps people notice deception tactics in real time and stay in critical-thinking mode instead of reacting on autopilot.
Tools such as systems-based interventions, prompts, nudges or second-chance reminders are ways to induce this friction and encourage pausing when and if it matters. 'Every day, employees face a growing wave of sophisticated, AI-powered attacks designed to exploit human vulnerabilities, not just technical ones. As attackers leverage automation, AI and social engineering at scale, traditional training just isn't effective enough.'

Protection requires layered defence
'Just as businesses manage technical vulnerabilities, they need to manage human risk – through a blend of policy, technology, culture, ongoing education, and personalised interventions,' says Collard. This layered approach extends beyond traditional training. System-based interventions – such as smart prompts, real-time nudges, and in-the-moment coaching – can slow users down at critical decision points, helping them make safer choices; a minimal sketch of such a nudge appears at the end of this article. Personalised micro-learning, tailored to an individual's role, risk profile, and behavioural patterns, adds another important layer of defence.

Crucially, Collard emphasises that zero trust shouldn't apply only to systems. 'We need to adopt the same principle with human behaviour,' she explains. 'Never assume awareness. Always verify understanding, and continuously reinforce it.'

To make this concept more accessible, the acronym D.E.E.P. offers a framework for human-centric defence:
Defend: Use technology and policy to block as many threats as possible before they reach the user.
Educate: Deliver relevant, continuous training, simulations, and real-time coaching to build awareness and decision-making skills.
Empower: Foster a culture where employees feel confident to report incidents without fear of blame or repercussions.
Protect: Share threat intelligence transparently, and treat mistakes as learning opportunities, not grounds for shame.

'Fear-based security doesn't empower people,' she explains. 'It reinforces the idea that employees are weak points who need to be kept behind the frontline. But with the right support, they can be active defenders – and even your first line of defence.'

Empowered users are part of your security fabric
When people are trained, supported, and mentally prepared – not just lectured at once a year – they become a dynamic extension of your cybersecurity posture. They're not hiding behind the firewall; they are part of it. With attacks growing in scale and sophistication, it's not enough to rely on software alone. Businesses need a human layer that is just as adaptive, resilient, and alert. That means replacing blame culture with a learning culture. It means seeing people not as the problem, but as part of the solution. Because the truth is: the best defence isn't a perfect system. It's a well-prepared person who knows how to respond when something slips through.

'Human behaviour is beautifully complex,' Collard concludes. 'That's why a layered approach to HRM – integrating training, technology, processes and cognitive readiness – is essential. With the right support, employees can shift from being targets to becoming trusted defenders.' Distributed by APO Group on behalf of KnowBe4.
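As an illustration of the system-based interventions and second-chance reminders described above, the sketch below inserts a pause before a risky action, in this case emailing an attachment to an unapproved external domain. The allow-list, wording, and scenario are illustrative assumptions rather than any particular vendor's implementation.

```python
# Hypothetical "second chance" nudge: add friction before a risky send.
APPROVED_DOMAINS = {"example.com", "partner.example.org"}  # illustrative allow-list

def needs_second_chance(recipient: str, has_attachment: bool) -> bool:
    """Flag sends to unapproved external domains that carry attachments."""
    domain = recipient.rsplit("@", 1)[-1].lower()
    return has_attachment and domain not in APPROVED_DOMAINS

def confirm_send(recipient: str, has_attachment: bool, ask=input) -> bool:
    """Introduce friction: ask the user to pause and reconsider before sending."""
    if not needs_second_chance(recipient, has_attachment):
        return True
    answer = ask(
        f"You are about to send an attachment to {recipient}, an external address.\n"
        "Does the recipient really need this file? Type 'send' to continue: "
    )
    return answer.strip().lower() == "send"

if __name__ == "__main__":
    # Simulate user responses instead of blocking on real input.
    print(confirm_send("jane@unknown-vendor.net", True, ask=lambda _: "send"))   # True
    print(confirm_send("bob@unknown-vendor.net", True, ask=lambda _: "cancel"))  # False
    print(confirm_send("ann@example.com", True))  # approved domain, no nudge: True
```

The nudge never hard-blocks the user; it simply forces a deliberate pause, which is the behavioural goal the article describes.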

Zawya
14-04-2025
- Politics
A New Era of Manipulation: How Deepfakes and Disinformation Threaten Business (By Anna Collard)
By Anna Collard, SVP Content Strategy & Evangelist, KnowBe4 Africa

Last weekend, at a typical South African braai (barbecue), I found myself in a heated conversation with someone highly educated, yet passionately defending a piece of Russian propaganda that had already been widely debunked. It was unsettling. The conversation quickly became irrational, emotional, and very uncomfortable. That moment crystallised something for me: we're no longer just approaching an era where truth is under threat, we're already living in it. A reality where falsity feels familiar, and information is weaponised to polarise societies and manipulate our belief systems. And now, with the democratisation of AI tools like deepfakes, anyone with enough intent can impersonate authority, generate convincing narratives, and erode trust, at scale.

The Evolution of Disinformation: From Election Interference to Enterprise Exploitation
The 2024 KnowBe4 Political Disinformation in Africa Survey revealed a striking contradiction: while 84% of respondents use social media as their main news source, 80% admit that most fake news originates there. Despite this, 58% have never received any training on identifying misinformation. This confidence gap echoes findings in the Africa Cybersecurity & Awareness 2025 Report, where 83% of respondents said they'd recognise a security threat if they saw one, yet 37% had fallen for fake news or disinformation, and 35% had lost money due to a scam. What's going wrong? It's not a lack of intelligence, it's psychology.

The Psychology of Believing the Untrue
Humans are not rational processors of information; we're emotional, biased, and wired to believe things that feel easy and familiar. Disinformation campaigns, whether political or criminal, exploit this.
The Illusory Truth Effect: The easier something is to process, the more likely we are to believe it, even if it's false (Unkelbach et al., 2019). Fake content often uses bold headlines, simple language, and dramatic visuals that 'feel' true.
The Mere Exposure Effect: The more often we see something, the more we tend to like or accept it, regardless of its accuracy (Zajonc, 1968). Repetition breeds believability.
Confirmation Bias: We're more likely to believe and even share false information when it aligns with our values or beliefs.
A recent example is the viral deepfake image of Hurricane Helene shared across social media. Despite fact-checkers clearly identifying it as fake, the post continued to spread. Why? Because it resonated with users' frustration and emotional frame of mind.

Deepfakes and State-Sponsored Deception
According to the Africa Centre for Strategic Studies, disinformation campaigns on the continent have nearly quadrupled since 2022. Even more troubling: nearly 60% are state-sponsored, often aiming to destabilise democracies and economies. The rise of AI-assisted manipulation adds fuel to this fire. Deepfakes now allow anyone to fabricate video or audio that's nearly indistinguishable from the real thing.

Why This Matters for Business
This isn't just about national security or political manipulation, it's about corporate survival too. Today's attackers don't need to breach your firewall. They can trick your people. This has already led to corporate-level losses, like the Hong Kong finance employee tricked into transferring over $25 million during a fake video call with deepfaked 'executives.' These corporate disinformation or narrative-based attacks can also result in serious damage: Fake press releases can tank your stock.
Deepfaked CEOs can authorise wire transfers. Viral falsehoods can ruin reputations before PR even logs in. The WEF's 2024 Global Risks Report named misinformation and disinformation as the top global risk, surpassing even climate and geopolitical instability. That's a red flag businesses cannot ignore. The convergence of state-sponsored disinformation, AI-enabled fraud, and employee overconfidence creates a perfect storm. Combating this new frontier of cyber risk requires more than just better firewalls. It demands informed minds, digital humility, and resilient cultures.

Building Cognitive Resilience
What can be done? While AI-empowered defences can help improve detection capabilities, technology alone won't save us. Organisations must also build cognitive immunity: the ability for employees to discern, verify, and challenge what they see and hear.
Adopt a Zero Trust Mindset, Everywhere: Just as systems don't trust a device or user by default, people should treat information the same way, with a healthy dose of scepticism. Encourage employees to verify headlines, validate sources, and challenge urgency or emotional manipulation, even when it looks or sounds familiar.
Introduce Digital Mindfulness Training: Train employees to pause, reflect, and evaluate before they click, share, or respond. This awareness helps build cognitive resilience, especially against emotionally manipulative or repetitive content designed to bypass critical thinking. Educate on deepfakes, synthetic media, AI impersonation, and narrative manipulation. Build understanding of how human psychology is exploited, not just technology.
Treat Disinformation Like a Threat Vector: Monitor for fake press releases, viral social media posts, or impersonation attempts targeting your brand, leaders, or employees. Include reputational risk in your incident response plans.

The battle against disinformation isn't just a technical one; it's psychological. In a world where anything can be faked, the ability to pause, think clearly, and question intelligently is a vital layer of security. Truth has become a moving target. In this new era, clarity is a skill that we need to hone. Distributed by APO Group on behalf of KnowBe4.