Why Empowered People Are the Real Cyber Superpower
It's time to retire the tired narrative that employees are the 'weakest link' in cybersecurity. They're not. They're simply the most frequently targeted. And that makes sense – if you're a cybercriminal, why brute-force your way into secure systems when you can just trick a human?
That is why over-relying on technical controls alone goes wrong – and so does treating users like liabilities to be controlled rather than assets to be empowered.
A core principle of Human Risk Management (HRM) is that it is not about shifting blame, but about enabling better decisions at every level. It's a layered, pragmatic strategy that combines technology, culture, and behaviour design to reduce human cyber risk in a sustainable way. And it recognises this critical truth: your people can be your greatest defence – if you equip them well.
The essence of HRM is empowering individuals to make better risk decisions, but it's even more than that. 'With the right combination of tools, culture and security practices, employees become an extension of your security programme, rather than just an increased attack surface,' asserts Anna Collard, SVP Content Strategy & Evangelist at KnowBe4 Africa.
A recent IBM study revealed that more than 90% of all cybersecurity breaches can be traced back to human error (https://apo-opa.co/3GGeSBF): employees being successfully exploited through phishing scams, using weak passwords or mishandling sensitive data. Companies have long seen the upward trend in this threat, thanks to numerous studies, and consequently employees are often judged to be the biggest risk companies need to manage. This perspective, though, denies businesses the opportunity to develop the best defence they could have: empowered, proactive employees at the frontline, not behind it.
Shield users – but also train them through exposure
Of course, the first thing companies should do is protect and shield employees from real threats. Prevention and detection technologies – email gateway filters, endpoint protection, AI-driven analysis – are essential to keeping malicious content from ever reaching users' inboxes or devices. But here's the catch: if users are never exposed to threats, they don't build the muscle to recognise them when they do get through.
Enter the prevalence effect – a cognitive bias which shows that the less frequently someone sees a threat (like a phishing email), the less likely they are to spot it when it finally appears. It's a fascinating and slightly counterintuitive insight: in trying to protect users too much, we may be making them more vulnerable.
That's why simulated phishing campaigns and realistic training scenarios are so critical. They provide safe, controlled exposure to common attack tactics – so people can develop the reflexes, pattern recognition, and critical thinking needed to respond wisely in real situations.
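In practice, that controlled exposure is usually measured: who clicked the simulated lure, and who reported it. As a minimal, hypothetical sketch (all names and the tracking structure are illustrative, not any vendor's actual API), a campaign tracker might look like this:

```python
from dataclasses import dataclass, field

# Illustrative sketch: record outcomes of a simulated phishing campaign
# so exposure and follow-up training can be tuned per user.

@dataclass
class SimulationResult:
    user: str
    clicked: bool    # user clicked the simulated lure
    reported: bool   # user reported it to the security team

@dataclass
class Campaign:
    name: str
    results: list = field(default_factory=list)

    def record(self, user: str, clicked: bool, reported: bool) -> None:
        self.results.append(SimulationResult(user, clicked, reported))

    def click_rate(self) -> float:
        if not self.results:
            return 0.0
        return sum(r.clicked for r in self.results) / len(self.results)

    def report_rate(self) -> float:
        if not self.results:
            return 0.0
        return sum(r.reported for r in self.results) / len(self.results)

campaign = Campaign("Q3 invoice lure")
campaign.record("alice", clicked=False, reported=True)
campaign.record("bob", clicked=True, reported=False)
campaign.record("carol", clicked=False, reported=True)
print(f"click rate: {campaign.click_rate():.0%}, report rate: {campaign.report_rate():.0%}")
```

Tracking the report rate alongside the click rate matters: a falling click rate shows people avoiding lures, while a rising report rate shows them actively defending.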
Many of today's threats don't just rely on tech vulnerabilities – they exploit human attention. Attackers leverage stress, urgency, and distraction to bypass logic and trigger impulsive actions. Whether it's phishing, smishing, deepfakes, or voice impersonation scams, the aim is the same: manipulate humans to bypass scrutiny.
That's why a foundational part of HRM is building what Collard calls digital mindfulness – the ability to pause, observe, and evaluate before acting. This isn't abstract wellness talk; it's a practical skill that helps people notice deception tactics in real time and stay in a deliberate, critical-thinking mode instead of reacting on autopilot. Tools such as system-based interventions, prompts, nudges or second-chance reminders can introduce helpful friction to encourage pausing when and where it matters.
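To make the idea of a second-chance reminder concrete, here is a deliberately simplified, hypothetical sketch: it flags a message for a pause-and-confirm prompt only when an urgent tone coincides with a link to an unfamiliar domain. The cue list, allow-list and trigger rule are invented for illustration, not a production detection method.

```python
import re

# Hypothetical "second chance" nudge: attackers pair urgency with
# lookalike links to short-circuit deliberate thinking, so we add
# friction only when both cues fire together.

URGENCY_CUES = ("urgent", "immediately", "verify your account",
                "final warning", "act now")
TRUSTED_DOMAINS = {"example-corp.com"}  # stand-in for a real allow-list

def needs_pause(message: str, link: str) -> bool:
    """Return True if the user should see a pause-and-confirm prompt."""
    text = message.lower()
    urgency = any(cue in text for cue in URGENCY_CUES)
    match = re.search(r"https?://([^/]+)", link)
    domain = match.group(1).lower() if match else ""
    external = not any(domain.endswith(t) for t in TRUSTED_DOMAINS)
    return urgency and external

print(needs_pause("URGENT: verify your account today",
                  "http://paypa1-login.net/verify"))
print(needs_pause("Minutes from yesterday's meeting",
                  "https://intranet.example-corp.com/minutes"))
```

Requiring both signals keeps the friction rare enough that people don't learn to click through it reflexively – the nudge stays meaningful precisely because it is infrequent.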
'Every day, employees face a growing wave of sophisticated, AI-powered attacks designed to exploit human vulnerabilities, not just technical ones. As attackers leverage automation, AI and social engineering at scale, traditional training just isn't effective enough,' says Collard.
Protection requires layered defence
'Just as businesses manage technical vulnerabilities, they need to manage human risk – through a blend of policy, technology, culture, ongoing education, and personalised interventions,' says Collard.
This layered approach extends beyond traditional training. System-based interventions – such as smart prompts, real-time nudges, and in-the-moment coaching – can slow users down at critical decision points, helping them make safer choices. Personalised micro-learning, tailored to an individual's role, risk profile, and behavioural patterns, adds another important layer of defence.
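As a rough illustration of what "personalised micro-learning" could mean in code, the sketch below picks the next short module from a user's role and recently observed risk signals. The module catalogue, role names and selection rule are all invented for this example; real platforms use far richer behavioural models.

```python
# Hypothetical micro-learning selector: match short modules to a user's
# role and recent risk signals, preferring the quickest relevant one.

MODULES = {
    "phishing_basics":    {"risk": "clicked_simulation", "minutes": 5},
    "invoice_fraud":      {"risk": "clicked_simulation", "minutes": 7,
                           "roles": {"finance"}},
    "credential_hygiene": {"risk": "weak_password", "minutes": 4},
}

def next_module(role: str, recent_risks: set) -> str:
    """Choose the shortest module matching the user's role and risks."""
    candidates = []
    for name, meta in MODULES.items():
        role_ok = role in meta.get("roles", {role})  # unrestricted if no roles set
        if role_ok and meta["risk"] in recent_risks:
            candidates.append((meta["minutes"], name))
    if not candidates:
        return "general_refresher"  # generic fallback when nothing matches
    return min(candidates)[1]

print(next_module("finance", {"clicked_simulation"}))
```

The point of the sketch is the shape of the decision, not the rule itself: interventions are chosen per person, from observed behaviour, rather than broadcast identically to everyone.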
Crucially, Collard emphasises that zero trust shouldn't apply only to systems. 'We need to adopt the same principle with human behaviour,' she explains. 'Never assume awareness. Always verify understanding, and continuously reinforce it.'
To make this concept more accessible, Collard offers the acronym D.E.E.P., a framework for human-centric defence:
Defend: Use technology and policy to block as many threats as possible before they reach the user.
Educate: Deliver relevant, continuous training, simulations, and real-time coaching to build awareness and decision-making skills.
Empower: Foster a culture where employees feel confident to report incidents without fear of blame or repercussions.
Protect: Share threat intelligence transparently, and treat mistakes as learning opportunities, not grounds for shame.
'Fear-based security doesn't empower people,' she explains. 'It reinforces the idea that employees are weak points who need to be kept behind the frontline. But with the right support, they can be active defenders – and even your first line of defence.'
Empowered users are part of your security fabric
When people are trained, supported, and mentally prepared – not just lectured at once a year – they become a dynamic extension of your cybersecurity posture. They're not hiding behind the firewall; they are part of it.
With attacks growing in scale and sophistication, it's not enough to rely on software alone. Businesses need a human layer that is just as adaptive, resilient, and alert. That means replacing blame culture with a learning culture. It means seeing people not as the problem, but as part of the solution.
Because the truth is: the best defence isn't a perfect system. It's a well-prepared person who knows how to respond when something slips through.
'Human behaviour is beautifully complex,' Collard concludes. 'That's why a layered approach to HRM – integrating training, technology, processes and cognitive readiness – is essential. With the right support, employees can shift from being targets to becoming trusted defenders.'
Distributed by APO Group on behalf of KnowBe4.
