Broken Cyber Windows Theory (By Javvad Malik)

Zawya | 17-03-2025

By Javvad Malik, Lead Security Awareness Advocate at KnowBe4 (www.KnowBe4.com).
Have you ever walked down a street with broken windows, burnt-out cars, and graffiti, and felt a bit uneasy? There's a reason for that, and it's not just about aesthetics. The Broken Windows Theory, introduced by social scientists James Q. Wilson and George L. Kelling in 1982, suggests that visible signs of crime and antisocial behavior encourage further crime and disorder. But what does this have to do with cybersecurity? More than you might think.
The Cybersecurity Parallel: Neglected Digital Environments
In many organizations, cybersecurity awareness feels like a losing battle. Employees ignore security policies, download unapproved software, and use weak passwords. It's as if our digital environments are full of "broken windows," signaling a culture where no one really cares about security.
Traditional approaches often focus on punitive measures or dry, technical training that fails to engage employees. It's like trying to reduce crime by simply increasing fines, without addressing the underlying issues that make an area feel unsafe or neglected.
Applying the Broken Windows Theory to Cybersecurity
Just as fixing broken windows and cleaning up graffiti can reduce crime by fostering a sense of order and care, we can apply similar principles to our digital environments:
Create a Culture of Vigilance: Encourage employees to report potential security issues, no matter how small. This is like a neighborhood watch program for your network.
Address Small Issues Quickly: Respond promptly to minor security infractions. This shows that security is taken seriously at all levels.
Improve the "Look and Feel" of Security: Make security tools and processes user-friendly and aesthetically pleasing. A clean, well-designed security interface is like a well-maintained storefront.
Celebrate Security Wins: Publicly recognize employees who spot phishing attempts or follow good security practices. This is akin to community awards for neighborhood improvement.
Practical Steps for Implementation
Conduct a Digital Environment Audit
Walk through your organization's digital spaces as an average user would. Where are the "broken windows"? Look for outdated software, clunky security processes, or confusing policies.
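Part of that walk-through can be scripted. As a rough sketch (not from the article; the host names and the 30-day renewal threshold are placeholder assumptions), the following Python script uses only the standard library to flag soon-to-expire TLS certificates, one common digital "broken window":

```python
import socket
import ssl
import time

# Hypothetical hosts to audit -- replace with your organization's own.
HOSTS = ["intranet.example.com", "portal.example.com"]

def cert_days_remaining(host: str, port: int = 443) -> int:
    """Return the number of days until a host's TLS certificate expires."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    expires = ssl.cert_time_to_seconds(cert["notAfter"])
    return int((expires - time.time()) // 86400)

for host in HOSTS:
    try:
        days = cert_days_remaining(host)
        status = "OK" if days > 30 else "broken window: renew soon"
        print(f"{host}: {days} days left ({status})")
    except OSError as err:  # covers DNS, connection, and TLS failures
        print(f"{host}: check failed ({err})")
```

The same pattern extends to other checks an auditor might script, such as unreachable policy pages or software versions past end-of-life.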
Implement a "See Something, Say Something" Program
Create an easy way for employees to report potential security issues. Make it as simple as sending a quick message or clicking a button.
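As a minimal illustration of how small that reporting path can be (my sketch, not the article's; the webhook URL and the Slack-style JSON payload are assumptions), a report button could be wired to a handler like this:

```python
import json
import urllib.request

# Hypothetical webhook for your response team's chat channel -- a placeholder,
# not a real endpoint; substitute your own tooling.
WEBHOOK_URL = "https://chat.example.com/hooks/security-reports"

def report_issue(reporter: str, description: str) -> None:
    """Forward a one-click security report to the response channel."""
    payload = {"text": f"Security report from {reporter}: {description}"}
    req = urllib.request.Request(
        WEBHOOK_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=5) as resp:
        print(f"Report delivered (HTTP {resp.status})")

report_issue("j.smith", "Suspicious 'urgent invoice' email asking for gift cards")
```

The design point is friction: if reporting takes one function call behind one button, employees will use it; if it takes a ticket form, they won't.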
Redesign Security Communications
Transform your security awareness materials. Replace dense text with infographics, short videos, or even memes. Make security information as engaging as a well-designed public space.
Create Security Champions
Identify and empower individuals across departments to be security advocates. These champions can help maintain a secure "neighborhood" in their area of the organization.
Regular "Digital Community" Events
Host regular cybersecurity events that feel more like community gatherings than lectures. Think cybersecurity fairs, hacking demos, or even escape rooms with a security twist.
The Path to a Strong Security Culture
By applying the principles of the Broken Windows Theory to cybersecurity, we can create digital environments where security feels natural and everyone plays a part. It's not just about preventing breaches; it's about fostering a community where secure behavior is the norm.
As we move forward, let's reimagine our approach to cybersecurity awareness. Instead of building walls and enforcing rules, let's create digital neighborhoods where everyone takes pride in keeping things secure.
Every fixed "window" in your digital environment is a step towards a more secure future. So, let's roll up our sleeves and start cleaning up our digital streets. The neighborhood—and your data—will thank you.
Distributed by APO Group on behalf of KnowBe4.


Related Articles

Perilous prompts: How generative Artificial Intelligence (AI) is leaking companies' secrets

Zawya | 3 days ago

Beneath the surface of GenAI's outputs lies a massive, mostly unregulated engine powered by data – your data. And whether it's through innocent prompts or habitual oversharing, users are feeding these machines with information that, in the wrong hands, becomes a security time bomb. A recent Harmonic report found that 8.5% of employee prompts to generative AI tools like ChatGPT and Copilot included sensitive data – most notably customer billing and authentication information – raising serious security, compliance, and privacy risks. Since ChatGPT's 2022 debut, generative AI has exploded in popularity and value – surpassing $25 billion in 2024 – but its rapid rise brings risks many users and organisations still overlook. 'One of the privacy risks when using AI platforms is unintentional data leakage,' warns Anna Collard, SVP Content Strategy & Evangelist at KnowBe4 Africa. 'Many people don't realise just how much sensitive information they're inputting.'

Your data is the new prompt

It's not just names or email addresses that get hoovered up. When an employee asks a GenAI assistant to 'rewrite this proposal for client X' or 'suggest improvements to our internal performance plan,' they may be sharing proprietary data, customer records, or even internal forecasts. If done via platforms with vague privacy policies or poor security controls, that data may be stored, processed, or – worst-case scenario – exposed. And the risk doesn't end there. 'Because GenAI feels casual and friendly, people let their guard down,' says Collard. 'They might reveal far more than they would in a traditional work setting – interests, frustrations, company tools, even team dynamics.' In aggregate, these seemingly benign details can be stitched into detailed profiles by cybercriminals or data brokers – fuelling targeted phishing, identity theft, and sophisticated social engineering.

A surge of niche platforms, a bunch of new risks

Adding fuel to the fire is the rapid proliferation of niche AI platforms. Tools for generating product mock-ups, social posts, songs, resumes, or legalese are sprouting up at speed – many of them developed by small teams using open-source foundation models. While these platforms may be brilliant at what they do, they may not offer the hardened security architecture of enterprise-grade tools. 'Smaller apps are less likely to have been tested for edge-case privacy violations or undergone rigorous penetration tests and security audits,' says Collard. 'And many have opaque or permissive data usage policies.' Even if an app's creators have no malicious intent, weak oversight can lead to major leaks. Collard warns that user data could end up in:

● Third-party data broker databases
● AI training sets without consent
● Cybercriminal marketplaces following a breach

In some cases, the apps might themselves be fronts for data-harvesting operations.

From individual oversights to corporate exposure

The consequences of oversharing aren't limited to the person typing the prompt. 'When employees feed confidential information into public GenAI tools, they can inadvertently expose their entire company,' explains Collard. 'That includes client data, internal operations, product strategies – things that competitors, attackers, or regulators would care deeply about.'
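A simple pre-submission filter makes the idea concrete. The sketch below is my illustration, not anything from the report or from KnowBe4, and the patterns are deliberately naive placeholders for the kind of check a browser extension or gateway might run before a prompt leaves the organisation:

```python
import re

# Illustrative patterns only -- production DLP tools use far richer detection.
SENSITIVE_PATTERNS = {
    "payment card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "API key or token": re.compile(r"\b(?:sk|key|token)[-_][A-Za-z0-9]{16,}\b"),
}

def scan_prompt(prompt: str) -> list[str]:
    """Return the names of any sensitive-data patterns found in a prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]

prompt = "Rewrite this renewal email for billing@acme.example, card 4111 1111 1111 1111"
hits = scan_prompt(prompt)
if hits:
    print("Blocked: prompt appears to contain " + ", ".join(hits))
```

Real DLP tooling adds context awareness, validation such as Luhn checks for card numbers, and policy-driven responses, but even a crude filter makes the leakage risk visible to users.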
While unauthorised shadow AI remains a major concern, the rise of semi-shadow AI – paid tools adopted by business units without IT oversight – is increasingly risky, with free-tier generative AI apps like ChatGPT responsible for 54% of sensitive data leaks due to permissive licensing and lack of controls, according to the Harmonic report.

So, what's the solution?

Responsible adoption starts with understanding the risk – and reining in the hype. 'Businesses must train their employees on which tools are OK to use, and what's safe to input and what isn't,' says Collard. 'And they should implement real safeguards – not just policies on paper. Cyber hygiene now includes AI hygiene. This should include restricting access to generative AI tools without oversight, or only allowing those approved by the company.' 'Organisations need to adopt a privacy-by-design approach when it comes to AI adoption,' she says. 'This includes only using AI platforms with enterprise-level data controls and deploying browser extensions that detect and block sensitive data from being entered.' As a further safeguard, she believes internal compliance programmes should align AI use with both data protection laws and ethical standards. 'I would strongly recommend companies adopt ISO/IEC 42001, an international standard that specifies requirements for establishing, implementing, maintaining and continually improving an Artificial Intelligence Management System (AIMS),' she urges. Ultimately, by balancing productivity gains with the need for data privacy and maintaining customer trust, companies can succeed in adopting AI responsibly. As businesses race to adopt these tools to drive productivity, that balance – between 'wow' and 'whoa' – has never been more crucial.

Distributed by APO Group on behalf of KnowBe4.

The Digital Divide's Dark Side: Cybersecurity in African Higher Education (By Anna Collard)

Zawya | 19-05-2025

By Anna Collard, SVP Content Strategy & Evangelist, KnowBe4 Africa.

The digital revolution is transforming African education, with universities embracing online learning and digital systems. However, this progress brings a crucial challenge: cybersecurity. Are African higher education institutions (HEIs) prepared for the escalating cyber threats?

The Growing Threat Landscape

African HEIs are increasingly targeted by cybercriminals. Microsoft's Cyber Signals report highlights education as the third most targeted sector globally, with Africa being a particularly vulnerable region. Incidents like the theft of sensitive data at Tshwane University of Technology (TUT) and the hacking of a master's degree platform at Abdelmalek Essaadi University in Morocco demonstrate the reality of these threats.

Several factors contribute to HEI vulnerability. Universities hold vast amounts of sensitive data, including student records, research, and intellectual property. Their open nature, with diverse users and international collaborations, creates weaknesses, especially in email systems. Limited resources, legacy systems, and a lack of awareness further exacerbate these issues.

Examples of Cyber Threats in African Education

Educational institutions have fallen prey to social engineering and spoofing attacks. For example, universities in Mpumalanga and schools in the Eastern Cape have been notably victimised by cybercriminals using link-based ransomware attacks, with some institutions being locked out of their data for over a year. Earlier this year, the KwaZulu-Natal Department of Education warned against a cybercriminal scamming job seekers by falsely promising teaching posts in exchange for money and using photos with officials to appear legitimate.

Strategies for Strengthening Cybersecurity

African HEIs can take actionable steps to strengthen their cyber defenses:

Establish Clear Policies: Define roles, responsibilities, and data security protocols
Provide Regular Training: Educate educators, administrators, and students to improve cyber hygiene and security culture
Implement Secure Access Management: Enforce multi-factor authentication (MFA) and secure login practices (illustrated in the code sketch after this section)
Invest in Secure Technology Infrastructure: Include encrypted data storage, secure internet connections, and reliable software updates
Leverage AI and Advanced Technologies: AI can be utilised to enhance threat detection and enable real-time responses. Consider centralising tech setups for better monitoring
Adopt Comprehensive Cybersecurity Frameworks: Follow guidelines like those from the National Institute of Standards and Technology (NIST) and encourage phishing-resistant MFA, reducing hacking risks by over 99.9%
Human Risk Management as a Priority: Focus on security awareness training that includes simulated phishing and real-time interventions to change behaviour and mitigate human risk

Moving Forward

The cybersecurity challenges facing African HEIs are significant but not insurmountable. By adopting a human risk approach and acknowledging threats, implementing strong security measures, and fostering a positive security culture, we can protect institutions and ensure a secure digital learning environment. A collective effort involving institutions, governments, cybersecurity experts, and technology providers is crucial to safeguard the future of education in Africa.
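To illustrate the MFA item above, here is a minimal sketch (my illustration, not from the article; the hard-coded secret is a demo placeholder) of the RFC 6238 time-based one-time password (TOTP) computation that most authenticator apps perform, using only the Python standard library:

```python
import base64
import hmac
import struct
import time

def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
    """Compute the current RFC 6238 time-based one-time password."""
    key = base64.b32decode(secret_b32, casefold=True)
    # HMAC the number of elapsed 30-second periods since the Unix epoch.
    counter = struct.pack(">Q", int(time.time()) // period)
    digest = hmac.new(key, counter, "sha1").digest()
    # Dynamic truncation (RFC 4226): pick 4 bytes at an offset from the digest.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Demo secret for illustration only -- real deployments provision
# per-user secrets over a secure channel.
print("Current one-time code:", totp("JBSWY3DPEHPK3PXP"))
```

The server runs the same computation with the user's stored secret and accepts the login only if the codes match, which is what makes a stolen password alone insufficient.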
As part of efforts to strengthen cybersecurity awareness in the education sector, KnowBe4 offers a Student Edition—a version of its platform tailored to the unique needs of educational institutions, providing age-appropriate, relevant security content and training solutions. This initiative is guided by an Advisory Council of global universities, including Nelson Mandela University in South Africa, ensuring the content remains practical, culturally relevant, and aligned with the realities of student life. Distributed by APO Group on behalf of KnowBe4.

KnowBe4 Named GCC's 2025 Best Workplace In Technology

Channel Post MEA | 18-04-2025

KnowBe4 has announced that, for the second year in a row, it has been named a Best Workplace in Technology in the GCC countries for 2025 by Great Place to Work. With this award, KnowBe4 joins the ranks of globally recognised employers, a celebration of its unwavering commitment to exceptional company culture. Ranked 14th on the Best Workplaces in Technology list, KnowBe4 sees the recognition as a reflection of its culture of radical transparency, extreme ownership, and continuous professional growth. The Dubai team is leading the way in shaping the security awareness industry in the GCC, combining the agility of a start-up with the strength of a global organisation. This recognition reaffirms KnowBe4's dedication to empowering its people and fostering a workplace where innovation, engagement, and success go hand in hand.

'At KnowBe4, our people are the driving force behind our success,' says Ani Banerjee, chief human resources officer at KnowBe4. 'Receiving this award is a testament to our dedication to fostering a workplace where employees feel valued, supported, and empowered to grow. Through continuous investment in professional development, top-tier training programs, and impactful benefits such as tuition reimbursement and certification bonuses, we are committed to equipping our team with the resources they need to thrive.'

As a leading authority on workplace culture, Great Place to Work centers its methodology on a globally recognised framework that measures employees' experience of trust, pride and enjoyment within their organisation. Based on surveys of employees across the GCC countries, the list identifies the top companies in the region that foster a positive and inclusive workplace culture.
