
Latest news with #AnnaCollard

Digital gossip: When WhatsApp groups become cyber-risk zones

The Citizen

6 days ago



According to the 2025 KnowBe4 Africa Annual Cybersecurity Survey, 93% of African respondents use WhatsApp for work communications, surpassing email and Microsoft Teams. Despite their popularity among employees, informal messaging platforms pose significant risks to organisations' cybersecurity, warns Anna Collard, senior vice president of Content Strategy and Evangelist at KnowBe4 Africa.

'For many organisations, platforms like WhatsApp and Telegram have become integral to workplace communication. Ease of use is what makes them so popular,' explains Collard. 'Particularly on the continent, many people prefer WhatsApp because it's fast, familiar and frictionless. These apps are already on our phones and embedded in our daily routines.'

Convenience at a cost

Collard says that while it feels natural to ping a colleague on WhatsApp, especially when you're trying to get a quick answer, convenience often comes at the cost of control and compliance. In the US, plans for a top-secret military attack on Yemen were leaked on the messaging platform Signal earlier this year, inadvertently shared with a newspaper editor and other civilians, including the Defence Secretary's wife and brother.

'There are multiple layers of risk,' states Collard. 'It's important to remember that WhatsApp wasn't built for internal corporate use, but as a consumer tool. Because of that, it doesn't have the same business-level and privacy controls embedded in it that an enterprise communication tool, such as Microsoft Teams or Slack, would have.'

ALSO READ: South Africa remains a global hotspot for data breaches

Data leakage

Collard explains that the biggest risk for organisations is data leakage.
'Accidental or intentional sharing of confidential information, such as client details, financial figures, internal strategies or login credentials, on informal groups can have disastrous consequences. Informal platforms lack the audit trails necessary for compliance with regulations, particularly in industries like finance with strict data-handling requirements,' she says.

Identity theft

Phishing and identity theft are also threats. 'Attackers love platforms where identity verification is weak,' Collard says, adding that at least 10 people in her personal network have reported being victims of WhatsApp impersonation and takeover scams.

'Once the scammer gains access to the account, in many cases via SIM swaps, the real user is locked out, and they have access to all their previous communications, contacts and files,' she comments. 'They then impersonate the victim to deceive their contacts, often asking for money or even more personal information.'

ALSO READ: SA's Treasury discovers malware as hackers exploit Microsoft flaw

Mitigating risks

Beyond security, Collard explains, using these channels can also lead to inappropriate communication among employees or the blurring of work-life boundaries, resulting in burnout.

For organisations wanting to mitigate these risks, she says, it is important to set up a clear communications strategy. 'First, provide secure alternatives. Don't just tell people what not to use. Make sure that tools like Teams or Slack are easy to access and clearly endorsed.'

It is also vital, Collard adds, to educate employees on why secure communication matters. 'This training should include digital mindfulness principles, such as to pause before sending, think about what you're sharing and with whom, and be alert to emotional triggers like urgency or fear, as these are common tactics in social engineering attacks.'
By introducing approved communication tools, Collard says, organisations can benefit from additional security features such as audit logs, data protection, access control and integration with other business tools. 'Using approved platforms helps maintain healthy boundaries, so work doesn't creep into every corner of your personal life. It's about digital wellbeing as much as it is about cybersecurity.'

While informal messaging offers convenience, Collard maintains, its unchecked use introduces significant cyber risks. Organisations must move beyond simply acknowledging the problem and proactively implement clear policies, provide secure alternatives, and empower employees with the digital mindfulness needed to navigate these cyber-risk zones safely.

ALSO READ: Data breaches cost SA organisations over R360m in 3 years

Chats, hacks and cyber traps: When WhatsApp groups become serious cyber-risk zones

IOL News

7 days ago



The cybersecurity risks of informal messaging platforms in the workplace

The convenience and familiarity of informal messaging platforms like WhatsApp and Telegram have made them indispensable tools for many organisations. However, their widespread popularity among employees raises significant cybersecurity concerns, as highlighted by the 2025 KnowBe4 Africa Annual Cybersecurity Survey. The findings reveal that an overwhelming 93% of African respondents use WhatsApp for work communications, eclipsing traditional email and even Microsoft Teams. But what can organisations do to safeguard themselves against data leakage and other evolving threats?

According to Anna Collard, Senior Vice President of Content Strategy and Evangelist at KnowBe4 Africa, the comfort of using these applications is a driving force behind their adoption in workplaces. 'Particularly on the continent, many people prefer WhatsApp because it's fast, familiar, and frictionless,' she explains. In today's hybrid work environment, where collaboration is key, these platforms provide a quick and effective way for employees to connect. 'It feels natural to ping a colleague on WhatsApp, especially if you're trying to get a fast answer,' she adds. However, that convenience can come at a serious cost to control and compliance.

Informal messaging, formal risks

Recent incidents have illuminated the dangers of using these informal channels for professional communications. Notably, WhatsApp messages have been cited as evidence in employment tribunals, indicating the gravity of what can transpire in a seemingly harmless chat. The British bank NatWest has taken the bold step of banning WhatsApp communications among its staff, signalling a growing recognition of the associated perils.
Furthermore, the alarming leak of a US military operation's details via Signal, an informal messaging app, underlines how these platforms can pose threats beyond the corporate realm. Collard points out that informal messaging apps were not designed with corporate usage in mind and lack the essential privacy and business-level controls found in more secure tools like Microsoft Teams or Slack. 'Organisations face multiple layers of risk,' she warns.

The spectre of data leakage stands at the forefront, with accidental or intentional sharing of sensitive information, such as client details and financial data, threatening to devastate corporate integrity and client trust. 'It's also completely beyond the organisation's control, creating a shadow IT problem,' she notes. Alarmingly, the 2025 survey revealed that 80% of respondents rely on personal devices for work, many of which remain unmanaged, creating significant blind spots for organisations.

Additionally, the absence of an audit trail on these platforms can jeopardise compliance with industry-specific regulations. This is particularly relevant to sectors such as finance, where meticulous data handling is obligatory. Coupled with vulnerabilities to phishing and identity theft, where criminals exploit weak identity verification on these platforms, organisations find themselves in precarious territory. As Collard observes, numerous individuals have fallen prey to WhatsApp impersonation scams, with attackers capitalising on a compromised account to manipulate the unsuspecting user's contacts.

This concern extends beyond security threats alone; informal use of messaging platforms can also lead to inappropriate employee interactions and blur the boundaries between professional and personal life, contributing to workplace burnout. 'A constant stream of messages can disrupt focus and ultimately lower productivity,' says Collard.
Having the right guardrails in place

To mitigate these risks, it is crucial for organisations to establish clear communication strategies. 'First, provide secure alternatives,' Collard advises. Rather than merely prohibiting informal tools, businesses should make secure platforms like Teams or Slack simple to access.

Employee education is also paramount. Training should cover the significance of secure communication, focusing on digital mindfulness principles: encouraging employees to pause and consider what they are sharing and with whom, and to remain vigilant against emotional triggers such as urgency, which are often exploited in social engineering attacks. Cultivating a culture of psychological safety is essential, so that employees feel empowered to question odd requests, even when they originate from higher-ups.

Introducing approved communication tools also brings enhanced security features, incorporating capabilities such as audit logs, data protection, and access control. These secure platforms foster healthier communication practices, allowing employees to schedule messages and set availability statuses, thereby preserving work-life boundaries and enhancing overall digital wellbeing.

In conclusion, while informal messaging platforms offer enticing convenience, their unchecked use can usher in significant cybersecurity risks. As Collard underscores, organisations must move beyond mere acknowledgment of the issue and proactively implement robust policies, offer secure alternatives, and empower employees with the digital mindfulness necessary to navigate these cyber-risk zones safely.

IOL

Business-critical mails in spam folders: Why real emails look fake now

Zawya

30-06-2025



In the fight against phishing, forward-thinking organisations are winning. But there's a twist: the heightened vigilance that has empowered employees to detect suspicious emails is creating a new dilemma. Legitimate, business-critical messages are being flagged, ignored, or buried in spam folders. And in today's AI-fuelled cyber landscape, that reaction may be as justified as it is damaging.

Phishing works, and it's reshaping trust

The release of generative AI tools has supercharged phishing attempts. KnowBe4's Phishing Threat Trend Report 2025 shows that more than 80% of the analysed phishing emails were augmented by AI, and they're far more convincing than before. 'The gut-check we used to rely on has been gamed – and even the large language models now being explored to help detect suspicious emails are struggling,' says Anna Collard, SVP of Content Strategy & Evangelist at KnowBe4 Africa. 'They're forced to dig deeper, assessing tone, context, and subtler red flags.'

Suspicion is now the default

The result? Suspicion is now the default, and it's not unwarranted. Maturing cybersecurity awareness and phishing simulation programmes have helped sharpen employees' scepticism. But this success has revealed a new problem: overcorrection. Emails that are real, from HR, IT, legal, or sales, are now increasingly being misjudged. In some cases, they're wrongly flagged as phishing by either people or systems. In others, they're simply ignored.

The irony is that some of the most common and legitimate corporate communication traits are now the very ones that raise red flags:

  • Urgency: 'Sign this by COB today', or when every email from a colleague is marked 'urgent'
  • Unexpected senders: e.g. HR tools or SaaS platforms
  • Calls to action: 'Click here to confirm'
  • Stylistic quirks: overly polished copy, too many links or bold phrases
  • Tech misalignments: emails from legitimate senders failing DMARC or DKIM checks

'Even just using a third-party sender domain can cause confusion,' says Collard.
'If staff don't expect it – or don't recognise the platform – the message can get flagged.' For good reason, too: according to KnowBe4's Phishing Threat Trend Report, the top five legitimate platforms abused to send out phishing emails include popular business tools such as DocuSign, PayPal, Microsoft, Google Drive, and Salesforce.

The cost of false positives

When real emails get sidelined, the impact is more than a missed message. Delayed IT updates, ignored HR deadlines, and lost sales opportunities can create serious ripple effects across operations. Deliverability issues also erode trust. And in high-stakes environments like healthcare, legal services or finance, false positives can become costly very quickly.

So, how do you write emails that get read, not flagged? To combat this growing challenge, organisations need to stop thinking of phishing risk as purely a recipient problem. Legitimate internal emails need to look legitimate too. Here's how every team, from HR to IT to marketing, can write more trustworthy emails.

Write Like a Human, Deliver Like a Pro

  • Subject lines should set expectations. Use clear, predictable language. Instead of 'IMPORTANT: Read this now!', try 'Reminder: Benefits enrollment closes Friday'.
  • Lead with context before asking for action. Start with a reference point: 'You recently submitted a travel claim...' or 'As part of your onboarding...'.
  • Limit urgency to what's truly urgent. Too many 'ASAP's will breed indifference. Use urgency sparingly, and explain why it matters. Remember: if everything is urgent, nothing is.
  • Minimise links and avoid vague CTAs. Avoid phrases like 'click here' or hyperlinking whole sentences. Provide a fallback path: 'Or log into your dashboard directly.'
  • Be cautious with tone and formatting. Avoid shouty subject lines, gimmicky language, or inconsistent formatting that can trigger filters.
  • Test before sending. Run your email through spam-filter testing tools to see what might flag it.

Get your digital paperwork in order

Even the best-written email may never reach its recipient if your authentication protocols aren't properly configured. SPF, DKIM, and DMARC are three essential technical settings that help prove your email really came from your domain.

  • SPF tells email providers which servers are allowed to send emails using your domain name, helping stop spammers from pretending to be you.
  • DKIM adds a digital signature to your emails to prove they really came from you and weren't changed along the way.
  • DMARC brings SPF and DKIM together by setting rules for what to do with suspicious emails (such as sending them to spam or blocking them) and sends reports to your IT team so they can spot abuse.

'These protocols are a bit like a digital passport,' Collard explains. 'Without them, even a genuine email may not make it through.'

But even technically sound emails can fall flat if they don't look legitimate to the reader. That's why it's just as important to consider how your internal teams craft and send messages.

Internal brand security: don't just train recipients – train senders too

Cyber awareness is often focused on detection. But to maintain deliverability and trust, sender behaviour matters too. Teach teams to avoid accidental red flags. Share templates and subject line guides. And ensure that employees, especially those sending to large groups, understand the basics of trustworthy communication.

Consistency is key. Make sure communications come from the same official addresses, follow familiar formats, and maintain a recognisable tone. This teaches recipients what to expect, and what to be cautious of, building a clearer line between legitimate messages and possible fakes.

'This is part of internal brand hygiene,' says Collard.
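To make the three authentication records described above concrete, here is a rough sketch of what SPF, DKIM, and DMARC entries can look like as DNS TXT records. Everything here is illustrative: example.com, the selector name, and the report address are placeholders, the DKIM public key is truncated, and the actual values depend on your mail provider.

```
; SPF: only the listed provider's servers may send mail as example.com
example.com.                       IN TXT "v=spf1 include:_spf.mailprovider.example ~all"

; DKIM: public key receivers use to verify the signature on outgoing mail
selector1._domainkey.example.com.  IN TXT "v=DKIM1; k=rsa; p=MIGfMA0GCSqGSIb3..."

; DMARC: quarantine mail failing SPF/DKIM and report abuse to IT
_dmarc.example.com.                IN TXT "v=DMARC1; p=quarantine; rua=mailto:dmarc-reports@example.com"
```

Receiving mail servers look these records up automatically; your IT team or DNS host is the right place to set them, and to verify them before relying on the policy.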
'When your team consistently communicates clearly and predictably, you build trust over time – with both employees and clients. That trust makes your emails easier to recognise, safer to deliver, and more likely to be opened.'

In a world where AI can impersonate your tone and template with ease, your best defence is to sound like yourself, and to help others know what to expect when you speak.

Distributed by APO Group on behalf of KnowBe4.

Contact details:
Anne Dolinschek, KnowBe4, Email: anned@
TJ Coenraad, Red Ribbon, Email: tj@

Social Engineering 2.0: When artificial intelligence becomes the ultimate manipulator

Zawya

16-06-2025



Once the domain of elite spies and con artists, social engineering is now in the hands of anyone with an internet connection – and AI is the accomplice. Supercharged by generative tools and deepfake technology, today's social engineering attacks are no longer sloppy phishing attempts. They're targeted, psychologically precise, and frighteningly scalable. Welcome to Social Engineering 2.0, where the manipulators don't need to know you personally. Their AI already does.

Deception at machine levels

Social engineering works because it bypasses firewalls and technical defences. It attacks human trust. From fake bank alerts to long-lost Nigerian princes, these scams have traditionally relied on generic hooks and low-effort deceit. But that has changed, and continues to.

'AI is augmenting and automating the way social engineering is carried out,' says Anna Collard, SVP of Content Strategy & Evangelist at KnowBe4 Africa. 'Traditional phishing markers like spelling errors or bad grammar are a thing of the past. AI can mimic writing styles, generate emotionally resonant messages, and even recreate voices or faces – all within minutes.'

The result? Cybercriminals now wield the capabilities of psychological profilers. By scraping publicly available data, from social media to company bios, AI can construct detailed personal dossiers. 'Instead of one-size-fits-all lures, AI enables criminals to create bespoke attacks,' Collard explains. 'It's like giving every scammer access to their own digital intelligence agency.'

The new face of manipulation: deepfakes

One of the most chilling evolutions of AI-powered deception is the rise of deepfakes: synthetic video and audio designed to impersonate real people. 'There are documented cases where AI-generated voices have been used to impersonate CEOs and trick staff into wiring millions,' notes Collard.
In South Africa, a recent deepfake video circulating on WhatsApp featured a convincingly faked endorsement by FSCA Commissioner Unathi Kamlana promoting a fraudulent trading platform; Nedbank had to publicly distance itself from the scam. 'We've seen deepfakes used in romance scams, political manipulation, even extortion,' says Collard. One emerging tactic involves simulating a child's voice to convince a parent they've been kidnapped, complete with background noise, sobs, and a fake abductor demanding money. 'It's not just deception anymore,' Collard warns. 'It's psychological manipulation at scale.'

The Scattered Spider effect

One cybercrime group exemplifying this threat is Scattered Spider. Known for its fluency in English and deep understanding of Western corporate culture, the group specialises in highly convincing social engineering campaigns. 'What makes them so effective,' notes Collard, 'is their ability to sound legitimate, form quick rapport, and exploit internal processes – often tricking IT staff or help-desk agents.' Their human-centric approach, amplified by AI tools such as audio deepfakes that spoof victims' voices to obtain initial access, shows how the combination of cultural familiarity, psychological insight, and automation is redefining what cyber threats look like. It's not just about technical access; it's about trust, timing, and manipulation.

Social engineering at scale

What once required skilled con artists days or weeks of interaction – establishing trust, crafting believable pretexts, and subtly nudging behaviour – can now be done by AI in the blink of an eye. 'AI has industrialised the tactics of social engineering,' says Collard. 'It can perform psychological profiling, identify emotional triggers, and deliver personalised manipulation with unprecedented speed.' The classic stages of reconnaissance, pretexting and rapport-building are now automated, scalable, and tireless.
Unlike human attackers, AI doesn't get sloppy or fatigued; it learns, adapts, and improves with every interaction. The biggest shift? 'No one has to be a high-value target anymore,' Collard explains. 'A receptionist, an HR intern, or a help-desk agent – all may hold the keys to the kingdom. It's not about who you are; it's about what access you have.'

Building cognitive resilience

In this new terrain, technical solutions alone won't cut it. 'Awareness has to go beyond "don't click the link",' says Collard. She advocates building 'digital mindfulness' and 'cognitive resilience': the ability to pause, interrogate context, and resist emotional triggers. This means:

  • Training staff to recognise emotional manipulation, not just suspicious URLs.
  • Running simulations using AI-generated lures, not outdated phishing templates.
  • Rehearsing calm, deliberate decision-making under pressure, to counter panic-based manipulation.

Collard recommends unconventional tactics, too. 'Ask HR interviewees to place their hand in front of their face during video calls – it can help spot deepfakes in hiring scams,' she says. Families and teams should also consider pre-agreed code words or secrets for emergency communications, in case AI-generated voices impersonate loved ones.

Defence in depth – human and machine

While attackers now have AI tools, so too do defenders. Behavioural analytics, real-time content scanning, and anomaly detection systems are evolving rapidly. But Collard warns: 'Technology will never replace critical thinking. The organisations that win will be the ones combining human insight with machine precision.' And with AI lures growing more persuasive, the question is no longer whether you'll be targeted, but whether you'll be prepared.

'This is a race,' Collard concludes. 'But I remain hopeful. If we invest in education, in critical thinking and digital mindfulness, in the discipline of questioning what we see and hear – we'll have a fighting chance.'
Distributed by APO Group on behalf of KnowBe4.

Eskom launches AI chatbot 'Alfred' to speed up fault reporting

The Citizen

12-06-2025



Eskom has taken a step into the future with the launch of Alfred, an innovative artificial intelligence (AI)-driven chatbot designed to enhance and expedite customer service interactions. The parastatal has faced backlash over its lack of service and slow response to complaints, which often leaves people in the dark, angry, and frustrated.

What is Alfred for?

Eskom aims to use Alfred to minimise queues and provide a safer, more efficient experience. Alfred allows customers to report power outages, receive instant reference numbers, and get real-time updates on existing faults, any time of day or night. 'Alfred makes your interactions seamless, fast, socially distanced and safe. Utilising artificial intelligence to enhance and speed up customer service, Eskom customers can now report a power loss, get a reference number within seconds and get progress feedback on an existing fault – any time of day or night,' the utility said.

ALSO READ: Report reveals alarming collection of data by AI chatbots

Where is Alfred?

Alfred can be found on Eskom's main page, via the Chatbot icon on the top menu, and on WhatsApp at 08600 37566. 'Eskom's Alfred is specifically for customers who can use their account or meter number to interact with the chatbot. Once engaged, Alfred allows you to log a power interruption as it happens and provides a reference number for your report. This makes it easy to track the progress of faults and stay informed without the need for long queues or phone calls,' Eskom said. Users are advised to provide accurate information when seeking assistance.

Chatbots

Meanwhile, The Citizen previously reported that chatbots can help diminish long queues and lengthy telephone calls to resolve queries at your bank, municipality, and telephone company.
The rise of advanced language models such as ChatGPT has ushered in a new era of human-like interactions, where chatbots can engage in natural conversations, solve complex problems, and even exhibit creative thinking. This remarkable progress has opened up a world of possibilities, but it also raises concerns about the reliability and accountability of these systems, Anna Collard, Senior Vice President of Content Strategy and Evangelist at KnowBe4 Africa, has warned.

Authentication

Collard said that while she likes using chatbots, she always double-checks the original sources when using them for research, to ensure accurate data. She added that chatbots handling sensitive transactions, such as banking queries, should authenticate users before accessing or sharing any personal information.

ALSO READ: Eskom winter outlook: Here's how many days of load shedding to expect in SA
