
The silence of 'digital bystanders' amplifies hate online

New Straits Times

09-07-2025


IN my work at the intersection of content regulation and media ethics, I've come to believe that the most dangerous element in an online space is not always the person posting hateful content. It's the silence that surrounds it.

To the algorithms curating our online experiences, silence on harmful content suggests engagement or approval. No reports, no objections, no friction? It must be fine. And to people within those digital communities, silence can feel like social permission. If no one's saying anything, maybe it isn't that bad. Maybe it's even true. This is how hate gets normalised — not with a bang, but with a shrug.

The phenomenon is the digital version of the bystander effect, where the more people witness harm, the less likely any one person is to intervene. Online, that passivity is multiplied and masked by anonymity. Research shows that in extremist chat groups, more than 80 per cent of users never post anything themselves. They don't initiate hate, but they're there — watching, clicking, sharing. In doing so, they help keep those ecosystems alive.

In Malaysia, research by The Southeast Asia Regional Centre for Counter-Terrorism has found that radical content spreads fastest not on public platforms, but in private or semi-private spaces — unmoderated Telegram groups, closed chat rooms and fringe forums. These aren't necessarily echo chambers of hardened extremists. They're often filled with regular people: friends, colleagues and acquaintances who may disagree with what they see, but stay silent nonetheless.

It's no surprise, then, that many countries have turned to deplatforming as a response. Takedowns, content moderation, algorithm tweaks — these are essential tools in any regulatory arsenal. But they're not silver bullets. And without thoughtful communication, they can backfire. Remove extremist content too swiftly, and its creators are recast as martyrs. Ban a channel without explanation, and you leave behind a vacuum that quickly fills with conspiracy theories. Kick bad actors off a platform and they don't disappear — they just migrate to more opaque, harder-to-monitor spaces. Simply removing content doesn't remove its influence.

This is where counterspeech becomes not just useful, but necessary. Counterspeech means responding to harmful content with facts, empathy, questions or alternative narratives. It works best when it's fast, authentic and comes from peers, not just authorities. The idea is not to out-shout hate, but to interrupt it — early, calmly and effectively.

And there's data to back it up. In Sri Lanka, a PeaceTech Lab pilot found that engagement with hate content dropped by 46 per cent when counterspeech was introduced early. In Germany, civic volunteers who replied to hateful YouTube comments within the first hour helped reduce hate-driven threads by 17 per cent. These aren't massive interventions. They're small, consistent disruptions, and they matter.

At the Content Forum, we're building on this idea in many of our initiatives — from suicide content guidelines to training with influencers, and in ongoing efforts to educate children and parents about digital friction and media literacy. The goal is to give more people the tools and confidence to speak up before harm escalates.

Still, the question remains: why don't more people do it? In every training I run, I hear the same three reasons: "I don't want to be attacked", "It's not my place", and "It won't make a difference". The truth is, counterspeech doesn't require you to win the argument. You don't need to craft the perfect reply or go viral with your response. You just need to say something. Even a simple "Are we sure this is okay?" is enough to interrupt the flow. It breaks the momentum. It breaches the echo chamber, and often, that's all it takes.

Whether we realise it or not, silence online isn't just abstention — it's influence. It tells the algorithm: this is fine. It tells the community: no one minds. It tells the extremist: no one will stop you. But that doesn't have to be the message we keep sending. If harmful beliefs thrive in silence, then perhaps disruption begins not with noise or outrage, but with clarity, courage and consistency. We don't need everyone to be loud — social media is loud enough as it is. We just need more people to stop being quiet.
