Commentary: Social media regulation should protect users, not push them out
SINGAPORE: Across the Asia Pacific, governments are tightening the rules around who gets to use social media.
In Vietnam, users must now verify their accounts with a national ID or local phone number under Decree 147. Malaysia in January began requiring social media platforms to obtain operating licences. Indonesia is considering a minimum age of 18, while Australia has already banned children under 16.
These aren't just rules about what you can post. They are rules about who gets to participate.
The shift is subtle but significant: from regulating content to regulating access. Whether you can participate now is increasingly about fitting into the right category – by age, by location, by documentation – not just about how you behave online.
In this climate, it does not take much for caution to harden into restriction. And when that happens, platforms stop being open spaces and start becoming exclusionary systems that pre-emptively screen users out before anything even happens.
WHAT HAPPENS WHEN WE OVERPROTECT
Blanket restrictions look decisive, but they often miss the mark. Blocking the young, the anonymous or the vulnerable does not always lead to safety. It often results in exclusion, silence or migration to platforms with weaker rules and safeguards.
Australia's under-16 ban has drawn global attention. But it's still too early to know whether it's working as intended. Will it protect children, or merely push them towards less regulated corners of the internet? These are questions we need to ask before more countries follow suit.
Sonia Livingstone, a UK scholar of digital literacy and youth technology use, has long warned against protection that turns into exclusion. Young people have a right to be in digital spaces – safely, yes, but meaningfully too. And that principle applies more broadly: Exclusion by legislation doesn't just affect teens. It cuts off anyone who won't – or can't – verify their identity on demand.
The truth is, anonymity poses challenges but it's not the only issue. Accountability is another. At SG Her Empowerment, we've supported victims whose intimate images were shared by both anonymous users and known individuals. In both cases, the ecosystem struggled to prevent or mitigate the harm.
When perpetrators can slip between accounts or disappear altogether, it becomes harder to trace, report and stop the abuse. Anonymity can compound that difficulty. But the deeper issue is whether our systems are built to hold anyone – visible or not – to account.
VISIBILITY IS NOT VIRTUE
We are drifting towards a global system that increasingly treats visibility as virtue and invisibility as risk. That is a misconception. Whistleblowers, survivors and marginalised communities often need anonymity to speak freely and safely. And it's not just about safety.
Anonymity also nurtures creativity, experimentation and candid self-expression – ways of thinking, expressing or deliberating that are not always possible when every action is tied back to a name, job or family. Not everyone posting is trying to hide nefarious deeds. Some are just trying to grow – without the cost of getting it wrong in public. A healthy digital space must ensure room for that too.
Countries in the region seem to be edging towards digital systems that assume users should be screened before they can participate. In these jurisdictions, ID, location and traceability are fast becoming the price of entry to online social spaces. That might make enforcement easier, but it narrows the space for meaningful interaction.
PRECISION SAFETY
When safety is enforced at the door, the burden shifts away from where it matters most: how systems respond when things go wrong.
To be clear, a safer internet is not just one with fewer bad actors. It's one where harm is taken seriously, where victims are supported and where platforms are held accountable. That requires more than just gatekeeping – it requires a redesign of social media systems to ensure they can respond to failures and hold up under pressure.
Singapore's model has been lauded as a frontrunner but is nonetheless still evolving. While early legislation like the Protection from Online Falsehoods and Manipulation Act (better known as POFMA) raised concerns about its scope and ministerial discretion, it was designed to issue correction directions for falsehoods post-publication, not to impose blanket restrictions on social media platforms or services.
The Online Safety (Miscellaneous Amendments) Act 2022 expanded regulatory powers further to direct platforms to remove or block access to egregious content such as child sexual exploitation, suicide promotion and incitement to violence. Still, it left room for ambiguity – especially around harms that fall outside these categories, including non-consensual distribution of sexual content, targeted harassment, or content promoting dangerous behaviours.
The next step is the Online Safety (Relief and Accountability) Bill. Once passed, it will establish a dedicated Online Safety Commission in 2026. It will also give regulators the authority to request user identity information – but only when serious harm has occurred and legally defined thresholds are met.
In this case, identity disclosure is not the starting point. Instead, the focus is on harm-based disclosure: targeted, post-incident and justified.
REGULATING USERS IS ONLY PART OF THE PICTURE
Governments are leaning on what they know: identity checks, age gates, device and user verification. These are easy to understand and relatively easy to enact. They show immediate action and are often framed as part of a broader effort to protect minors and rein in perpetrators who exploit anonymity to evade detection or accountability. But they don't get to the root of the problem.
Why do social media algorithms keep pushing content that distresses, provokes or triggers users? Why are reporting avenues and mechanisms buried under three layers of menus? Why are some platforms better at responding to harm than others, even with the same risks?
Real trust does not come from mere gatekeeping, but from ensuring platforms behave predictably when things go wrong. That means robust reporting tools, responsive moderation and interface designs that prioritise user safety and well-being.
Right now, most of that isn't happening. On many social media platforms, algorithms still reward emotional extremes. Autoplay and endless scroll are still the default. Reporting tools are scattered, inconsistent and underpowered.
GETTING IT RIGHT BEFORE IT CLOSES IN
Regulation is necessary. But it has to be targeted, not sweeping; responsive to real harm, not preoccupied with suspicion. Safeguards must aim to protect users of social media from potential dangers, not to protect social media platforms from users.
Will the current surge of regulation eventually make social media unusable? Perhaps not – but it will certainly make it far more conditional. The question we must ask ourselves is: Conditional on what? Identity? Risk profiles?
If we get this wrong, we won't just be regulating platforms – we'll be deciding who gets to belong in the digital world we're building next.
That's a decision worth getting right.