What's Coming Up - Insight On Why Some Young Indonesians Want Out of Their Country

CNA · 10 hours ago

This year, the hashtag #KaburAjaDulu trended on Indonesian social media. It means "Just Run Away First", a call to young Indonesians to leave the country. Insight finds out what is behind their discontent.


Related Articles

Indonesia signs wiretapping pacts with telco operators; analysts flag privacy concerns

Business Times

2 hours ago


[JAKARTA] Indonesia's Attorney General's Office has signed an agreement with four telecommunication operators to install wiretapping devices, an official from the office said, raising questions among analysts about the potential impact on privacy and surveillance.

The agreement, signed on Jun 24, would allow prosecutors to access telecommunication recordings and enable data exchange for law enforcement purposes, Attorney General's Office spokesperson Harli Siregar told Reuters on Thursday (Jun 26). 'We have many fugitives and need technology to detect them,' Siregar said, referring to the agreement signed with the country's largest telco, Telekomunikasi Indonesia, and its unit Telekomunikasi Selular, as well as two other companies, Indosat and XLSMART Telecom Sejahtera. The pacts, which would cover mobile phones, are in accordance with a law passed in 2021 giving wiretapping authority to the Attorney General's Office, Siregar added.

Indonesia's police and anti-graft agency are already able to use wiretapping, Wahyudi Djafar, an analyst focused on digital governance and public policy, told Reuters. But he said the new arrangement with the Attorney General's Office could allow prosecutors to conduct surveillance merely on grounds of suspicion, without formal charges or legally named suspects in an investigation.

Djafar, who is the Public Policy Director at Rakhsa Initiatives, an Indonesia-based think tank focused on digital governance and strategic security issues, said he feared the agreement could widen the scope of wiretapping and lead to mass surveillance. 'There is no clear limitation on how the wiretap will be conducted and for how long and who can use the data,' he said, adding that 'the (AGO) office's wiretapping power will be stronger than the police and anti-graft agency.'
Responding to the privacy concerns, Siregar said the office would only wiretap fugitives. Asked about the extent of the wiretapping powers, he said the act would 'not be done arbitrarily.'

Damar Juniarto, a board member at global rights group Amnesty International in Indonesia, said the wiretapping agreements would mean more state agencies conducting surveillance, potentially further threatening civil liberties. Indonesia's Presidential Communication Office did not immediately respond to a request for comment on the concerns about the impact of wiretapping laws on civil liberties.

Merza Fachys, a director at XLSMART, one of the telco companies, told Reuters that the Attorney General's Office is one of the state agencies allowed to wiretap, and said customer data would remain safe. A data protection law passed in 2022 imposes corporate fines for mishandling customers' data: the maximum fine is 2 per cent of a corporation's annual revenue, and violators could also see their assets confiscated or auctioned off. REUTERS

Commentary: Social media regulation should protect users, not push them out

CNA

16 hours ago


SINGAPORE: Across the Asia Pacific, governments are tightening the rules around who gets to use social media. In Vietnam, users must now verify their accounts with a national ID or local phone number under Decree 147. Malaysia in January began requiring social media platforms to obtain operating licences. Indonesia is considering a minimum age of 18, while Australia has already banned children under 16.

These aren't just rules about what you can post. They are rules about who gets to participate. The shift is subtle but significant, going from regulating content to regulating access. Whether you can participate now is increasingly about fitting into the right category – by age, by location, by documentation – not just about how you behave online.

In this climate, it does not take much for caution to harden into restriction. And when that happens, platforms might stop being open spaces. They start becoming exclusionary systems that pre-emptively screen users out before anything even happens.

WHAT HAPPENS WHEN WE OVERPROTECT

Blanket restrictions look decisive, but they often miss the mark. Blocking the young, the anonymous or the vulnerable does not always lead to safety. It often results in exclusion, silence or migration to platforms with weaker rules and safeguards.

Australia's under-16 ban has drawn global attention. But it's still too early to know whether it's working as intended. Will it protect children, or merely push them towards less regulated corners of the internet? These are questions we need to ask before more countries follow suit.

Sonia Livingstone, a UK scholar of digital literacy and youth technology use, has long warned against protection that turns into exclusion. Young people have a right to be in digital spaces – safely, yes, but meaningfully too. And that principle applies more broadly: Exclusion by legislation doesn't just affect teens. It cuts off anyone who won't – or can't – verify their identity on demand.
The truth is, anonymity poses challenges, but it's not the only issue. Accountability is another. At SG Her Empowerment, we've supported victims whose intimate images were shared by both anonymous users and known individuals. In both cases, the ecosystem struggled to respond to prevent or mitigate harm. When perpetrators can slip between accounts or disappear altogether, it becomes harder to trace, report and stop the abuse. Anonymity can make that harder. But the deeper issue is whether our systems are built to hold anyone – visible or not – to account.

VISIBILITY IS NOT VIRTUE

We are drifting towards a global system that increasingly treats visibility as virtue and invisibility as risk. That is a misconception. Whistleblowers, survivors and marginalised communities often need anonymity to speak freely and safely.

And it's not just about safety. Anonymity also nurtures creativity, experimentation and candid self-expression – ways of thinking, expressing or deliberating that are not always possible when every action is tied back to a name, job or family. Not everyone posting is trying to hide nefarious deeds. Some are just trying to grow – without the cost of getting it wrong in public. A healthy digital space must ensure room for that too.

The countries in the region seem to be edging towards digital systems that assume users should be screened before they can participate. In these jurisdictions, ID, location and traceability are fast becoming the price of entry to online social spaces. That might make enforcement easier, but it narrows the space for meaningful interaction.

PRECISION SAFETY

When safety is enforced at the door, the burden shifts away from where it matters most: how systems respond when things go wrong. To be clear, a safer internet is not just one with fewer bad actors. It's one where harm is taken seriously, where victims are supported and where platforms are held accountable.
That requires more than just gatekeeping – it requires a redesign of social media systems to ensure they can respond to failures and hold up under pressure.

Singapore's model has been lauded as a frontrunner but is nonetheless still evolving. While early legislation like the Protection from Online Falsehoods and Manipulation Act (better known as POFMA) raised concerns about its scope and ministerial discretion, it was designed to issue correction directions for falsehoods post-publication, not to impose blanket restrictions on social media platforms or services.

The Online Safety (Miscellaneous Amendments) Act 2022 expanded regulatory powers further to direct platforms to remove or block access to egregious content such as child sexual exploitation, suicide promotion and incitement to violence. Still, it left room for ambiguity – especially around harms that fall outside these categories, including non-consensual distribution of sexual content, targeted harassment, or content promoting dangerous behaviours.

The next step is the Online Safety (Relief and Accountability) Bill. Once passed, it will establish a dedicated Online Safety Commission in 2026. It will also give regulators the authority to request user identity information – but only when serious harm has occurred and legally defined thresholds are met. In this case, identity disclosure is not the starting point. Instead, the focus is on harm-based disclosure: targeted, post-incident and justified.

REGULATING USERS IS ONLY PART OF THE PICTURE

Governments are leaning on what they know: identity checks, age gates, device and user verification. These are easy to understand and relatively easy to enact. They show immediate action and are often framed as part of a broader effort to protect minors and rein in perpetrators who exploit anonymity to evade detection or accountability. But they don't get to the root of the problem.
Why do social media algorithms keep pushing content that distresses, provokes or triggers users? Why are reporting avenues and mechanisms buried under three layers of menus? Why are some platforms better at responding to harm than others, even with the same risks?

Real trust does not come from mere gatekeeping, but from ensuring platforms behave predictably when things go wrong. That means robust reporting tools, responsive moderation and interface designs that prioritise user safety and well-being. Right now, most of that isn't happening. On many social media platforms, algorithms still reward emotional extremes. Autoplay and endless scroll are still the default. Reporting tools are scattered, inconsistent and underpowered.

GETTING IT RIGHT BEFORE IT CLOSES IN

Regulation is necessary. But it has to be targeted, not sweeping; responsive to real harm, not preoccupied with suspicion. Safeguards must aim to protect users of social media from potential dangers, not to protect social media platforms from users.

Will the current surge of regulation eventually make social media unusable? Perhaps not – but it will certainly make it far more conditional. The question we must ask ourselves is: Conditional on what? Identity? Risk profiles? If we get this wrong, we won't just be regulating platforms – we'll be deciding who gets to belong in the digital world we're building next. That's a decision worth getting right.
