
Latest news with #SocialMediaMinimumAge

The Paranoia Of Officialdom: Age Verification And Using The Internet In Australia

Scoop

31-07-2025

  • Politics
  • Scoop

The Paranoia Of Officialdom: Age Verification And Using The Internet In Australia

Australia, in keeping with its penal history, has a long record of paranoid officialdom and paternalistic wowsers. Be it perceived threats to morality, the tendency of the populace to be corrupted, or a general, gnawing fear about what knowledge might do, Australia's governing authorities have prized censorship. This tendency is most conspicuous in an ongoing regulatory war being waged against the Internet and the corporate citizens that inhabit it. Terrified that Australia's tender children will suffer ruination at the hands of online platforms, the entire population of the country will be subjected to age verification checks.

Preparations are already underway to impose a social media ban for users under the age of 16, ostensibly to protect the mental health and wellbeing of children. The Online Safety Amendment (Social Media Minimum Age) Bill 2024 was passed in November last year to amend the Online Safety Act 2021, requiring 'age-restricted social media platforms' to observe a 'minimum age obligation' preventing Australians under the age of 16 from having accounts. It also vests that ghastly office of the eSafety Commissioner, along with the Information Commissioner, with powers to seek information on the platforms' compliance, together with the power to issue and publish notices of non-compliance.

While the press were falling over themselves to note the significance of such changes, little debate accompanied last month's registration of a new industry code by the eSafety Commissioner, Julie Inman Grant. In fact, Inman Grant is proving most busy, having already registered three such codes, with a further six to be registered by the end of this year. All serve to target the behaviour of internet service companies in Australia. None has been subject to parliamentary debate, let alone broader public consultation.
Inman Grant has been less than forthcoming about the implications of these codes, most notably on the issue of mandatory age-assurance limits. That said, some crumbs have been left for those paying attention to her innate obsession with hiving off the Internet from Australian users. In her address to the National Press Club in Canberra on June 24, she gave some clue about where the country is heading: 'Today, I am […] announcing that through the Online Safety Act's codes and standards framework, we will be moving to register three industry-prepared codes designed to limit children's access to high impact, harmful material like pornography, violent content, themes of suicide, self-harm and disordered eating.' (Is there no limit to this commissar's fears?)

Under such codes, companies would 'agree to apply safety measures up and down the technology stack – including age assurance protections.' With messianic fervour, Inman Grant explained that the codes would 'serve as a bulwark and operate in concert with the new social media age limits, distributing more responsibility and accountability across eight sectors of the tech industry.' These would also not be limited in scope, being applicable to enterprise hosting services, internet carriage services, and various 'access providers and search engines. I have concluded that each of these codes provide appropriate community safeguards.'

From December 27, technology giants such as Google and Microsoft will have to use age-assurance technology for account holders when they sign in and 'apply tools and/or settings, like "safe search" functionality, at the highest safety setting by default for an account holder its age verification systems indicate is likely to be an Australian child, designed to protect and prevent Australian children from accessing or being exposed to online pornography and high impact violence material in search results.'
This is pursuant to Schedule 3 – Internet Search Engine Services Online Safety Code (Class 1C and Class 2 Material). How this will be undertaken has not, as yet, been clarified by Google or Microsoft. The companies have, however, been trialling a number of technologies. These include Zero-Knowledge Proof (ZKP) cryptography, which permits people to prove that an aspect of themselves is true without surrendering any other data; using large language models (LLMs) to infer an account holder's age from browsing history; and selfie verification and government ID tools.

Specialists in the field of information technology have been left baffled and worried. 'I have not seen anything like this anywhere else in the world,' remarks IT researcher Lisa Given. The measure had 'kind of popped out, seemingly out of the blue.' Digital Rights Watch chair Lizzie O'Shea is of the view that 'the public deserves more of a say in how to balance these important human rights issues', while Justin Warren, founder of the tech analysis company PivotNine, sees it as 'a massive overreaction after years of police inaction to curtail the power of a handful of large foreign technology companies.'

Then comes the issue of efficacy. Invoking the safety of children when censoring content and restricting technology is a government favourite. Whether the regulations actually protect children is quite another matter. John Pane, chair of Electronic Frontiers Australia (EFA), was less than impressed by the results of a recent age-assurance technology trial conducted to examine the effect of the teen social media ban. And none of this can ignore the innovative guile of young users, ever ready to circumvent any imposed restrictions.
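The privacy-preserving idea behind the ZKP approach can be illustrated at the interface level. The sketch below is a toy, hypothetical selective-disclosure scheme, not a real zero-knowledge proof (which requires far heavier cryptography): a trusted issuer checks a birthdate privately and attests only to the single predicate "over 16", so the platform never sees the date itself. Every name, key and message format here is invented for illustration and does not reflect how Google or Microsoft will actually implement the code.

```python
import hashlib
import hmac
import json
from datetime import date

# Toy selective-disclosure sketch (NOT a real ZKP). A trusted issuer
# attests only to the predicate "over 16"; the platform never learns
# the birthdate. All names and formats here are hypothetical.

ISSUER_KEY = b"demo-issuer-secret"  # held by the attesting authority


def issue_age_token(birthdate: date, today: date) -> dict:
    """Issuer privately computes age and signs only the boolean predicate."""
    age = today.year - birthdate.year - (
        (today.month, today.day) < (birthdate.month, birthdate.day)
    )
    claim = {"over_16": age >= 16}  # the ONLY fact disclosed downstream
    payload = json.dumps(claim, sort_keys=True).encode()
    tag = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "tag": tag}


def platform_verifies(token: dict) -> bool:
    """Platform checks the issuer's tag; it learns only a yes/no answer."""
    payload = json.dumps(token["claim"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, token["tag"]) and token["claim"]["over_16"]


token = issue_age_token(date(2005, 3, 1), date(2025, 7, 31))
print(platform_verifies(token))  # platform sees only the over-16 verdict
```

In a real deployment the issuer and platform would not share a secret key as they do here; a public-key signature, anonymous credential, or an actual ZKP scheme would let the platform verify the attestation without being able to forge it, and without linking the check back to the user's identity.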
Inman Grant, in her attempts to limit the use of the Internet and infantilise the population, sees these age-restricting measures as 'building a culture of online safety, using multiple interventions – just as we have done so successfully on our beaches.' This nonsensical analogy obscures the central theme of her policies, common to all censors in history: the people are not to be trusted, and paternalistic governors and regulators know better.

YouTube joins list of platforms banned for children under 16 in Australia

Indian Express

30-07-2025

  • Business
  • Indian Express

YouTube joins list of platforms banned for children under 16 in Australia

After TikTok, Instagram, Facebook, X and Snapchat, YouTube has now joined the list of online platforms included in Australia's social media ban for children under the age of 16, BBC reported. YouTube was earlier excluded from the ban, citing the benefits and value it offers to younger Australians. The move comes a month after Australia's internet regulator urged the government to reverse a planned exemption for the video-sharing platform from the world's first national teen social media ban, Reuters noted. Australia is set to put curbs on the social media usage of a million teens, beginning this December. It announced the ban initially in November last year by introducing the new Online Safety Amendment (Social Media Minimum Age) Bill 2024, which puts the onus on social media companies to prevent children from accessing their platforms.

What does the ban on YouTube entail?

Teenagers will still be able to view YouTube videos but will not be permitted to have an account, which is required for uploading content or interacting on the platform, according to BBC. Under the ban, the social media platforms, now including YouTube, will need to deactivate existing accounts and prohibit any new accounts, as well as stopping any workarounds and correcting errors, the report underlined. The government, currently awaiting a report on tests of age-checking products, said those results will influence enforcement of the ban, Reuters mentioned. Adam Marre, chief information security officer at cyber security firm Arctic Wolf, welcomed the Australian government's move, saying that artificial intelligence has supercharged the spread of misinformation on social media platforms such as YouTube. 'The Australian government's move to regulate YouTube is an important step in pushing back against the unchecked power of big tech and protecting kids,' he wrote in an email.

Feud between YouTube and the Australian government?
The government last year, at the time of introducing the Online Safety Amendment Bill, said that it would exempt YouTube due to its popularity with teachers, as per the report. However, Meta (owner of Facebook and Instagram), Snapchat and TikTok complained. Australia's internet regulator last month urged the government to overturn the exemption for YouTube, citing a survey that found 37 per cent of minors had consumed harmful content on the site, the worst result for any social media platform, Reuters reported. The country's eSafety Commissioner, Julie Inman Grant, then recommended YouTube be added to the ban as it was 'the most frequently cited platform' where children aged 10 to 15 saw 'harmful content'. She said social media companies deployed 'persuasive design features' such as recommendation-based algorithms and notifications to keep users online, and 'YouTube has mastered those, opaque algorithms driving users down rabbit holes they're powerless to fight against', Reuters quoted.

In a blog post, YouTube accused Grant of giving inconsistent and contradictory advice and of discounting the government's own research, which found that 69 per cent of parents consider the video platform suitable for those under 15, according to the report. 'The eSafety commissioner chose to ignore this data, the decision of the Australian Government and other clear evidence from teachers and parents that YouTube is suitable for younger users,' the report quoted Rachel Lord, YouTube's public policy manager for Australia and New Zealand, as saying. Last week, YouTube told Reuters it had written to the government urging it 'to uphold the integrity of the legislative process'. Local media also reported that YouTube had threatened a court challenge, though the company has not confirmed this.

What has YouTube stated?

In a statement on Wednesday, YouTube, which is owned by Google, argued against the ban, saying that the platform 'offers benefit and value to younger Australians.'
YouTube also highlighted that its platform is used by nearly three-quarters of Australians aged 13 to 15, and argued it should not be classified as social media, considering its main activity is hosting videos. 'Our position remains clear: YouTube is a video sharing platform with a library of free, high-quality content, increasingly viewed on TV screens. It's not social media,' a YouTube spokesperson said in an email. The spokesperson also said YouTube will 'consider next steps' and 'continue to engage' with the government.

Why has the ban been introduced?

Speaking to the media today, Prime Minister Anthony Albanese said, 'Social media is doing social harm to our children, and I want Australian parents to know that we have their backs… We know that this is not the only solution,' he said of the ban, 'but it will make a difference.' Australia's federal communications minister, Anika Wells, as quoted by the BBC, said that while there is a place for social media, 'there's not a place for predatory algorithms targeting children'. On YouTube's threatened court challenge, Wells said, 'I will not be intimidated by legal threats when this is a genuine fight for the well-being of Australian kids.'

What if YouTube and other tech companies refuse to comply?

Under the ban, tech companies can be fined up to A$50m ($32.5m; £25.7m) if they do not comply with the age restrictions, as per the BBC report.

What is Australia's social media ban all about?

The Australian law describes the 'reasonable steps' platforms must take to block users below the age of 16 from accessing social media. The Online Safety Amendment Bill 2024 states: 'There are age restrictions for certain social media platforms. A provider of such a platform must take reasonable steps to prevent children who have not reached a minimum age from having accounts.'
PM Albanese had announced via a statement, 'The bill also makes clear that no Australian will be compelled to use government identification (including Digital ID) for age assurance on social media. Platforms must offer reasonable alternatives to users.' Notably, access to online gaming and to apps associated with education and health support (like Google Classroom) will still be allowed, though YouTube is now barred from this exemption. Other parts of the world, including Britain, Norway and European Union countries such as France, Germany, Italy and the Netherlands, have introduced similar curbs on social media usage among teens and children.

Proposed social media ban for under-16s gains support in Northland

NZ Herald

08-05-2025

  • Politics
  • NZ Herald

Proposed social media ban for under-16s gains support in Northland

Netsafe has expressed concern about how the ban will work and what the ramifications could be for youth. In Northland, however, Tai Tokerau Principals' Association spokesman and Whangārei principal Pat Newman was fully supportive. 'We know that in Whangārei we've had teenage suicides as a result of bullying on the internet.' He said some children had been 'scared stiff' to attend school because of cyber-bullying. Newman believed social media allowed for a disconnect that made it easy for young people to write 'nasty, vindictive things'. Children as young as 11 were sending explicit images through social media platforms, too. 'It's easy to send photos of yourself that in 10 years you may not want people to have seen.' Newman said children as young as 9 were organising fights online. The issue came to light in the media last year when a 14-year-old was left with a concussion and other injuries after a violent assault at the Fireworks Spectacular event. The video, circulated widely on social media, showed the boy being kicked in the head. Two students were also assaulted at Kerikeri High School last month, with principal Mike Clent concerned a video of the fight may have been circulating online.

Newman believed social media encouraged 'inappropriate adult behaviour' among youngsters. 'We would not let a 10-year-old hop behind the wheel of a fast car and drive off without anybody supervising them,' he said. 'Yet we let them play with and use something just as lethal.' Newman acknowledged social media was a valuable tool in the right hands but people under 16 were still developing. Principals were doing all they could to educate and prevent harm, but Newman said a level of responsibility needed to come from parents as well.

Netsafe chief executive Brent Carey said Australia's Online Safety Amendment (Social Media Minimum Age) Bill was an example of legislative gaps.
'Our decades of work in this space have shown us the multifaceted nature of these challenges, and effective solutions typically require a more nuanced and long-term approach.' Carey said implementation of the bill and subsequent challenges were of significant concern. Some challenges with Australia's ban included exemptions for platforms like messaging apps, online gaming platforms and services for health and education. 'Such exemptions could lead to inconsistencies in online safety measures and potentially shift risks to less moderated environments.' He said the Australian Human Rights Commission had concerns the ban was a 'blunt instrument' that could inadvertently harm young people by cutting access to support networks.

Whangārei Intermediate School learning support co-ordinator Christine Thomson supported the ban. She had observed that students between 10 and 13 years old frequently used social media without supervision. Thomson had seen situations where students had spoken to people posing as teens. Fights were also organised, filmed and posted 'immediately' online, she said. Cyber-bullying had also driven some students to become so anxious they avoided school altogether. Thomson said the problem was difficult to fully police, as pages or groups that were shut down often resurfaced under new profiles. Serious incidents were reported to Netsafe or police, where required. She felt students were too young to fully understand the responsibility social media use required.

Brodie Stone covers crime and emergency for the Northern Advocate. She has spent most of her life in Whangārei and is passionate about delving into issues that matter to Northlanders and beyond.

Banning teens from social media won't keep them safe. Regulating platforms might

The Spinoff

06-05-2025

  • Politics
  • The Spinoff

Banning teens from social media won't keep them safe. Regulating platforms might

The new member's bill misdirects attention from the systemic drivers of online harm and places the burden of online safety on young people themselves, while the systems that foster harm continue unchecked. A National MP's proposal to ban under-16s from social media is being pitched as a bold move to protect young people. But the reality is more complicated and far more concerning. If the National Party is serious about addressing the real harms young people face online, banning users is not the solution. Regulating platforms is.

The Social Media Age-Appropriate Users Bill, a proposed member's bill led by backbencher Catherine Wedd, would require social media platforms to take 'all reasonable steps' to prevent under-16s from creating accounts. Although only a member's bill yet to be drawn from the biscuit tin, it was announced by prime minister Christopher Luxon via X and thereby has the PM's obvious stamp of approval. The bill echoes Australia's recently passed Online Safety Amendment (Social Media Minimum Age) Act 2024, which imposes significant penalties on platforms that fail to keep children under 16 off their services. Wedd, like many who support these measures, points to concerns about online bullying, addiction and other inappropriate content. These are real issues. But the bill misdirects attention from the systemic drivers of online harm and places the burden of online safety on young people themselves.

A popular move, but a flawed premise

This policy will likely have the support of parents, similar to the school phone ban: it is a visible, straightforward response to something that feels out of control. And it offers the comfort of doing something in the face of real concern. However, this kind of ban performs accountability but does not address where the real power lies.
Instead, if the aim of the policy is to reduce online harm and increase online safety, lawmakers should consider holding social media companies responsible for the design choices that expose young people to harm. For instance, according to Netsafe, the phone ban has not eliminated cyberbullying, harassment or image-based sexual abuse for our young people. At the heart of the proposal is the assumption that banning teens from social media will protect them. But age-based restrictions are easily circumvented. Young people already know how to enter fake birthdates, create secondary accounts, or use a VPN to bypass restrictions. And even if the verification process becomes more robust through facial recognition, ID uploads or other forms of intrusive surveillance, it raises significant privacy concerns, especially for minors. Without additional regulatory safeguards, such measures may introduce further ways to harm users' rights by, for example, normalising digital surveillance. In practice, this kind of policy will not keep young people off social media. It will just push them into less visible, less regulated corners of the internet, the very spaces where the risk of harm is often higher. Furthermore, there is a growing body of research – including my own – showing that online harm is not simply a function of age or access. It is shaped by the design of platforms, the content that is amplified, and the failures of tech companies to moderate harmful material effectively.

Misdiagnosing the problem

Online harm is real. But banning access is a blunt instrument. It does not address the algorithms that push disinformation, misogyny and extremism into users' feeds. And it does not fix the fact that social media companies are not accountable to New Zealand law or to the communities they serve. In contrast, the UK's Online Safety Act 2023 holds platforms legally responsible for systemic harm.
It shifts the burden of online safety away from individual users and onto the tech companies who design and profit from these systems. New Zealand once had the opportunity to move in that direction. Under the previous government, the Department of Internal Affairs proposed an independent regulator and a new code-based system to oversee digital platforms. That work was shelved by the coalition government. Now, we're offered a ban instead. Some may argue that regulating big tech companies is too complex and difficult, and that it is easier to restrict access. But that narrative lets platforms off the hook. Countries like the UK and those in the European Union have already taken meaningful steps to regulate social media, requiring companies to assess and reduce risks, improve transparency, and prioritise user safety. While these laws are imperfect, they prove regulation is possible when there is political will. Pretending otherwise leaves the burden on parents and young people, while the systems that foster harm continue unchecked.

What real online safety could look like

If the National Party, or the government, truly wants to protect young people online, it should start with the platforms, not the users. That means requiring social media companies to ensure user safety, from design to implementation and use. It may also require ensuring digital literacy is a core part of our education system, equipping rangatahi with the tools to critically navigate online spaces. We also need to address the systemic nature of online harm, including the rising tide of online misogyny, racism and extremism. Abuse does not just happen; it is intensified by platforms designed to maximise engagement, often at the expense of safety. Any serious policy must regulate these systems and not just user behaviour. That means independent audits, transparency about how content is promoted, and real consequences for platforms that fail to act. Harms are also unevenly distributed.
Māori, Pasifika, disabled and gender-diverse young people are disproportionately targeted. A meaningful response must be grounded in te Tiriti and human rights and not just age limits. There's a certain political appeal to a policy that promises to 'protect kids', especially one that appears to follow global trends. But that does not mean it is the right approach. Young people deserve better. They deserve a digital environment that is safe, inclusive, and empowering.

Members' Bill To Protect Under 16s From Social Media Harm

Scoop

05-05-2025

  • Politics
  • Scoop

Members' Bill To Protect Under 16s From Social Media Harm

National Tukituki MP Catherine Wedd has put forward a new members' bill to protect young people from social media harm by restricting access for under-16s. 'Social media is an extraordinary resource, but it comes with risks, and right now we aren't managing the risks for our young people well,' Catherine Wedd says. 'My Social Media Age-Appropriate Users Bill is about protecting young people from bullying, inappropriate content and social media addiction by restricting access for under 16-year-olds. 'The bill puts the onus on social media companies to verify that someone is over the age of 16 before they access social media platforms. Currently, there are no legally enforceable age verification measures for social media platforms in New Zealand. 'As a mother of four children I feel very strongly that families and parents should be better supported when it comes to overseeing their children's online exposure. 'Parents and principals are constantly telling me they struggle to manage access to social media and are worried about the effect it's having on their children. 'The bill closely mirrors the approach taken in Australia, which passed the Online Safety Amendment (Social Media Minimum Age) Bill in December 2024. 'Other jurisdictions are also taking action. Texas recently passed legislation which bans under-18s from social media use, and the UK, the EU and Canada all have similar work in train. 'This bill builds on National's successful cellphone ban in schools and reinforces the Government's commitment to setting our children up for success.'
