
Latest news with #eSafetyCommission

Police searched the man's laptop for malware. What they found is becoming all too common

The Age · 2 days ago

'We have commercial image generation sites that you can go to right now, and you don't even have to look for child sexual abuse material because the generation of [it] is so popular that these sites often have trending pages, and I've seen sections where the keyword is 'pre-teen', or 'tween', or 'very young'.'

In a 2024 report, the Internet Watch Foundation (IWF) found a 380 per cent increase in reported cases of AI-generated child sexual abuse content online, noting that the material was becoming 'significantly more realistic' and that perpetrators were finding 'more success generating complex 'hardcore' scenarios' involving penetrative sexual activity, bestiality or sadism.

'One user shared an anonymous webpage containing links to fine-tuned models for 128 different named victims of child sexual abuse.'
Internet Watch Foundation's July 2024 AI child sexual abuse material report

The IWF found evidence that AI models depicting known child abuse victims and famous children were being created and shared online. In some of the most perverse cases, this could include the re-victimisation of 'popular' real-life child abuse victims, with AI models allowing perpetrators to generate new images of an abused minor. The report acknowledged that the usage of these fine-tuned models, known as LoRAs, likely went much deeper than the IWF could assess, thanks to end-to-end encrypted, peer-to-peer networks that were essentially inaccessible.

Moreover, Australia's eSafety Commission warns that child sexual abuse material produced by AI is 'highly scalable'. '[It requires] little effort to reproduce en masse once a model is capable of generating illegal imagery,' a spokesperson said.

Commercial interests

The rapid escalation in the amount of content available online is partially attributed to the way AI has enabled the commercialisation of child sexual abuse material. 'Offenders who are quite adept at creating material are essentially taking orders to produce content, and this material is increasingly realistic,' Salter says.

Jones says that over the span of his career, he has seen the provision of child sexual abuse content go from physical photocopies shared in small groups to material available online in a couple of clicks. 'Unfortunately, there is a particular audience for child sexual abuse material, and what AI can do is generate that content, so [commercialisation] is serving a demand that is out there.'

In one of the biggest stings involving an AI child abuse enterprise, Danish police, in conjunction with Europol, uncovered a subscription service that commercialised access to the content. The global operation saw two Australian men charged and 23 others apprehended around the world. 'There were over 237 subscribers to that one matter,' Moses says of Operation Cumberland. 'When we talk about proliferation and people profiting from this type of activity, this is of great concern to us.'

Swamped by the growing sea of content, officers now face the difficulty of identifying which images depict real children being abused, as opposed to an AI-generated child who doesn't exist. 'It also means that police have to spend quite a lot of time looking at material to determine whether it's real or not, which is quite a serious trauma risk for police as well,' Salter says.

Moses from the ACCCE agrees that it's 'very difficult work' for officers. 'Whilst it is very confronting material, it doesn't compare to the trauma that child victims endure, and there's very much a focus on identifying victims.'

The influx of AI-generated content has complicated the ACCCE's mission in many ways, Moses says, including by robbing crucial resources from its primary goal of rescuing children who are being abused. 'It takes a lot of time to identify real victims, and the concern for us ... is that the [AI-generated content] is becoming increasingly harder [to detect], and it takes time away from our people who are trying to identify real victims.'

Law enforcement 'overwhelmed'

While prosecutions for offences involving fake abuse material have increased, they haven't kept pace with the growth in the amount of content found online. Salter says resourcing is one of the biggest challenges facing law enforcement. 'Law enforcement is so overwhelmed with really egregious online sexual exploitation cases … their primary priority is to prevent and rescue the abuse of actual kids.'

He says it's a struggle he's heard across all jurisdictions. 'They're really struggling in terms of people power, in terms of access to the technology that they need to conduct these investigations and to store that amount of material,' Salter says. 'There needs to be a huge uplift right across the law enforcement space.'

Additionally, AI-generated child sexual abuse content requires a complete reset of the way such material is detected. Existing automated detection methods scan for verified abuse content, meaning material must already have been assessed by a human and confirmed as illegal before it can be flagged. 'The obvious challenge we see with AI-generated material is that it's all new, and so it's very unlikely, through current detection technologies, that we can proactively screen it,' Salter says.

Unregulated threat let loose

It's a global issue that crosses jurisdictions and exists on the internet's severely under-regulated new frontier. But that hasn't deterred Australia's eSafety commissioner, Julie Inman Grant, from introducing world-first industry standards to hold tech companies to account for the content they platform.

The standards came into force in December 2024 and require storage services like Apple's iCloud and Google Drive, messaging services, and online marketplaces that offer generative AI models to prevent their products from being misused to store or distribute child sexual abuse material and pro-terror content.

'We have engaged with both AI purveyors and the platforms and libraries that host them to ensure they are aware of their obligations under the standards,' an eSafety Commission spokesperson said. 'We believe the standards are a significant step in regulating unlawful and seriously harmful content and align with our broader efforts to ensure that AI tools, such as those used to create deepfakes, are held to the highest safety standards.'

The recent passage of the Criminal Code Amendment (Deepfake Sexual Material) Bill 2024 also expanded the available criminal offences relating to non-consensual, sexually explicit AI-generated material.

While international companies can face multimillion-dollar penalties for breaches of the eSafety Commission's standards in Australia, major tech players like Meta are increasingly adopting end-to-end encryption, which means even the companies themselves can't see what content they're hosting, let alone law enforcement. Interpol works at the forefront of these issues, often acting as a bridge between authorities and the private sector.

Jones observes that while interventions like Australia's new standards play an important role in holding tech companies to a high bar, encryption and other privacy policies make it 'very hard for law enforcement to get those data sets'.

International cooperation is crucial for successfully prosecuting commercial child sexual abuse content cases, and Jones says that in best-practice examples, when a global chain is identified, the tech industry is brought in as part of the investigation. 'I'm seeing more of an involvement in the tech sector around supporting law enforcement. But that's sometimes at odds with encryption and things like that,' Jones says.

'I think the tech industry has a duty of care to the communities that they serve. So I don't think it's good enough to say, 'Oh, well, it's encrypted. We don't know what's there'.'

Salter takes a more pessimistic view of the tech industry's actions, arguing that most companies are moving away from, not towards, proactively monitoring for child sexual abuse content. 'The emergence of AI has been something of a vortex of doom in the online child protection space,' Salter says.

Online child protection efforts were already overwhelmed, he says, before the tech sector 'created a new threat to children' and 'released [it] into the wild with no child protection safeguards'. 'And that's very typical behaviour.'

Location-sharing apps linked to increased risk of digital coercive control, eSafety Commission research finds

ABC News · 14-05-2025

Nearly one in five young adults believe tracking a partner's location is to be expected in a relationship, new research has found.

The eSafety Commission study has linked location-sharing apps with an increased risk of coercive control. More than 13 per cent of the 2,000 adults surveyed said it was reasonable for a partner to monitor them using location-sharing apps like Life360 and Find My iPhone.

"It isn't an exaggeration to say preventing tech-based coercive control, and the attitudes that it condones, can save lives," eSafety Commissioner Julie Inman Grant said. "Young people have become so accustomed to certain levels of online tracking by family and friends … that it no longer raises red flags in a relationship."

Tech-based coercive control is defined as a pattern of behaviour that uses digital technology to control, manipulate and create fear in a current or former relationship. About a quarter of adults surveyed agreed it was necessary to have a partner's passwords and codes for their personal devices.

A similar study published by Griffith University earlier this year found location-sharing apps had become "normalised" as a way for parents to monitor their children. Apps like Life360 are marketed as a way to "protect your family with easy-to-use location tracking".

Over the last three years, Griffith University PhD candidate María Atiénzar Prieto has studied perceptions of digital coercive control. Her survey of more than 1,000 young people found the acceptance of location sharing "starts at home". "We found parents increasingly used tracking apps to monitor children and provide safety," she said.

The 2023 murder of Lilie James put tech-facilitated control in the headlines, when it was revealed last March that her killer had tracked her location through Snapmaps. The court heard her killer's behaviour was normalised as "coming from a place of kindness and love".

Ms Atiénzar Prieto found the technology was "the perfect tool for perpetrators of domestic violence". "Participants in my study discussed how the first time they were monitored was in an environment of care and love, and that was then being transferred to other relationships," she said. She said parents needed to explain that they were using these apps for safety, but that there were risks in using them outside the family.

According to the eSafety Commission, the average Australian household has about 40 interconnected devices. The commission wants more education to help young people understand digital coercion and consent, and for norms around respect for digital privacy in relationships to be reshaped.

"There needs to be conversations about setting digital boundaries and what consent looks like," Ms Inman Grant said. "We're not saying a couple in a relationship or family members shouldn't share their location online, but it's the expectation to track a romantic partner's movements which is problematic — even dangerous."

Ms Inman Grant is calling on companies to adopt more safety-by-design features and mindful engineering of apps to prevent misuse. "We had one situation where a young woman was being surveilled through her electronic cat-feeder," she said. "It was used by her former partner to track her movements in the house and to see who she was with. The thing that's so insidious about coercive control is that a very determined predator can always exploit the loopholes."

In serious cases, victim-survivors have had to begin new digital identities and accounts to stay safe.

Ashton Wood is the founder and chief executive of DV Safe Phone, which provides free phones to victim-survivors as part of their safety and escape plans. "Unfortunately, the only way of really knowing that you're not being tracked or monitored as a domestic violence victim is to have a device that the partner doesn't know about at all, so it doesn't even exist, as far as they're concerned," he said. "As technology gets better, perpetrators are always finding ways to weaponise it, and then the manufacturers and software providers find ways to lock it down, and then the perpetrators find a way around that."

DV Safe Phone has provided more than 12,000 free phones in the past five years and asks people to donate their old phones to help. The charity hands the fully reset phones out to police stations, safe houses, hospitals and domestic violence agencies to distribute. "Once they're out of danger, we generally recommend destroying the old phone, unless they need it for evidence, and then keep [the new] phone because that hasn't been compromised."

'Censorship': United States government slams Australia for 'coercing' tech companies

News.com.au · Politics · 08-05-2025

The United States has accused Australia of 'coercing' Elon Musk's social media platform, X, into 'censoring' free speech, as part of a broader complaint about foreign countries pressuring the tech giants.

At issue is a decision taken by Australia's eSafety Commissioner to require that X remove a post by Chris Elston, a Canadian campaigner against 'gender ideology'. Mr Elston is known online as 'Billboard Chris'.

The tweet in question, from February 2024, took aim at an Australian transgender activist, Teddy Cook, who had been appointed to an advisory panel at the World Health Organisation. Mr Elston misgendered Mr Cook, who identifies as male, and suggested global guidelines for dealing with trans issues were being written by 'people who belong in psychiatric wards'.

He later acknowledged it was 'not my nicest tweet ever', but insisted it was accurate. 'These kids are lost and confused and they're being lied to, but there are clearly psychiatric issues, and as per all the scientific evidence, the children who end up in these gender clinics are struggling with various mental health comorbidities,' Mr Elston said.

Mr Cook complained to the eSafety Commission, which issued a takedown request for the post, saying it 'deliberately' misgendered him in a way 'likely intended to invalidate and mock the complainant's gender identity'. X and Mr Elston both challenged that decision, prompting a more formal removal order. Mr Elston's tweet is now geoblocked in Australia, but remains visible overseas.

The Bureau of Democracy, Human Rights and Labour, which is housed in America's State Department, has cited the case as a reason to be 'concerned' about governments 'coercing' tech companies.

'The Department of State is deeply concerned about efforts by governments to coerce American tech companies into targeting individuals for censorship. Freedom of expression must be protected, online and offline,' the bureau said in a statement.

'Examples of this conduct are troublingly numerous. EU Commissioner Thierry Breton threatened X for hosting political speech; Turkiye fined Meta for refusing to restrict content about protests; and Australia required X to remove a post criticising an individual for promoting gender ideology.

'Even when content may be objectionable, censorship undermines democracy, suppresses political opponents, and degrades public safety.

'The United States opposes efforts to undermine freedom of expression. As (American Secretary of State Marco) Rubio said, our diplomacy will continue to place an emphasis on promoting fundamental freedoms.'

Here's a very brief summary of the two non-Australian examples. Mr Breton, a Frenchman who was, at the time, commissioner for the internal market of the European Union, was accused of going rogue when he sent a letter to Mr Musk threatening consequences if content on X placed European Union citizens at risk of 'serious harm'.

Turkey fined Meta, the owner of Facebook and Instagram, for resisting pressure from President Recep Tayyip Erdogan's government to restrict content from its political opponents. Meta said the content was 'clearly in the public interest'. 'Government requests to restrict speech online alongside threats to shut down online services are severe and have a chilling effect on people's ability to express themselves,' the company said in a statement at the time.

Back to Australia. Last month, the eSafety Commissioner and Mr Elston both testified before the Administrative Review Tribunal in Melbourne.

The Commissioner argued the X post was likely 'intended to have an effect of causing serious harm'. Mr Elston was asked why he had chosen to write the post. 'Because the World Health Organisation has global influence,' he said. 'We should have evidence-based care.'

Mr Elston is being supported by the Alliance Defending Freedom, a conservative American legal advocacy group, and Australia's Human Rights Law Alliance, which is associated with the Christian lobby.

'The decision of Australian authorities to prevent Australian citizens from hearing and evaluating information about gender ideology is a patronising affront to the principles of democracy,' said Robert Clarke, ADF International's director of advocacy.

'The confidence of the Australian eSafety commissioner to censor citizens of Canada on an American platform shows the truly global nature of the free speech crisis. Speaking up for free speech is critical at this juncture, and we're proud to be backing Billboard Chris as he does just that.'

Mr Elston himself has welcomed the US State Department's intervention. 'It's tremendous to have the State Department support what we all know is true: free speech is a fundamental right, critical to a democratic society,' he said. 'If our free speech can't be protected when we speak out against the greatest child abuse scandal in the world right now, when can it be?'

EXCLUSIVE: Marco Rubio cracks down on Australia for 'censoring' social media posts critical of trans orgy activist

Daily Mail · Business · 07-05-2025

The U.S. State Department is calling on foreign countries to end their online censorship practices against U.S. companies in an effort to preserve 'free expression.'

It comes as the Australian government is embroiled in a legal battle with Elon Musk's X social media platform and an advocate against transgender treatments for minors. The government there sought to censor the online activity of an American company and one of its users, placing a geo-based ban on those in the country viewing an X post because the user known as 'Billboard Chris' used the biological pronouns of a transgender individual and Australian citizen.

'I think what this does is put some pressure on the Australian government, because nobody likes to have the United States Government cracking down on them,' Chris Elston told the Daily Mail when asked about the State Department weighing in on the fight.

Last week the State Department's Bureau of Democracy, Human Rights, & Labor (DRL) called out Australia, the European Union, and Turkey for their recent actions suppressing online activity related to political speech, gender ideology and peaceful protests.

Lawyers for X and Elston, who is Canadian, legally challenged the takedown order by the Australian government's eSafety Commission and appeared last month for a five-day hearing at an administrative review tribunal in Melbourne. While the panel, originally created to take down nude images of minors, can control what people in Australia publish and see online, it is also allegedly seeking to police the online activity of the rest of the world when it relates to anything posted about Australians.

The post was geo-blocked from being viewed in Australia, and X was sent a takedown order by the country's eSafety Commission, which the platform is now fighting in Australian court. If someone tries to look at the post Elston made over a year ago, they will instead receive the message: 'This post from @BillboardChris has been withheld in Australia in response to a legal demand.'

The U.S. has said that the way other countries treat American companies could affect trade relationships, and Trump has already threatened or implemented mass tariff policies nationwide. 'They've made freedom of speech a critical component of their entire campaign and how they manage when they want other countries to do the same, it's a big issue for the Trump administration,' Elston told the Daily Mail. He added: 'So they're not going to be happy that other countries are doing petty censorship about people on the other side of the world.'

The DRL Bureau wrote in a statement on its X account on May 1: 'The Department of State is deeply concerned about efforts by governments to coerce American tech companies into targeting individuals for censorship.'

'Freedom of expression must be protected – online and offline,' the statement added. 'Examples of this conduct are troublingly numerous.'

'EU Commissioner Thierry Breton threatened X for hosting political speech; Türkiye fined Meta for refusing to restrict content about protests; and Australia required X to remove a post criticizing an individual for promoting gender ideology,' according to the statement.

'Even when content may be objectionable, censorship undermines democracy, suppresses political opponents, and degrades public safety. The United States opposes efforts to undermine freedom of expression.'

'As [Secretary Marco Rubio] said, our diplomacy will continue to place an emphasis on promoting fundamental freedoms,' it concluded.

The whole saga with Australia started last year when Elston reposted a Daily Mail article about an Australian transgender member of the United Nations trans health panel under the World Health Organization. This included posting images of Teddy Cook practicing public nudity and attending bondage parties and trans orgies. Cook filed a complaint with Australia's eSafety Commission against news outlets that published the images.

Elston reposted the article with the caption: 'This woman (yes, she's female) is part of a panel of 20 'experts' hired by the @WHO to draft their policy on caring for 'trans people.' People who belong in psychiatric wards are writing the guidelines for people who belong in psychiatric wards,' he added in the X post from February 2024.

Australia took issue with the fact that he reposted an article that 'doxxed' where one of its citizens worked, as well as the images, and that the post 'misgendered' the transgender WHO panel member. Elston detailed how lawyers with X informed him of Australia's takedown notice.

Elston, a Canadian citizen, has gone viral for holding signs in public spaces with messages slamming gender ideology and the implementation of transgender treatments on minors. He told the Daily Mail he was fined in Australia for the same actions he takes in other parts of the world.

The Alliance Defending Freedom (ADF) came forward to represent Elston in his defense in Australia. 'Everyone has the right to speak the truth online - especially when discussing significant societal debates,' ADF International legal counsel Robert Clarke told the Daily Mail. 'Australian authorities have significantly overreached by attempting to silence a Canadian campaigner on a U.S. social media platform,' he added. 'We're delighted to stand by Chris as he fights his legal battle against censorship, and welcome the State Department's affirmation that Australia, like all governments, must uphold free speech.'

The Australian eSafety Commissioner defended the decision to censor Elston in a tribunal last month by claiming his use of biological pronouns was 'intended to have an effect of causing serious harm'.
