17-07-2025
Calls to criminalise possession and use of AI tools that create child abuse material
Child safety advocates have met at Parliament House to confront the challenge of protecting children in the age of artificial intelligence. From the rising prevalence of deepfake imagery to the availability of so-called nudify apps, the use of AI to sexually exploit children is growing rapidly - and there is concern Australian laws are falling behind.
The roundtable was convened by the International Centre for Missing and Exploited Children Australia, or ICMEC.
CEO Colm Gannon says Australia's current child protection framework, introduced in 2021, fails to address the threat of AI - and he is calling on the government to make it a priority.
"(By) bringing it into the national framework that was brought around in 2021, a 10 year framework that doesn't mention AI. We're working to hopefully develop solutions for government to bring child safety in the age of AI at the forefront." Earlier this year, the United Kingdom became the first country in the world to introduce A-I sexual abuse offences to protect children from predators generating images with artificial intelligence.
Colm Gannon is leading calls for similar legislation in Australia.
"What we need to do is look at legislation that's going to be comprehensive - comprehensive for protection, comprehensive for enforcement and comprehensive to actually be technology neutral."
Former Australian of the Year Grace Tame says online child abuse needs to be addressed at a society-wide level.
"Perpetrators are not just grooming their individual victims, they're grooming their entire environments to create a regime of control in which abuse can operate in plain sight. And there are ways, through education - aimed not just at children and the people who work with children but at the entire community - to identify the precipitating behaviours that underpin the contact offending framework. It's things like how offenders target victims, and which victims they are targeting specifically."
In 2023, intelligence company Graphika reported that synthetic, non-consensual intimate imagery was becoming more widespread - moving from niche internet forums to an automated and scaled online business. It found that 34 such image providers received more than 24 million unique visitors to their websites, while links to these services increased on platforms including Reddit and X.
As part of Task Force Argos, former police officer Professor Jon Rouse pioneered Australia's first proactive operations against internet child sex offenders, and has also chaired INTERPOL's Covert Internet Investigations Group. Now working with Childlight Australia, he says big tech providers like Apple have a responsibility to build safety controls into their products.
"Apple has got accountability here as well. They just put things on the App Store, they get money every time somebody downloads it, but there's no regulation around this - you create an app, you put it on the App Store. Who checks and balances the damage that's going to cause? No-one. The tragedy is we're at a point now that we have to ban our kids from social media, because we can't rely on any sector of the industry to ban our kids. Which is pretty sad."
Apple has not responded to a request for comment.
But in 2021 the company announced new features designed to help keep young people safe - such as sending warnings when children receive, or attempt to send, images or videos containing nudity.
While AI can be misused, it can also be used to detect grooming behaviour and child sexual abuse material. Colm Gannon from ICMEC says these opportunities need to be harnessed.
"The other thing that we want to do is use law enforcement as a tool to help identify victims. There is technology out there that can assist in rapid and easy access to victim ID and what's happening at the moment is law enforcement are not able to use that technology."