
Tech giants fail to tackle heinous crimes against kids
Tech giants are failing to track reports of online child sexual abuse despite figures suggesting more than 16 million photos and videos were found on their platforms.
An eSafety report has revealed that Apple, Google, Meta, Microsoft, Discord, WhatsApp, Snapchat and Skype aren't doing enough to crack down on online child sexual abuse despite repeated calls for action.
It comes three years after the Australian watchdog found the platforms weren't proactively detecting stored abuse material or using measures to find live-streams of child harm.
"While there are a couple of bright spots, basically, most of the big ones are not lifting their game when it comes to the most heinous crimes against children," Commissioner Julie Inman Grant told ABC radio on Wednesday.
The latest report revealed Apple and Google's YouTube weren't tracking the number of user reports about child sexual abuse, nor could they say how long it took to respond to the allegations.
The companies also didn't provide the number of trust and safety staff.
The US National Center for Missing & Exploited Children suggests there were tip-offs about more than 18 million unique images and eight million videos of online sexual abuse in 2022.
"What worries me is when companies say, 'We can't tell you how many reports we've received' ... that's bollocks, they've got the technology," Ms Inman Grant said.
"What's happening is we're seeing a winding back of content moderation and trust and safety policies and an evisceration of trust and safety teams, so they're de-investing rather than re-upping."
It comes as YouTube has argued against being included in the social media ban for under-16s, on the basis that it is not a social media platform but is often used as an educational resource.
The watchdog commissioner had recommended YouTube be included based on research that showed children were exposed to harmful content on the platform more than on any other.
Meanwhile, other findings in the new report include that none of the giants had deployed tools to detect child sexual exploitation livestreaming on their services, three years after the watchdog first raised the alarm.
A tool called hash matching, which allows copies of previously identified sexual abuse material to be detected across platforms by comparing digital fingerprints against databases of known content, wasn't being used by most of the companies, which were also failing to deploy tools to detect grooming or sexual extortion.
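The hash-matching idea mentioned above can be illustrated with a minimal sketch. This example uses an exact cryptographic hash (SHA-256) purely for illustration; real deployments such as Microsoft's PhotoDNA or Meta's PDQ use perceptual hashes that survive resizing and re-encoding, and the hash list and file contents here are invented placeholders, not real data.

```python
import hashlib

# Illustrative set of fingerprints of previously identified material,
# standing in for the shared industry hash databases the article describes.
known_hashes = {
    hashlib.sha256(b"example-known-file").hexdigest(),
}

def matches_known_material(file_bytes: bytes) -> bool:
    """Return True if the file's digest appears in the known-hash set."""
    return hashlib.sha256(file_bytes).hexdigest() in known_hashes

print(matches_known_material(b"example-known-file"))  # True: fingerprint is in the database
print(matches_known_material(b"some-new-upload"))     # False: no match, so not flagged
```

The key design point is that platforms never need to exchange the images themselves, only the fingerprints, which is why a hash list built on one service can detect re-uploads on another.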
There were a few positive improvements with Discord, Microsoft and WhatsApp generally increasing hash-matching tools and the number of sources to inform that technology.
"While we welcome these improvements, more can and should be done," Ms Inman Grant said.
The report stems from legislation passed last year that empowers the watchdog to issue periodic transparency notices, requiring the companies to report every six months for two years on how they are tackling child sexual abuse material.
The second report will be available in early 2026.
