YouTube and Apple among those still 'turning a blind eye' to child abuse material, eSafety commissioner finds

In a report released on Wednesday, the eSafety Commissioner said YouTube and Apple failed to track the number of user reports they received of child sexual abuse appearing on their platforms, and could not say how long they took to respond to such reports.
The Australian government decided last week to include YouTube in its world-first social media ban for teenagers, following eSafety's advice to overturn a planned exemption for the video-sharing site, which is owned by Alphabet's Google.
'When left to their own devices, these companies aren't prioritising the protection of children and are seemingly turning a blind eye to crimes occurring on their services,' eSafety Commissioner Julie Inman Grant said in a statement.
'No other consumer-facing industry would be given the licence to operate by enabling such heinous crimes against children on their premises, or services.'
A Google spokesperson said 'eSafety's comments are rooted in reporting metrics, not online safety performance', adding that YouTube's systems proactively removed more than 99pc of abusive content before it was flagged or viewed.
'Our focus remains on outcomes and detecting and removing (child sexual exploitation and abuse) on YouTube,' the spokesperson said in a statement.
Meta - owner of Facebook, Instagram and Threads, three of the biggest platforms with more than three billion users worldwide - has said it prohibits graphic videos.
The eSafety Commissioner, an office set up to protect internet users, has required Apple, Discord, Google, Meta, Microsoft, Skype, Snap and WhatsApp to report on the measures they take to address child exploitation and abuse material in Australia.
The report on their responses so far found a 'range of safety deficiencies on their services which increases the risk that child sexual exploitation and abuse material and activity appear on the services'.
Safety gaps included failures to detect and prevent livestreaming of the material or block links to known child abuse material, as well as inadequate reporting mechanisms.
It said platforms were also not using 'hash-matching' technology on all parts of their services to identify images of child sexual abuse by checking them against a database. Google has said before that its anti-abuse measures include hash-matching technology and artificial intelligence.
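
For readers unfamiliar with the term, hash-matching compares a digital fingerprint of an uploaded file against a database of fingerprints of material that has already been verified and flagged. The snippet below is a minimal, generic sketch of that idea only - the hash list, file paths and function names are hypothetical, and real systems typically use perceptual hashes that survive resizing and re-encoding, rather than the plain cryptographic hash shown here.

```python
import hashlib

# Hypothetical database of hashes of previously verified, flagged images.
# In practice this would be an industry-shared hash list, not a hard-coded set.
KNOWN_FLAGGED_HASHES = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}


def file_sha256(path: str) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()


def is_known_flagged(path: str) -> bool:
    """Check an uploaded file's fingerprint against the known-hash database."""
    return file_sha256(path) in KNOWN_FLAGGED_HASHES
```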
The Australian regulator said some providers had not made improvements to address these safety gaps on their services despite it putting them on notice in previous years.
'In the case of Apple services and Google's YouTube, they didn't even answer our questions about how many user reports they received about child sexual abuse on their services or details of how many trust and safety personnel Apple and Google have on-staff,' Inman Grant said.
Related Articles

Steve Dempsey: Big Tech bleating about EU AI rules little more than a fear of a basic level of oversight and respect for copyright

Irish Independent · 2 hours ago

At the heart of much of the discussion here is whether the need for AI innovation trumps existing copyright laws. The US sees itself in an AI race with China, while Europe has been more focused on protecting citizens and existing rights. The European Commission recently published implementation guidelines relating to the EU AI Act. These include details of legal obligations for the safe use of AI, copyright protections for creators, and transparency rules around how AI models are trained. As Europe has a track record of creating de facto rules for the West around tech legislation, it's worth understanding how these implementation guidelines have been greeted.

Last week a consortium that represents rights holders from across the media, music, film and TV, books, publishing and art worlds came out against the AI guidelines. Ironically, there isn't a creative name among the host of acronyms representing the creative industries. There's AEPO-ARTIS, BIEM, CISAC, ECSA, FIM, GESAC, ICMP, IMPALA and more. Their point is clear, though. In an open letter, they claim that the European Commission's official guidance on the copyright and transparency obligations contained in the EU AI Act favours tech companies over creators and copyright owners. Their concern is that the new AI regulations will solely benefit the AI companies that scrape their copyrighted content without permission to build and train models.

The letter says: 'We are contending with the seriously detrimental situation of generative AI companies taking our content without authorisation on an industrial scale in order to develop their AI models. Their actions result in illegal commercial gains and unfair competitive advantages for their AI models, services, and products, in violation of European copyright laws.'

Big tech, which seems to have more lobbying muscle than coding muscle these days, is not presenting a unified front. Google has said it will sign the EU's AI code of practice but warned that the Act and the Code could make Europe an AI laggard. Kent Walker, president of global affairs and chief legal officer at Google's parent company Alphabet, ominously warned: 'Departures from EU copyright law, steps that slow approvals, or requirements that expose trade secrets could chill European model development and deployment, harming Europe's competitiveness.' OpenAI and the French artificial intelligence company Mistral are also on board, and Microsoft will more than likely sign too.

But Meta, Facebook's parent company, is against the code. It believes the code introduces a number of legal uncertainties for model developers and measures that go beyond the scope of the AI Act. Like Google, Meta is warning that this will throttle the development and deployment of frontier AI models in Europe, and stunt European companies looking to build businesses on top of them. Facebook knows all about how to use FOMO. And it's working. There's been another open letter, this time from the chief executives of large European companies, including Airbus and BNP Paribas, urging a two-year pause by Brussels and warning that unclear and overlapping regulations were threatening the bloc's competitiveness in the global AI race.

With all these talking heads, commercial imperatives and the AI hype cycle, it's easy to forget what all this hot air is about. The issue here is Article 53 of the AI Act, which introduces transparency into the heart of general-purpose AI model deployment.
This article stipulates that AI providers must create and maintain detailed technical documentation covering the AI model's design, development, training data, evaluation, testing, intended tasks, architecture, licensing and energy metrics. All of this must be available to the EU AI Office and national authorities on request, and it must also be available in relation to any downstream systems that integrate the model in question. Article 53 also requires model providers to adhere to EU copyright law and to publish a detailed public summary of the training data used, which aims to shed light on datasets, sources and the potential inclusion of copyrighted material.

So really, all this quibbling boils down to a level of transparency, societal oversight and a respect for copyright. It's understandable that technology companies are bristling. China isn't tying itself up in this level of bureaucracy, right? The EU's record with tech regulation, such as the GDPR, has often set up roadblocks for users rather than truly protecting privacy. And there's a significant opportunity cost to complying with this level of oversight. How is big tech supposed to move fast and break things with European technocrats looking over their shoulders?

But then again, maybe that's the point. When it comes to a technology that might take all our jobs or wipe us all out – depending on who you talk to – maybe a bit of technocratic oversight isn't a bad thing? We know from recent history what happens if Silicon Valley's needs are put ahead of society's. Perhaps the artists and creators who have warned against favouring big tech capital over copyright aren't just protecting their own livelihoods. They're doing us all a favour.
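
To make the column's description of Article 53 concrete, here is a purely illustrative sketch of the categories of documentation it expects general-purpose AI providers to maintain. The class, field names and example values are this sketch's own, not the EU's official template or any company's real disclosure.

```python
from dataclasses import dataclass, field


@dataclass
class ModelDocumentation:
    """Illustrative container for the documentation duties described above."""
    model_name: str
    architecture: str                  # model family, parameter count, design notes
    intended_tasks: list[str]          # tasks the model is designed and tested for
    training_data_summary: str         # detailed public summary of datasets and sources
    copyright_compliance: str          # how EU copyright law and opt-outs are respected
    evaluation_and_testing: str        # benchmarks, red-teaming, known limitations
    licensing: str                     # terms for downstream systems integrating the model
    energy_metrics: str                # training and inference energy consumption
    downstream_integrations: list[str] = field(default_factory=list)


# Hypothetical example of what a provider might hand to the EU AI Office on request.
example = ModelDocumentation(
    model_name="example-gpai-model",
    architecture="decoder-only transformer",
    intended_tasks=["text generation", "summarisation"],
    training_data_summary="Web crawl plus licensed archives; summary published",
    copyright_compliance="Honours machine-readable opt-outs under EU copyright law",
    evaluation_and_testing="Public benchmarks plus internal red-teaming",
    licensing="Proprietary, API access for downstream systems",
    energy_metrics="Estimated training energy disclosed on request",
)
```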

I tested Google Veo 3 - this AI video generation tool is scarily good

Irish Daily Mirror · a day ago

Google AI Pro recently introduced one of its most exciting features - the advanced video-creation tool Veo 3. The monthly subscription service costs €22 in Ireland and for that you get 1,000 AI credits (to make videos) along with other features such as 2TB of Google Storage and Nest security camera storage. Veo 3 creates shareable eight-second video clips from just text prompts, and these clips include ambient audio, sound effects and on-screen dialogue.

Each video clip takes one to two minutes to be generated, and in my experience the more detail you include in your instructions, the better the resulting video will be. You can be specific about the accompanying soundtrack too. One caveat is Google's strict rules, which forbid offensive content and deepfake videos of known celebrities or politicians.

Occasionally Veo 3 makes mistakes - I asked for dolphins leaping from the sea in one clip, but they looked like they were jumping from the sandy shore instead. Such errors were rare in my testing, though, and overall I am super impressed by the ultra-realistic cinematic content that Veo 3 can create just from text prompts. It's so good already that it is scary - imagine where this tech will be a couple of years from now.

One downside is that the clips are currently only created in a 16:9 aspect ratio, which is not ideal for social media, and they are limited to standard HD resolution rather than full HD or 4K. You can download your clips as shareable mp4 files. I would guess that Image to Video will be added to Veo 3 soon, as this video-generation party trick, which works on any photo in your library, has already been introduced on Honor's 400 Pro smartphone. If you want to try Google AI Pro before you buy, there's a one-month free trial available.

Meanwhile, Google recently confirmed the exact date of its next Made by Google event in Brooklyn, New York on August 20, 2025, and it has told us it will 'introduce the latest additions to our Pixel Portfolio of devices'. The firm also unveiled a promo video which shows one of the three expected (but not confirmed) non-folding phones - Pixel 10, Pixel 10 Pro and Pixel 10 Pro XL. Google is also expected to deliver a new foldable handset following the huge sales success and widespread acclaim of the Pixel 9 Pro Fold. It will have to pull out all the stops to beat the almost universal praise already heaped on Samsung's Galaxy Z Fold 7, which recently went on sale in Ireland.

Britain's most stolen phones in 2025 revealed – with the SAME brand being a top choice for thieves

The Irish Sun · a day ago

Britain's most stolen phones of 2025 have been revealed, with the same brand being a top choice for thieves. It comes amid a phone theft epidemic in the UK, and figures show that phone thefts in Britain peak over the summer months, coinciding with travel, festivals and shopping.

Nearly two in five mobiles stolen across Europe are taken in the UK, despite the country making up less than 10 per cent of European customers. The figures come from SquareTrade, an American company that offers gadget insurance across Europe, and they tell a shocking story, with a 425 per cent increase in thefts since June 2021. The latest figures from the Crime Survey for England and Wales also show that "theft from the person" increased by 50 per cent in 2024.

Topping the list of most stolen phones is the iPhone 15 Pro Max. Released in 2023, it was described by The Sun at the time as the "new crowning jewel" of Apple's smartphone empire. It was the first time the tech giant switched to a titanium design compared to a stainless steel frame. The device also came with an Action Button for the first time, after ditching the Mute Switch, as well as the super-fast A17 Pro chip. It was also fitted with a 48-megapixel quad-pixel camera that delivers extremely detailed photos, with particular improvements to low-light photography as well as an increased optical zoom. And despite being nearly two years old, the device is still sold on the official Apple store from £800, down from £1,199 when it was initially released.

With 80 per cent of stolen devices in the UK being iPhones, it's no surprise that second on the list is the iPhone 16 Pro Max. It is the newest Apple model on the market, and The Sun tried the iPhone 16 for a week in September 2024. Out of the new iPhone 16 line-up, the Pro Max easily came out on top. It has the biggest screen of any iPhone ever, with a whopping 6.9-inch display, and can currently be purchased from the Apple store for £1,199. For such a lofty sum of money, users get an upgraded A18 Pro chip which powers a whole host of behind-the-scenes AI features. And there's a new 48-megapixel Fusion camera that's capable of shooting 4K video in Dolby Vision at a staggeringly high 120 frames per second. But the thing that impressed most by far was the ludicrous battery life, with the workhorse device boasting 33 hours of video playback before it runs out.

Coming in at third place on the list is the Samsung S24 Ultra, released at the beginning of 2024. Heavy on AI, it even features the technology in its numerous high-quality cameras. The device's stunning night-defying camera, hefty battery and solid premium style beat the iPhone in our review of the phone. This is perhaps reflected in its £1,250 price tag at the time of release.

It's not surprising that the three most stolen devices all cost more than £1,000. But beyond the cost of the device, criminals are also targeting phones for access to sensitive data, including banking apps, crypto wallets and personal accounts. London also lies at the centre of the phone theft epidemic, according to the insurance claims data. Just this week, giant signs were painted on Oxford Street to warn Londoners to get off their phones amid record-high snatches.
