
AI-generated child sex abuse videos 'now as lifelike as real footage'

STV News

11-07-2025

AI-generated videos of child sexual abuse have skyrocketed in number and are now 'indistinguishable' from real footage, a charity has warned.

The Internet Watch Foundation (IWF), which finds and helps remove abuse imagery online, said criminals were creating more realistic and more extreme sexual abuse content – and could soon be able to make and share feature-length films of the material.

Highly realistic videos of abuse are no longer confined to the short, glitch-filled clips that were previously common with the technology, with perpetrators now using AI to produce videos that often include the likenesses of real children on a large scale.

Some 1,286 individual AI-generated child sexual abuse videos were discovered in the first half of this year, according to new IWF data published on Friday. Only two such videos were discovered over the same period last year.

All of the confirmed videos so far in 2025 have been so convincing that they had to be treated under UK law exactly as if they were genuine footage, the IWF said. More than 1,000 of the videos were assessed as Category A imagery, the most extreme – which can include depictions of rape, sexual torture and bestiality.

The data also showed that AI-generated child sexual abuse imagery was discovered on 210 separate webpages in the first half of this year, compared with 42 webpages in 2024, while confirmed reports of the images to the charity had risen by 400%. Each webpage can contain multiple images or videos.

The figures come after the IWF previously said 291,273 reports of child sexual abuse imagery were made last year.

The charity has called on the Government to ensure the safe development and use of AI models by introducing binding regulation that ensures the technology's design cannot be abused.
Derek Ray-Hill, interim chief executive of the IWF, said: 'We must do all we can to prevent a flood of synthetic and partially synthetic content joining the already record quantities of child sexual abuse we are battling online.

'I am dismayed to see the technology continues to develop at pace, and that it continues to be abused in new and unsettling ways.

'Just as we saw with still images, AI videos of child sexual abuse have now reached the point they can be indistinguishable from genuine films.

'The children being depicted are often real and recognisable, the harm this material does is real, and the threat it poses threatens to escalate even further.'

Mr Ray-Hill said the Government 'must get a grip' on the issue, as it was currently 'just too easy' for criminals to produce the videos, and that feature-length AI-generated child sexual abuse films of real children were inevitable.

He added: 'The Prime Minister only recently pledged that the Government will ensure tech can create a better future for children. Any delays only set back efforts to safeguard children and deliver on the Government's pledge to halve violence against girls.

'Our analysts tell us nearly all this AI abuse imagery features girls. It is clear this is yet another way girls are being targeted and endangered online.'

An anonymous senior analyst at the IWF said the creators of AI child sexual abuse imagery had reached a video quality 'leaps and bounds ahead' of what was available last year.

'The first AI child sexual abuse videos we saw were deepfakes – a known victim's face put onto an actor in an existing adult pornographic video. It wasn't sophisticated but could still be pretty convincing,' he said.

'The first fully synthetic child sexual abuse video we saw at the beginning of last year was just a series of jerky images put together, nothing convincing.

'But now they have really turned a corner.
'The quality is alarmingly high, and the categories of offence depicted are becoming more extreme as the tools improve in their ability to generate video showing two or more people.

'The videos also include sets showing known victims in new scenarios.'

The IWF has advised the public to report images and videos of child sexual abuse to the charity anonymously and only once, including the exact URL where the content is located.

Safeguarding minister Jess Phillips said: 'These statistics are utterly horrific. Those who commit these crimes are just as disgusting as those who pose a threat to children in real life.

'AI-generated child sexual abuse material is a serious crime, which is why we have introduced two new laws to crack down on this vile material.

'Soon, perpetrators who own the tools that generate the material or manuals teaching them to manipulate legitimate AI tools will face longer jail sentences, and we will continue to work with regulators to protect more children.'

IWF: 291,273 reports of child sexual abuse imagery reported in 2024

Observer

23-04-2025

Record levels of web pages hosting child sexual abuse imagery were discovered in 2024, the Internet Watch Foundation (IWF), a UK online safety organization, has said.

The IWF, which finds and helps remove abuse imagery online, said 291,273 reports of child sexual abuse imagery were made in 2024. In its annual report, the organization said rising numbers of cases were being driven by threats including AI-generated sexual abuse content, sextortion and the malicious sharing of nude or sexual imagery. It said under-18s were now facing a crisis of sexual exploitation and risk online.

In response, the IWF announced it was making a new safety tool available to smaller websites for free, to help them spot and prevent the spread of abuse material on their platforms. The tool, known as Image Intercept, can spot and block images in the IWF's database of more than 2.8 million which have been digitally marked as criminal imagery.

The IWF said it will give wide swathes of the internet new, 24-hour protection, and help smaller firms comply with the Online Safety Act. The Online Safety Act began coming into effect last month, and requires platforms to follow new codes of practice, set by the regulator Ofcom, in order to keep users safe online.

Derek Ray-Hill, interim chief executive at the IWF, said: "Young people are facing rising threats online where they risk sexual exploitation, and where images and videos of that exploitation can spread like wildfire."

"New threats like AI and sexually coerced extortion are only making things more dangerous," he said, adding that many platforms do not have the resources to protect their sites against people who deliberately upload child sexual abuse material.

"That is why we have taken the initiative to help these operators create safer online spaces by providing a free-of-charge hash checking service that will identify known criminal content," Ray-Hill stated.
"This is a major moment in online safety, and anyone with an online platform where users can upload content now has the chance to join the fight and help deliver the aspirations of the Online Safety Act," he added.

The IWF said Image Intercept had received funding support from the Home Office to aid with its creation.

UK Technology Secretary Peter Kyle said: "Visiting the IWF earlier this year was one of the most shocking and moving days I have experienced as Technology Secretary."

"I saw first hand the scale and sinister methods criminals are using to prey on young people, often beginning in what should be the safest place for a child - their bedroom," he added. "During my visit I also saw the extraordinary dedication of the IWF teams working daily to protect children from further harm and meet these new threats."

The IWF's annual report also showed that in 97% of the incidents where the victim's sex was recorded, the material showed the sexual abuse of girls only - which the IWF said was a sharp increase since 2023.

Jess Phillips, UK minister for safeguarding and violence against women and girls, said the new Image Intercept tool would be "vital" for protecting children.

"The IWF's latest findings are deeply disturbing, and show how urgent it is for us to tackle the growth of online child sexual abuse and intensify our efforts to protect children from these heinous crimes," she stated.

"Their Image Intercept initiative, funded by the Home Office, will be vital in helping to stop the re-victimisation of children who have been exploited online," she said, adding that technology platforms must also be held accountable. "If they are to be safe places for our children, they must invest in technologies that block this harmful content and stop predators being able to access and groom children online."
"This Government is going further by introducing new measures so anyone who possesses an AI tool designed to create illicit images or owns manuals teaching them how to do so will rightly face time behind bars," she added.
