
AI-generated images of child sexual abuse are flooding the Internet
Over the past two years, new AI technologies have made it easier for criminals to create explicit images and videos of children. Now, researchers at organisations including the Internet Watch Foundation and the National Center for Missing & Exploited Children are warning of a surge of new material this year that is nearly indistinguishable from actual abuse.
New data released July 10 from the Internet Watch Foundation, a British nonprofit that investigates and collects reports of child sexual abuse imagery, identified 1,286 AI-generated videos of child sexual abuse so far this year globally, compared with just two in the first half of 2024.
The videos have become smoother and more detailed, the organisation's analysts said, because of improvements in the technology and collaboration among groups on hard-to-reach parts of the Internet called the dark web to produce them.
The rise of lifelike videos adds to an explosion of AI-produced child sexual abuse material, or CSAM. In the United States, the National Center for Missing & Exploited Children said it had received 485,000 reports of AI-generated CSAM, including stills and videos, in the first half of the year, compared with 67,000 for all of 2024.
'It's a canary in the coal mine,' said Derek Ray-Hill, interim CEO of the Internet Watch Foundation. The AI-generated content can contain images of real children alongside fake images, he said, adding, 'There is an absolute tsunami we are seeing.'
The deluge of AI material threatens to make law enforcement's job even harder. While AI-generated material is still a tiny fraction of the total child sexual abuse material found online, for which reports number in the millions, the police have been inundated with requests to investigate AI-generated images, drawing resources away from their pursuit of those directly engaging in child abuse.
Law enforcement authorities say federal laws against child sexual abuse material and obscenity cover AI-generated images, including content that is wholly created by the technology and does not contain real images of children.
Beyond federal statutes, state legislators have also raced to criminalise AI-generated depictions of child sexual abuse, enacting more than three dozen state laws in recent years.
But courts are only just beginning to grapple with the legal implications, legal experts said.
The new technology stems from generative AI, which exploded onto the scene with OpenAI's introduction of ChatGPT in 2022. Soon after, companies introduced AI image and video generators, prompting law enforcement and child safety groups to warn about safety issues.
Much of the new AI content includes real imagery of child sexual abuse that is reused in new videos and still images. Some of the material uses photos of children scraped from school websites and social media. Images are typically shared among users in forums, via messaging on social media and other online platforms.
In December 2023, researchers at the Stanford Internet Observatory found hundreds of examples of child sexual abuse material in a dataset used to train an early version of the image generator Stable Diffusion. Stability AI, which runs Stable Diffusion, said it was not involved in training the model studied by Stanford. It said an outside company had developed that version before Stability AI took over exclusive development of the image generator.
Only in recent months have AI tools become good enough to trick the human eye with an image or video, avoiding some of the previous giveaways like too many fingers on a hand, blurry backgrounds or jerky transitions between video frames.
The Internet Watch Foundation found examples last month of individuals on an underground web forum praising the latest technology, remarking on how realistic a new cache of AI-generated child sexual abuse videos was. They pointed out how smoothly the videos ran, how they contained detailed backgrounds with paintings on walls and furniture, and how they depicted multiple individuals engaged in violent and illegal acts against minors.
About 35 tech companies now report AI-generated images of child sexual abuse to the National Center for Missing & Exploited Children, said John Shehan, a senior official with the group, although some are uneven in their approach. The companies filing the most reports typically are more proactive in finding and reporting images of child sexual abuse, he said.
Amazon, which offers AI tools via its cloud computing service, reported 380,000 incidents of AI-generated child sexual abuse material in the first half of the year, which it took down. OpenAI reported 75,000 cases. Stability AI reported fewer than 30.
Stability AI said it had introduced safeguards to enhance its safety standards and 'is deeply committed to preventing the misuse of our technology, particularly in the creation and dissemination of harmful content, including CSAM.'
Amazon and OpenAI, when asked to comment, pointed to reports they posted online that explained their efforts to detect and report child sexual abuse material.
Some criminal networks are using AI to create sexually explicit images of minors and then blackmail the children, said a Department of Justice official, who requested anonymity to discuss private investigations. Other children use apps that take images of real people and disrobe them, creating what is known as a deepfake nude.
Although sexual abuse images containing real children are clearly illegal, the law is still evolving on materials generated fully by artificial intelligence, some legal scholars said.
In March, a Wisconsin man who was accused by the Justice Department of illegally creating, distributing and possessing fully synthetic images of child sexual abuse successfully challenged one of the charges against him on First Amendment grounds. Judge James Peterson of US District Court for the Western District of Wisconsin said that 'the First Amendment generally protects the right to possess obscene material in the home' so long as it isn't 'actual child pornography.'
But the trial will move forward on the other charges, which relate to the production and distribution of 13,000 images created with an image generator. The man tried to share images with a minor on Instagram, which reported him, according to federal prosecutors.
'The Department of Justice views all forms of AI-generated CSAM as a serious and emerging threat,' said Matt Galeotti, head of the Justice Department's criminal division. – ©2025 The New York Times Company
This article originally appeared in The New York Times.