
Latest news with #paedophiles

Children's video game branded ‘perfect place for paedophiles'

Telegraph

2 days ago

A video game played by millions of children in the UK is the 'perfect place for paedophiles' and 'fails to implement basic safety controls', a US state lawsuit has claimed.

Roblox has become a 'breeding ground for sex predators' where child abusers 'thrive, unite, hunt and victimise our kids', according to Liz Murrill, the Republican attorney general of Louisiana. The lawsuit alleged that 'far from creating a safe place for children' Roblox, which has an age rating of seven in the UK, had 'provided a perfect place for paedophiles' that 'lacks safety features to protect children from predators and lacks warnings for parents and child users'.

The video game, which is played by an estimated 61pc of British children aged between eight and 14, allows children to build virtual worlds and customised games that they can share with others. It has more than 380 million players globally, including children under the age of 13, and is one of the world's most popular video games among children. It has almost 5 million players in the UK alone.

However, Roblox – which can be played on consoles, PCs and smartphones – has long faced claims that abusers have targeted the game to attempt to contact children. According to the lawsuit, for many years most of Roblox's parental controls within the game were off by default, even when the account was linked to a child. Child users could easily chat to others with no parental oversight and potentially be directed away from the game onto other messaging apps. Its settings allowed adults to 'easily communicate with children ... creating a virtual world where predators can freely target and groom children', the lawsuit claimed.

Children were also able to access games with adult themes, according to a report by an investment firm last year. These included user-generated games such as 'Escape to Epstein Island'.

Roblox, which was founded in 2004 and is now valued at more than $80bn (£59bn), tightened up many of its parental settings in November last year after a series of reports detailed instances of predators targeting children in the game. Children under the age of 13 now need parental permission to access most of its messaging features, and parents can add screen time limits or monitor friend lists.

The Louisiana lawsuit alleged that, in July, a man suspected of possessing child-abuse material was arrested while 'actively using' Roblox and a 'voice-altering technology designed to mimic the voice of a young female'.

A Roblox spokesman said: 'We can't comment on pending litigation. But we dedicate substantial resources, including advanced technology and 24/7 human moderation, to help detect and prevent inappropriate content and behaviour, including attempts to direct users off platform, where safety standards and moderation may be less stringent than ours.

'While no system is perfect, Roblox has implemented rigorous technology and enforcement safeguards, including restrictions on sharing personal information, links, and user-to-user image sharing. The safety of our community is a top priority.'

In July, Roblox added additional age verification checks in the UK in response to the Online Safety Act. These require users to undergo an age estimation check to access additional online chat features.

The Online Safety Act censors dissent, while letting paedophiles roam free

Telegraph

3 days ago

Outrage over the Online Safety Act's age-verification mandates has tended thus far to focus on the amusing, if frustrating, hurdles the law has thrown up for ordinary citizens: issues playing music on streaming services, trouble ordering pizzas and the possibility of Wikipedia going offline for huge chunks of the population. But missing from the discussion is a key point: the Act will fail to protect children because it misdirects attention away from the real problem, which is that there is far too little investigating, charging, prosecuting and convicting of actual paedophiles and child pornographers.

I'm not talking about the grooming gangs scandal, though of course that is germane to the discussion. The Online Safety Act has forced censorship of tweets about that particular vile series of incidents, including information that might better equip today's potential victims to spot grooming behaviour and ultimately stay free of gangs' snares. I'm talking about actual failures to investigate, charge, prosecute and convict those involved in creating, selling and sharing child sex abuse material, where the supposed big, bad guys in the room – the tech companies – have actually alerted the authorities and given them the information they need to arrest abusers and child pornographers.

Few policymakers, let alone laymen, are aware that tech giants – the overwhelming majority of which are US incorporated – are required by US law to report instances of child sexual abuse material (CSAM) to the National Center for Missing & Exploited Children's (NCMEC) CyberTipline. Still fewer are likely aware that when a report is made, 'geographic indicators related to the upload location of the CSAM are used to make the report available to the appropriate law enforcement'.

This is what that means in practice: the International Centre for Missing & Exploited Children reported that there were 178,648 UK cyber tips made in 2023, overwhelmingly because of reporting by Big Tech firms. Yet Home Office data indicates only 39,640 child sexual abuse image offences in England and Wales in 2023-24. That's a small fraction of the volume of CSAM reports made through the CyberTipline.

It is true that an apples-to-apples comparison is not 100 per cent feasible. Data for Scotland and Northern Ireland is not included in that 39,640 figure. British Transport Police report their data separately from the Home Office. NCMEC compiles its data by calendar year, whereas Home Office data is compiled over a fiscal year. There are a few other wrinkles, too. But the bottom line is that only a small proportion of probable child-porn offences in Britain are being investigated by law enforcement, despite tech companies having reported them. If those crimes are not investigated, the criminals responsible will never be charged, let alone prosecuted, convicted or imprisoned. And that is a huge problem that no amount of social media regulation will ever fix.

What is being done about it? As things stand, the Home Office budget is set to decline by 2.6 per cent by 2028-29. And that decline comes on top of an already anticipated £1.2bn shortfall in police funding, according to the National Police Chiefs' Council. The fiscal picture in Britain looks increasingly bleak, and it's hard to believe that Rachel Reeves is going to conjure up more money for policing rather than pressing further cuts.

It's a mathematical conundrum above most of our pay grades to sort through, but sort through it she must. Abused kids are counting on her doing so. But as societies – whether in Britain, the EU, the US or globally – we also need to hold the right actors to account and place our focus squarely where it should sit: on law enforcement, on the politicians who determine how much money to allocate to it, and on the policies our leaders require police to adhere to in keeping our kids safe. As of right now, that means actually busting child predators, not engaging in misdirection targeting tech firms.

AI-generated child sexual abuse videos surging online, watchdog says

The Guardian

10-07-2025

The number of videos online of child sexual abuse generated by artificial intelligence has surged as paedophiles have pounced on developments in the technology. The Internet Watch Foundation said AI videos of abuse had 'crossed the threshold' of being near-indistinguishable from 'real imagery' and had sharply increased in prevalence online this year.

In the first six months of 2025, the UK-based internet safety watchdog verified 1,286 AI-made videos with child sexual abuse material (CSAM) that broke the law, compared with two in the same period last year. The IWF said just over 1,000 of the videos featured category A abuse, the classification for the most severe type of material.

The organisation said the multibillion-dollar investment spree in AI was producing widely available video-generation models that were being manipulated by paedophiles. 'It is a very competitive industry. Lots of money is going into it, so unfortunately there is a lot of choice for perpetrators,' said one IWF analyst.

The videos were found as part of a 400% increase in URLs featuring AI-made child sexual abuse in the first six months of 2025. The IWF received reports of 210 such URLs, compared with 42 last year, with each webpage featuring hundreds of images, including the surge in video content.

The IWF saw one post on a dark web forum where a paedophile referred to the speed of improvements in AI, saying how they had mastered one AI tool only for 'something new and better to come along'. IWF analysts said the images appeared to have been created by taking a freely available basic AI model and 'fine-tuning' it with CSAM in order to produce realistic videos. In some cases these models had been fine-tuned with a handful of CSAM videos, the IWF said. The most realistic AI abuse videos seen this year were based on real-life victims, the watchdog said.

Derek Ray-Hill, the IWF's interim chief executive, said the growth in capability of AI models, their wide availability and the ability to adapt them for criminal purposes could lead to an explosion of AI-made CSAM online. 'There is an incredible risk of AI-generated CSAM leading to an absolute explosion that overwhelms the clear web,' he said, adding that a growth in such content could fuel criminal activity linked to child trafficking, child sexual abuse and modern slavery. The use of existing victims of sexual abuse in AI-generated images meant that paedophiles were significantly expanding the volume of CSAM online without having to rely on new victims, he added.

The UK government is cracking down on AI-generated CSAM by making it illegal to possess, create or distribute AI tools designed to create abuse content. People found to have breached the new law will face up to five years in jail. Ministers are also outlawing possession of manuals that teach potential offenders how to use AI tools to either make abusive imagery or to help them abuse children. Offenders could face a prison sentence of up to three years.

Announcing the changes in February, the home secretary, Yvette Cooper, said it was vital that 'we tackle child sexual abuse online as well as offline'. AI-generated CSAM is illegal under the Protection of Children Act 1978, which criminalises the taking, distribution and possession of an 'indecent photograph or pseudo photograph' of a child.

Singaporean youth admits luring man on dating app, demanding S$2,000 after posing as minor

Malay Mail

04-07-2025

SINGAPORE, July 4 — A teenager who pretended to be a minor on dating apps to entrap and extort alleged 'paedophiles' was convicted of extortion in Singapore today. The Straits Times reported that Shaaqir Noor'rifqy Mohammed Noorrizat, 19, admitted in court to conspiring with a younger accomplice to pose as underage individuals on platforms like Grindr and Telegram in order to lure men looking for sex, confront them, and demand money to keep silent.

In one incident, a 24-year-old man who believed he was meeting a 15-year-old boy for sex was tricked into turning up at the void deck of a Bukit Batok block of flats around noon on November 6, 2024. When he arrived, Shaaqir and his then-17-year-old accomplice, who cannot be named due to his age, recorded the encounter and confronted him.

'They told him that they would keep this a secret if he paid them, before stopping the recording,' said Assistant Public Prosecutor Chye Jer Yuan. 'The (man) agreed and both accused followed him to withdraw a sum of S$2,000 (RM6,630).' The pair deleted the recording in front of the victim after receiving the money. They were arrested around 11pm that same day.

The court heard that Shaaqir had masterminded the scheme. '(Shaaqir) had come up with the idea to impersonate underage females or males on online dating applications to lure paedophiles and extort money from them,' said Chye.

Shaaqir pleaded guilty to one count of extortion, with two other similar charges to be considered during sentencing. His case has been adjourned to August while probation and reformative training suitability reports are prepared. His accomplice's case is still pending. It was not revealed in court how the pair's offences came to light.
