
Women reveal horror of finding deepfake porn images of themselves on sick site that encouraged men to rape and degrade underage girls as young as six
Dozens of students and graduates of General Douglas MacArthur High School in Levittown, New York, had gone to the police after discovering that innocent pictures of them on social media were being digitally doctored and posted alongside horrific messages encouraging users to 'rape' and otherwise degrade them.
They were all the more horrified to discover that the perpetrator was a fellow pupil who had grown up alongside the young women and was, at times, even a close friend.
More than 1,300 posts were shared across some 14 different usernames on a sick site that encouraged men to share pictures of themselves masturbating on printed-out images.
In 2023, Patrick Carey, 24, was sentenced to six months in jail, with 10 years probation after admitting to offenses including promotion of a sexual performance by a child, aggravated harassment as a hate crime, stalking and endangering the welfare of a child, the New York Post reported.
And some of the 14 women whose images he was said to have shared have now spoken, years on, about the trauma of dealing with the harassment on a new Bloomberg podcast titled Levittown, hosted by reporters Olivia Carville and Margi Murphy.
One of the victims, who was identified only as 'Kayla', 24, spoke about her disgust at finding out, during the 2020 pandemic, that the photos of her were out there.
Her father, a policeman, was doing a routine Google search of his children's names when he was shocked to see an apparently nude photo of his daughter come up.
When he showed Kayla, she was baffled - as the image, in which she was originally dressed, had been manipulated.
A further search on the site saw more altered pictures, taken from her online profiles, with worrying messages alongside, expressing what depraved users wanted to do to her.
'It was "drink her piss", "milk her", "have her drink my piss",' she shared.
'We would see what they posted, like their nude pictures and them … j***ing off and c**ing on our pictures, even like pictures of our pictures with their d*** there and the ejaculation there. And then there was like some like, writings of like, "rape her".'
Soon enough, it became clear that the problem was more widespread, as Kayla was approached by a fellow student at the school known in the podcast as 'Cecilia'.
She, too, was among several girls at General Douglas MacArthur whose images were featured on the site.
Much like had been the case for Kayla, images taken from their online profiles had been manipulated to seem sexually explicit - including pictures of the women when they were as young as six years old.
In summer 2021, another Levittown local - called 'Kat' in the podcast - recounted how one awful experience led her to uncover Patrick, whose father was a policeman.
She came across a deepfake of herself - originally a sweet snap where she was smiling - made to seem like 'her hands tied behind her back, covered in blood, with a plastic bag over her head'.
In a caption, the post - which listed her real name - claimed that 'her body had been found near an abandoned construction site' and 'that she'd been raped'.
'I'd had enough. It had to stop. I was like, OK, this is like more serious and I need to know who it is.'
It appeared that at this stage rumors were circulating that Patrick was behind the fakes, but Kat - who had known him since she was five - was 'almost defending him in her head'.
'He was smart, so, like, he knew what he was talking about, you know? But anybody who said anything to him, it's just like, OK, I don't care. I'm smarter than anybody…
'That's when I went on the website. I had to look at things that I did not want to look at.
'I spent hours going through it. He would post pictures of himself - not his face, like, his body and his parts.'
In one image posted by the user, he looked to be in a child's bedroom - and Kat decided to inspect the background.
'So I saw stuffed animals and I was like, all right, let me see,' she continued, deciding to compare the furniture with that featured on his little sister's TikTok profile - and horrifically, it matched up.
Cecilia described Patrick as 'very talkative' and 'easy to talk to' when she was in high school.
'When I was a sophomore, I had a lot of classes with him. I had almost all of my day with him.
'And we were friends; we would talk. Until a month or two into the school year, he blocked me on every social media that we had together and stopped talking to me in class.
'And he reached out to me and said: I don't hate you, by the way. He said, I would sooner restrict you from all formats of contacting me. I'm extremely attracted to you, so before that becomes an inevitable problem or upset for me, I might as well stop myself from even trying. Does that explanation suffice for you?
'And I said, I guess it does. And that was that.'
She admitted her soul felt 'broken in half' because of the betrayal of trust from someone she considered a friend.
'You watched me kind of grow up, you know, you spent most of my teenage years with me,' she said.
'So you're telling me that all of that time that you spent with me, watching me kind of become the person that I am, was enjoyment to you because you were watching me turn into rape meat.'
Eventually, the evidence found its way to Nassau County Police Department.
Patrick, however, only got six months in jail - with 10 years probation.
In the end, it was not his campaign of terror that allowed the prosecution to build a case against him - but a real nude photo of one of the victims - 'a real picture that had been taken by an ex-boyfriend, who had then shared it among classmates, including Patrick'.
It was considered not only revenge porn but - because the victim was just 14 - child sexual abuse material.
He pleaded guilty in 2022.
Speaking to the podcast, Detective Timothy Ingram said that while the victims did incredible work making their case, bringing charges was more complicated.
'These girls, they did their own investigating and they did excellent work,' he said. 'They would make great detectives.
'A lot of my co-workers hadn't seen anything like this, where it was to this level of, you know, him posting everywhere for so long with so many victims.'
More than 40 women in Levittown were reportedly affected - but the site's reach was far broader.
The podcast also spoke to a victim of the same website in New Zealand - where, in a horrifyingly similar twist, the perpetrator was known to the woman.
'I was under the age of 18, so I could not have a Tinder. But some guys that I knew, who were maybe one or two years older than me, sent me screenshots of a Tinder account with all of my photos on it and were asking if it was me,' a woman known as 'Lucy' shared.
'When this person matched with a boy on Tinder, they would send them a Snapchat account - and through that Snapchat account would send like very explicit, nude and sexual content but without a face.'
These were not altered images of Lucy, but rather, were made to appear as if she was exchanging faceless nudes.
'In 2020, I got a very unexpected email from the New Zealand police asking to have a phone call with me,' she recalled.
'And I was so suspicious at this point of online harassment and the ability to fake anything online that my initial response was that this was my harasser pretending to be a police officer trying to get in touch with me.'
However, she soon realized a 'wider case' was happening, 'revolving around another woman who was being harassed in a similar way'.
'They had reverse engineered some evidence that led them to me.'
Investigators Will Wallace and Doug Nuku found some 12 other victims in the country.
While the perpetrator was skilled at hiding, he was eventually identified after investigators followed the suspect across various sites for months - including Facebook Marketplace, where he posted an electric guitar for sale, laid out on a floral comforter.
The innocuous item matched a photo he had shared on another pornographic site, and soon enough they tracked down the assailant - Finn Cottam.
'I remember the distinct sensation of time slowing down and then they just said a name. And I don't know what I expected, but they said a name that was very familiar to me. It was the name of a boy that I went to school with when I was 12,' Lucy said.
'I felt such a wave of relief that it was a person. Because I guess when someone lives in anonymity, you think they're a monster. And when you hear a name, they're suddenly just a person.'
In 2024, Cottam was sentenced to seven years in jail. As reported by the New Zealand Herald, when he was tracked down, Cottam was found with 'more than 8000 objectionable images and videos, including child exploitation material, on multiple devices he owned'.
He was imprisoned for what the court described as a 'sustained campaign of sexual terror'.

Related Articles

Leader Live
AI scams awareness campaign launched in North Wales
The campaign, led by the charity Get Safe Online in collaboration with North Wales Police and Crime Commissioner (PCC) Andy Dunbobbin, aims to help residents use AI safely and confidently this summer. Get Safe Online is a service commissioned by the PCC's office and the police force to provide digital safety information to the public.

Mr Dunbobbin, PCC for North Wales, said: "As Police and Crime Commissioner, fighting cybercrime is one of my key priorities and AI is one of the biggest digital and technological innovations of recent years.

"It has the power to transform our lives, often for the better.

"But with every innovation, there is always a criminal who will try and use it for their own ends, whether that be through fraud, theft, or deception.

"As well as using these new technologies, the important thing is for people to educate themselves about the dangers that might be lurking in the shadows.

"As the old saying goes, forewarned is forearmed.

"That's why I encourage people to follow this new advice from Get Safe Online and stay safe while using the internet and information technology."

AI technology now underpins many everyday tools, from virtual assistants to online shopping and entertainment recommendations. While these systems offer convenience, they also present fresh opportunities for cybercriminals. The campaign warns that AI can be used to create convincing scams and other forms of digital deception.

To help the public stay safe online, Get Safe Online is sharing practical advice for identifying and avoiding AI-enabled scams. The organisation recommends:

Checking the context: Be wary of unsolicited emails, messages, or phone calls, even if they appear professional. If the message seems urgent or too good to be true, it could be a scam.

Inspecting the details: AI-generated content may be grammatically correct but could include subtle errors, such as odd email addresses, incorrect logos, or unusual phrasing. In images and videos, look for signs that something is not quite right.

Verifying identity independently: Do not trust a message alone. Use a known, trusted method to contact the person or organisation and confirm their identity.

Get Safe Online also offers more general guidance for using AI tools responsibly. The charity recommends using AI as an aid rather than a replacement for critical thinking. Users should review and refine content generated by AI, and confirm information using reliable sources. Personal and financial information should not be entered into AI tools, as there is a risk that it could be exposed to others through generative AI or search platforms. Staying informed about AI developments and new scam techniques is also encouraged.

Special Constable Dwain Barnes from North Wales Police's Cybercrime Team said: "Although generative AI has the potential to improve many aspects of society, it can also be used by criminals to author convincing phishing emails, create disinformation for social media posts or generate deepfake images and videos that look realistic, making them very difficult to spot.

"AI can also clone a person's voice from a few seconds of audio.

"Scammers can therefore use AI to impersonate trusted individuals and trick people into transferring money or revealing sensitive information, for example.

"It is therefore more important than ever to double check information to ensure that it is from a trusted source and if you receive unexpected requests or messages which might seem urgent or emotional, take your time and verify that they are genuine by contacting the sender directly using a verified means of contact, not by replying to the message or calling the number back."

Mr Barnes also offered additional security tips for the public. He said: "To help you stay safe, use strong, long passwords using three random words, turn on two-step verification for all your accounts and don't share those codes with anyone else.

"Be mindful about what you are posting online; scammers can download your content and use it to create deepfakes, so it's advisable to have strong privacy settings on your social media accounts.

"Also consider agreeing on a secret word or phrase with your family or team members; you can then use this to confirm that it really is them if something doesn't feel right.

"Let's keep spreading the word on how scammers are using AI; it's important for more people to understand how AI deepfakes work, which will make it harder for scammers to succeed."

Further information and digital safety advice can be found at


Times
Precrime profiling is no longer a fantasy
This week the UK government introduced an 'artificial intelligence violence predictor' into the prison system, a tool to analyse factors such as criminal record, age and behaviour, to calculate which inmates are most likely to resort to violence so officers can intervene before they do.

With attacks on prison officers increasing, AI profiling of inmates is the latest example of so-called precrime technology, based on the dubious theory that science can foresee individual criminal behaviour and prevent it by disrupting, punishing or restricting potential law-breakers. The idea was popularised in the 1956 Philip K Dick novel The Minority Report, adapted by Steven Spielberg into a 2002 movie starring Tom Cruise, in which teams of psychic 'precogs' exercise foreknowledge of criminal activity, including premeditated murder, to identify and eliminate persons who will commit crimes in the future.

In the film, set in 2054, the chief of the Precrime agency explains the advantages of pre-emptive justice: 'In our society we have no major crimes … but we do have a detention camp full of would-be criminals.' Thirty years ahead of schedule, instead of clairvoyance as a crime prevention tool, we have AI.

The theory of precrime dates to the early 19th century and the Italian eugenicist Cesare Lombroso, who is purported to have invented the term 'criminology'. Lombroso believed that criminals were born lawless, inheriting atavistically villainous characteristics and physiognomies. Criminal anthropometry, the precise measurement of faces and bodies, he argued, could be used to identify crooks and stop them from committing crimes. This 'positivist' school of criminology claimed to recognise criminals not only by biological characteristics but also through psychological and sociological forms of behaviour. 'Born criminals', nature's psychopaths and dangerous habitual offenders, could thus be eliminated using capital punishment, indefinite confinement or castration.

The sinister notion that a system might detect the mere intention to offend is echoed in the 'thought crime' of George Orwell's 1984. Richard Nixon's psychiatrist, Arnold Hutschnecker, advised the president to run mass tests for 'pre-delinquency' and confine those juveniles to 'camps'. A refugee from Nazi Germany, Hutschnecker insisted these would not be concentration camps but holiday camps in a 'pastoral setting'.

In the 1970s, the University of California, Los Angeles attempted to set up a Centre for the Long-Term Study of Life-Threatening Behaviour, using scientific data to predict 'dangerousness'. It planned to compile stocks of behavioural data to understand crimes that had not yet occurred but were 'in formation'. The project foundered when it was suggested the centre intended to use 'psychosurgery' to modify behaviour.

But precrime is not some sci-fi fantasy or a wacko theory from the fringes of eugenics; it is already here. 'Predictive policing' — using data to forecast future criminal activity — is expanding rapidly. The UK Ministry of Justice is said to be developing a 'homicide prediction project' using police and government data to profile individuals with the aim of forecasting who is more likely to commit a murder. The project, revealed in April by the investigative group Statewatch, will 'review offender characteristics that increase the risk' and 'explore alternative and innovative data science techniques to risk assessment of homicide'.

In the US, the software system Compas (Correctional Offender Management Profiling for Alternative Sanctions) is used by police and judges to forecast the risk of recidivism among more than one million offenders. The software predicts the likelihood that a convicted criminal will reoffend within two years based on data that include 137 of each individual's distinguishing features as well as criminal or court records. This is where actuarial science (mathematical and statistical methods used to assess risk in insurance, pensions and medicine) meets crimefighting and sentencing guidelines: a technological tool to predict the risk of reoffending by rating factors such as type of crime, age, educational background and ethnicity of the offender.

In Chicago, an algorithm has been created to predict potential involvement with violent crime and draw up a strategic subject list — or 'heat list' — of those the algorithm calculates to be the city's most dangerous inhabitants. Precrime is most obvious and advanced in the context of counterterrorism to identify threatening individuals, groups or areas, but inevitably invites conflict between the ideal of impartial criminal justice and the needs of national security.

In the traditional criminal justice system, the law attempts to capture and punish those responsible after crimes have been committed. AI could invert that equation by meting out punishment or imposing surveillance where no crime has been committed — yet. As the chief of the Precrime agency in Minority Report observes: 'We're taking in individuals who have broken no law.'

Critics fear that precrime techniques could remove the presumption of innocence, the cornerstone of the justice system, and increase guilt by association, since an individual's known contacts would influence any risk assessment. It also threatens to dehumanise individuals by reducing people to the sum of their accumulated data. Latter-day predictive policing already deploys data analysis and algorithms to identify higher risks of criminality, triggering increased police presence in certain areas and communities. Critics argue that this leads to increased racial profiling, with certain populations disproportionately flagged as high risk. If the data pool being 'learnt' by AI is already racially biased, then its predictions will be similarly skewed.

Until the digital age, crimefighting was based on solving crimes or catching criminals in the act. In the age of AI, the sleuth will rely on machine learning to uncover clues to crimes that have yet to be perpetrated. 'It is a capital mistake to theorise before one has data,' said Sherlock Holmes. In the brave new world of precrime, the data will take over from the detectives.


Spectator
Can AI prevent prison violence?
The government desperately needs to save the justice system, and it believes that technology might be part of the solution. The Ministry of Justice has announced that it will be using AI to 'stop prison violence before it happens'.

The need is urgent. There were over 30,000 assaults in prisons during the 12 months to the end of March 2025, a 9 per cent increase on the previous year. This is now Labour's problem. As Andrew Neilson, Director of Campaigns at the Howard League, said yesterday: 'these statistics cover most of the government's first year in power. While action is being taken to reduce pressure on the prison population and stabilise regimes, far more must be done, and urgently, to save lives and ensure prisons work to cut crime, rather than create it.'

So how will AI help? Describing a system more like that seen in Minority Report than our crumbling Victorian jails, the government says AI will 'identify dangerous prisoners and bring them under tight supervision'. AI will analyse data on individual prisoners, including their offending history and behaviour in custody, in order to allow staff to prevent violence before it happens. Technology has already been deployed to rapidly scan prisoners' seized mobile phones to produce intelligence on crime within jails, including the drugs trade. This is very timely, as this week the government has announced that drone incidents over our prisons are up by 43 per cent.

That's the idea anyway. The reality, of course, is that this will all rely on the data provided by prison staff, which is often of very low quality. This week I attended the ongoing inquest into the death of Rajwinder Singh, a man who died at Wandsworth in 2023. During testimony I heard on Wednesday, it became apparent that the contact logs were not reflective of the visits to Rajwinder's cell shown by the prison's CCTV. This is an extreme example, but anyone who has spent much time in prisons knows that they are often chaotic, badly organised environments which rely on a huge amount of paperwork. If AI is fed garbage, it will be worse than useless.

The government has great hopes for technology in prisons, something which I know is driven by Lord Timpson's personal enthusiasm for it. There are already some excellent examples of large language models (LLMs) being deployed across the justice system. Probation have been piloting three different systems which take audio recordings of meetings between offenders and probation officers and produce transcripts, saving many hours of work.

Even within the probation profession there are doubts about this. Tania Bassett, National Official of NAPO, the probation union, told me that there were concerns about whether LLMs could cope with some regional accents (with Geordie identified as being particularly challenging), and that they are 'approaching it with caution because of the MoJ's history of being bad with technology, and we are concerned that this doesn't become an excuse to replace people and relationships.'

All this investment in technology will come at a high price, something NAPO are also concerned about, particularly as they are currently balloting for industrial action. Bassett said: 'they're spending all this money on technology but we're in a strike ballot for pay – we're concerned that this £700 million for probation will end up being squandered on technology which doesn't solve the underlying problems.'

Broadly, this kind of investment in technology is a good thing. The justice system in general, and our prison system in particular, are incredibly backward, with a huge amount of staff time spent manually completing forms and documents. If technology can free staff up to spend time working with inmates, engaging in purposeful activity and making prison actually work, then it could be a huge benefit.

There are likely to be challenges, though. In particular, if there are perceived racial inequalities in who the systems identify as being likely to commit violence in jail, it is possible that legal challenges may be forthcoming.

In the end, a safer prison system will benefit staff, inmates and the public. Jails which are awash with violence can do almost nothing to help people reform. Most prisoners do not want violence on their 'landings'. It might not quite be Minority Report, but if this halts the rising tide of violence in our prisons it will absolutely be worth it.