
I'm fuming a 'predator' adult woman chatted up my child on our family holiday – even AFTER I told her his age
A MUM has been left fuming after a 33-year-old "predator" tried to chat up her 16-year-old son on holiday.
Carmel was left "raging" after catching a 33-year-old "predator" trying to chat up her 16-year-old son (Credit: tiktok.com/@carmeljade1)
The woman's attempt to get the teenager's Snapchat details came AFTER Carmel told her he was only 16 (Credit: tiktok.com/@carmeljade1)
The woman had initially started speaking to Carmel, asking if her son had had his teeth done, to which she replied: "No, he's 16 - they're natural".
The woman then replied: "Wow, they're lovely".
A few minutes later, Carmel went to the bar and left her son sitting with his grandad, at which point the "33-year-old woman turned around to my 16 year old son and said, 'Give me your Snapchat'."
Carmel added that the woman - who she said looked "older than 33" - had "slyly" waited until she'd walked off so that she could approach her son.
"You knew it were wrong," she continued.
"You knew exactly what you were doing, predator, didn't you?"
When Carmel discovered what had happened, she went up to the woman - who was on a FaceTime call at the time - and demanded to know why she had asked her son for his Snapchat.
And when confronted, the woman admitted: "I knew it were wrong, and I'm sorry".
"What were you gonna do? Have your wicked way and buy him a Big Mac?" Carmel raged.
She concluded her video by saying she hoped the woman saw her TikTok, because they're both still in the resort and Carmel's "still not over it".
"I'm still raging!" she added in the caption.
And in the comments section, people were quick to have their say - with the majority agreeing with Carmel that it was "totally inappropriate".
"Imagine this was a man to a 16 year old girl," one sighed.
"This is so wrong!"
"My 17 year old has said women my age - in their 40s - have tried chatting him up when he's out," another added.
"Absolutely gross and they are lucky I've never been there!!!!"
"Why are you even having to deal with this s**t?" a third shouted.
"Makes me sick - absolute freak!" someone else agreed.
"I'm 37 and my eldest is 18, and I'd still have gone mental!"
"Totally inappropriate," another said.
"I'm 32 and have Snapchat only to contact my 13 year old niece!
"Otherwise she'd never reply to my messages!"
"Having Snapchat in your 30's is a waving red flag," someone else insisted.
Others insisted that the woman even having Snapchat at her age was a "red flag" (Credit: Reuters)

Related Articles


Irish Examiner, 3 hours ago
Richard Hogan: 'You don't need to wake up and take an ice bath to increase your mental health'
Tricksters and scam artists have always been among us. The guy standing on a soapbox in the 'wild west', proclaiming that he had the cure for whatever ailment was common, always gathered a crowd. The dodgy herbalist with a unique collection of samples, proclaiming a panacea for the grim reaper's malintent, invariably found an eager bunch of customers.

The outrageous schemes and machinations of these wily characters have never failed to attract public attention. We are drawn to them, to the Iagos and Edmunds of Shakespearean drama. These characters are so confident in their ruse that we almost admire them. They are self-proclaimed shamans, alchemists, false princesses, sorceresses, and political imposters. In equal measure, they have intrigued and duped us.

Mary Carleton, in the 1660s, was a crafty woman who posed as a German princess to seduce men in order to relieve them of their wealth. She was a grifter and a bigamist. Alfredo Bowman, better known to the stars as Dr Sebi, became a global phenomenon when he declared that he could cure Aids, cancer, and lupus with his herbal remedies. Bowman had no medical training, despite the 'Dr' honorific. A civil suit taken against him by the New York assistant attorney general prohibited him from making therapeutic claims for his products. Michael Jackson and John Travolta were among his famous clients.

So, we have always been intrigued by the big personality selling the big lie. We love the quick fix. The idea? Someone, somehow, has the answer that medical science doesn't. Even though our judgement knows better, it's the chance, the just maybe, that there's truth in the claim that appeals to the gambler in us all, the part of us that loves the idea of the grift.

We have never been more exposed to these charlatans than in modern times. Social media has, quite literally, brought them into our bedrooms, and right into the view of our children. A recent investigation by the Guardian newspaper found that half of all the top trending videos offering mental health advice on TikTok contain misinformation. Shock horror.

We live in the age of instantaneous information and abbreviated communication. People want the quick fix, and they want it now! Social media influencers know this, and so they sell ridiculous ideas of how to achieve great mental strength and success in life.

I talk in schools a lot, and I often show the students different videos from different self-proclaimed mental health advocates. One, in particular, always gets a good laugh. The guy is talking about how he is successful, and he explains that he breaks his day down into three six-hour 'days'. So he has three days where the rest of us mere mortals are foolishly living one. He goes on to state: 'I'm not crazy, you're crazy thinking it takes 24 hours, just like some dude in a cave did 300 years ago.'

This is the point where I pause the video and ask the students, 'Anything problematic about this statement?' A guffaw of laughter generally ensues, because they get the point. This guy thinks we were all living in caves 300 years ago, which means Shakespeare wrote Hamlet in a cave 425 years ago. Not too shabby. I always end by saying that perhaps we should not be taking advice from someone who doesn't have any concept of humans' journey on this planet. I also say that if you did live the way this guy suggests, not only would you not be successful, you'd probably end up in a psychiatric unit with a mental breakdown. We need rest.

We need to replenish the wellspring of our imagination and cognition. These mental health advocates, putting themselves forward as healers and experts, have little to no understanding of psychology or health. I hear them talking about resilience like it can be bought, or easily fixed in a child. True resilience comes from a child's sense of their community, their place in it, and their connection to others. You can't monetise that advice, and it takes time.

In the noughties, the carbohydrate got all the bad press; now, idleness is being presented as the antithesis of success. Rubbish. We must rest. But, once again, you can't monetise that advice, because anyone can rest and anyone can advocate it. Drinking turmeric upside down while chanting some Buddhist phrase, though, now that I can get behind.

My advice is always very straightforward and simple. What do you do that helps you feel better? A jog, meeting friends, a sea swim? Do more of that. What do you do when you are feeling low that doesn't help? Scrolling your phone, living an inert lifestyle, isolating? Do less of that. Simple.

You don't need to wake up and take an ice bath while reading some obscure Roman poet. Just do more of what helps to increase your mental health, and avoid charlatans on your feed who tell you they have quick answers. There are no quick answers. Physical and mental health require intentionality and some nice practices. It is an ongoing process. We all have inexplicable bad days and we all have better days. Figure out what makes the better days, and do more of that.


RTÉ News, 7 hours ago
All you need to know about voice spoofing and audio deepfakes
Analysis: biometric fraud like voice spoofing and audio deepfakes is part of broader social engineering attacks by scammers and criminals.

Voice spoofing involves using digital technology such as artificial intelligence to mimic someone's voice so accurately that it can deceive both humans and automated speaker verification systems. With recent rapid advancements in AI, creating these fake voices, often called "audio deepfakes", has become alarmingly easy. Today, with just a few seconds of recorded speech from platforms like podcasts, YouTube or TikTok, machine learning models can generate highly realistic synthetic voices that mimic real individuals. It is a type of biometric fraud and often part of broader social engineering attacks.

How does voice spoofing work?

AI-powered tools analyse the unique patterns of a person's speech, such as tone, pitch, and rhythm, and use this data to produce synthetic speech that closely resembles the original voice. The technology has become so advanced that distinguishing between a real voice and a fake one is increasingly challenging.

From RTÉ Radio 1's The Business, BBC File on 4 reporter Paul Connolly on how criminals are now using AI-generated voices to scam people out of their money.

The process typically begins with an attacker collecting voice clips from online sources like social media or videos. Specialised AI models, like VGGish or YAMNet, analyse these voice samples to extract important acoustic patterns, turning them into digital fingerprints called embeddings. These embeddings are then fed into voice generation systems such as Tacotron, WaveNet, or FastSpeech, which produce new speech mimicking the original voice. The resulting fake voice can be used in phone calls or apps to impersonate someone in real time.
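To make the embedding step concrete, here is a minimal sketch of how a published audio model turns a clip into the "digital fingerprint" frames described above. It assumes a Python environment with tensorflow_hub and librosa installed, and 'voice_clip.wav' is a hypothetical filename; YAMNet (Google's publicly released model on TensorFlow Hub) is used here purely to illustrate embedding extraction, not voice cloning.

```python
# A minimal sketch, assuming tensorflow, tensorflow_hub and librosa are
# installed; 'voice_clip.wav' is a hypothetical local file.
import tensorflow_hub as hub
import librosa

# Load Google's publicly released YAMNet model from TensorFlow Hub.
yamnet = hub.load('https://tfhub.dev/google/yamnet/1')

# YAMNet expects mono audio sampled at 16 kHz.
waveform, _ = librosa.load('voice_clip.wav', sr=16000, mono=True)

# The model returns per-frame class scores, 1024-dimensional embeddings
# (one vector per short window of audio) and a log-mel spectrogram.
scores, embeddings, spectrogram = yamnet(waveform)

print(embeddings.shape)  # (num_frames, 1024): the acoustic "fingerprint"
```

A sequence of such embedding vectors is what downstream systems, whether voice generators or spoofing detectors, actually consume.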
How is this going to impact us in the real world?

Financial scams are a growing problem, and we've all had a (fairly ridiculous) phone call where a robot voice purporting to be from a company tries to get information or money, but more sophisticated versions have worked. In the UK, fraudsters used AI-generated voices to impersonate financial advisors, leading to a multi-million euro scam targeting crypto investors. In the US, the FBI has warned about scammers using AI to mimic senior US officials' voices, deceiving individuals into sharing confidential information. There have even been cases where scammers cloned the voices of loved ones, calling individuals and pretending to be in distress to extract money. These incidents highlight the disturbing reality that even the sound of someone's voice can no longer be trusted.

From CNN: can Donie O'Sullivan's parents tell the difference between RealDonie's voice and AI-Donie's voice?

Celebrities, politicians, and influencers are particularly at risk because their voices are widely available online. The more audio content (voice data) available publicly, the easier it is for AI tools to replicate a voice. This is a basic principle of AI: more data means better performance. However, it's not just public figures who are at risk. If you've ever posted a video or audio clip on platforms like Facebook, Instagram, or YouTube, your voice could potentially be cloned.

What are the difficulties in detecting voice spoofing?

Detecting synthetic voices is a complex task. Traditional security and speaker verification systems often rely on voice recognition for authentication, but AI-generated voices have become sophisticated enough to deceive them. Some of the core technical challenges include:

Spectro-temporal similarity: fake voices closely mimic both the pitch and timing patterns of natural speech.

Data imbalance: real-world datasets typically contain fewer examples of spoofed voices, making it harder for AI to recognise these cases.

Generalisation: many detection models struggle when faced with spoofing methods they weren't specifically trained on.

How to protect yourself

While the threat is real, there are steps you can take to safeguard against voice spoofing:

Be sceptical: if you receive an unexpected call requesting sensitive information or money, verify the caller's identity through another channel.

Use safe words: establish a unique code word with family and close contacts that can be used to confirm identities during emergencies.

Limit voice sharing: be cautious about sharing voice recordings online, especially on public platforms.

Stay informed: keep abreast of the latest scams and educate yourself on how to recognise potential threats.

Voice spoofing poses a growing threat as AI continues to advance, making it easier than ever to mimic someone's voice convincingly. Whether you're a public figure or an everyday social media user, the potential to become a victim of voice cloning exists.

From RTÉ Radio 1's Ray D'Arcy Show, AI expert Henry Ajder talks about how deepfakes are damaging online trust and what some platforms are doing to rebuild it.

How our research work is helping

Our recent research proposes an innovative and effective approach for detecting voice spoofing using a hybrid deep learning (DL) architecture called VGGish-LSTM. We used VGGish, a pre-trained model developed by Google, to extract robust acoustic embeddings from audio data. These embeddings capture detailed features that are often not noticeable to human listeners but are critical in distinguishing synthetic voices. Once extracted, these acoustic features are analysed by a Long Short-Term Memory (LSTM) network, a type of artificial neural network designed specifically to detect long-term patterns and dependencies in sequential data. These networks excel at identifying variations in speech rhythm, tone, and pitch that could indicate synthetic or manipulated speech.

Evaluated on the widely used ASVspoof 2019 dataset, our model achieved an accuracy of over 90%. This performance demonstrates the model's ability to detect spoofing effectively, and it could be used in real-world scenarios such as banking authentication, call centre security, or smart home voice verification systems.

With ongoing research into detection technologies, such as the VGGish-LSTM model described here, we can continue developing robust defences against voice spoofing scams. For users, the advice is to stay vigilant, limit how much voice data you share online, and adopt simple safety practices.
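For readers curious what such a detector looks like in code, here is a minimal sketch, not the authors' published implementation. It assumes PyTorch and pre-computed VGGish embeddings (VGGish emits 128-dimensional vectors per audio frame); the hidden size, layer count and final-state pooling are illustrative choices, not details from the research.

```python
# A minimal sketch, not the authors' implementation. Assumes PyTorch and
# pre-computed VGGish embeddings: one 128-dim vector per audio frame, so
# a clip becomes a (time_frames, 128) sequence.
import torch
import torch.nn as nn

class VGGishLSTM(nn.Module):
    def __init__(self, embed_dim=128, hidden_dim=64, num_layers=2):
        super().__init__()
        # The LSTM reads the embedding sequence frame by frame, tracking
        # long-range rhythm and pitch patterns across the clip.
        self.lstm = nn.LSTM(embed_dim, hidden_dim,
                            num_layers=num_layers, batch_first=True)
        # Two-way decision: bona fide speech vs. spoofed speech.
        self.classifier = nn.Linear(hidden_dim, 2)

    def forward(self, x):
        # x: (batch, time_frames, 128) VGGish embedding sequences
        out, _ = self.lstm(x)
        return self.classifier(out[:, -1, :])  # classify from final state

# Usage on a dummy batch of 4 ten-frame clips:
model = VGGishLSTM()
logits = model(torch.randn(4, 10, 128))
print(logits.shape)  # torch.Size([4, 2])
```

Feeding the classifier a sequence of embeddings rather than raw audio is the key design choice: the pre-trained front end does the heavy acoustic lifting, so the LSTM only has to learn the temporal tell-tales of synthetic speech.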


The Irish Sun, a day ago
Rappers Kneecap will play Wembley Arena in biggest UK gig despite band member facing terror charge
RAPPERS Kneecap will play Wembley Arena in their biggest UK show, despite one of the band facing a terror charge.

The Irish group hope to sell out the 12,500-capacity venue on September 18.

Band member Liam Óg Ó hAnnaidh is charged over the alleged display of the flag of Hezbollah, classed as a terror group by the UK, at a London gig. He is due in court on June 18.

The band, who have sparked controversy with their anti-Israel stance, also released a poster featuring a Buckingham Palace guard with his uniform painted green and an Irish tricolour balaclava under his bearskin hat.

Counter-terror cops previously investigated Kneecap after videos emerged allegedly showing the band telling fans: '...'. In another clip, they appeared to shout '...'.

The group apologised to murdered MPs' families but insisted footage of the incident had been 'exploited and weaponised'. They say they have never supported Hamas or Hezbollah.

The Beeb said: 'Decisions will be made in the lead up.'