'Drunk' easyJet pilot 'seen stripping off' at hotel bar
The captain was scheduled to operate a flight from Cape Verde, the West African island nation, back to the United Kingdom the day after the incident.
According to The Sun, the pilot arrived at the five-star Melia Dunas Beach Resort and Spa on 4 August.
He reportedly engaged in a drinking session at a bar that extended into the early hours of 5 August.
At 2.30am he was 'seen stripping off' before walking through the reception area and entering the gym and spa, the newspaper claimed.
The Sun did not name the pilot in its reporting.
It claimed that 'stunned' easyJet passengers recognised the 'paralytic' captain and raised the alarm.
He had been due to operate a flight back to Gatwick approximately 36 hours later, on the afternoon of 6 August, but easyJet confirmed a replacement pilot was found.
An easyJet spokesman said: 'As soon as we were made aware, the pilot was immediately stood down from duty, in line with our procedures, pending an investigation.
'The safety of our passengers and crew is easyJet's highest priority.'
EasyJet's flights from Gatwick to Cape Verde are its longest from England, at 2,332 nautical miles.
The flights are scheduled to last about six hours.
Flight crew drinking incidents
In 2018, armed police arrested a British Airways pilot just before takeoff at Gatwick after the cabin crew raised fears he may have been drunk.
Attendants on the flight, which was bound for Mauritius, called 999 after allegedly smelling alcohol on the breath of one of the officers.
In 2019, a Delta pilot was arrested on suspicion of being drunk while onboard a full plane ready to take off.
He was arrested at Minneapolis-St. Paul International Airport when officers smelt alcohol on his breath.
And in 2023, a British Airways flight attendant was arrested 'on suspicion of performing an aviation function when over the prescribed limit for alcohol'.

Related Articles


The Sun, 8 hours ago
Mystery as bodies of FOUR men are pulled from Seine in Paris after train passenger spotted floating corpse
FOUR people have been found dead in the Seine River in Paris after a horrified train passenger spotted a floating corpse. Police rushed to the harrowing scene before recovering the bodies from the river in Choisy-le-Roi, Val-de-Marne, in the French capital on Wednesday. Cops were joined by river brigade authorities who helped to bring the bodies out of the water.


The Sun, 9 hours ago
My son, 16, killed himself over terrifyingly realistic deepfake… as sick 'nudifying' apps sweep YOUR child's classroom
LOOKING out of the window of her school bus, a 13-year-old girl is distracted by the laughs of the two boys sitting in front of her as they flick through pictures on a phone. She peers over the backs of the lads' seats - only to recoil with horror at the source of their amusement: a photo of her walking into their school canteen, completely naked.

The image is, quite literally, the stuff of nightmares. Yet while it looks no less real than any other photograph, it is actually a deepfake image created by a 'nudifying' app, which has stripped the girl's school uniform from her body.

A Sun investigation can reveal this disturbingly realistic artificial intelligence (AI)-based technology - which is used by millions and shockingly accessible on major social media sites - is sweeping British schools, putting children at risk of bullying, blackmail, and even suicide. While innocent youngsters are being 'nudified', or having their faces realistically planted onto naked bodies, teachers are being digitally 'stripped' by their students as crude 'banter'.

Outside of the school playground, there are further disturbing problems, with data from the UK charity Internet Watch Foundation (IWF) revealing that reports of AI-generated child sexual abuse imagery have quadrupled in a year. In the encrypted depths of the internet, perverts are sharing sick AI 'paedophile manuals', detailing how to use artificial intelligence tools to create child sexual abuse material (CSAM). Some sickos are even creating deepfake nudes of schoolchildren to coerce them into forking over large sums of money - and for some victims, the consequences can be deadly.

This February, 16-year-old schoolboy Elijah Heacock took his own life in the US after being blackmailed for more than £2,000 over an AI-generated naked picture of himself. 'I remember seeing the picture, and I thought, "What is that? That's weird. That's like a picture of my child, but that's not my child",' grieving mum Shannon tells The Sun.

The photo of Elijah - a music lover who, at 14, had started volunteering to feed the homeless in Kentucky with his twin sister, Palin - was discovered on his phone after his death. It was terrifyingly realistic - yet Shannon immediately saw signs it was fake. 'The District Attorney was like, "No, it's a real photo",' continues the mum, who had never before heard of 'sextortion' - where criminals blackmail a victim over sexual material. 'And we were like, "No, it's not." The photo almost looked like somebody sitting in a cloud.' She adds: 'He had abs. Eli did not have abs - bless his heart, he thought he did.'

Millions using sick apps

As parents like Shannon have learned at a tragic cost, deepfakes are often so realistic-looking that experts can't tell they are AI-generated. On popular messaging apps, kids who wouldn't dream of law-breaking on Britain's streets are sharing fake nudes of their teen crushes - unaware it's illegal. British law prohibits the creation, or sharing, of indecent images of children, even if they are artificially made. Teens who do so for a 'laugh' face up to 10 years behind bars.

In spite of this, 'nudifying' apps and websites that are being accessed by millions of people every month are being advertised on social media, listed by Google, and discussed avidly on the dark web. Our investigation found one website encouraging users to 'undress' celebrities - clearly, without their consent - with its 'gem'-based prices starting at £14 per bundle. 'Our tool can undress anyone in seconds,' boasts the site.
Another, offering one free 'picture undress' per day, tells users they can strip a photo of 'a desired person'. And a third brags: 'Let our advanced AI bring your fantasies to life.'

Reviews on such sites paint - if possible - an even more horrific picture. 'I can create secret images of the woman I like,' wrote one user, in his 40s, of another site. 'The sense of guilt is irresistible.'

Schools in crisis

For schools, the rise of nudifying apps has provided a near-existential challenge. Experts warn senior staff are desperate to solve issues internally to avoid reputational damage, while teachers face career-threatening problems when fake photos of them are shared.

'The challenges of technology that nudifies photos or creates deepfake nude images is a problem most secondary schools and colleges around the country are now grappling with,' says top UK criminal defence lawyer Marcus Johnstone, who specialises in sex crime. 'I'll bet it's a live issue in every classroom.' Marcus, managing director of Cheshire-based PCD Solicitors, adds that he is seeing 'an ever-increasing number of children being accused of crimes because of this "nudify" technology'.

'The schools don't want this information coming out,' he claims. 'The last thing they want is to have their school in the local press having a problem with lads at the school "nudifying" girls and it's going around the school, around the internet. Parents of prospective children going there would go crackers. They'd say, "Well, I'm not sending my kid there".'

Safeguarding expert Kate Flounders tells us: 'The impact [of deepfake nudes] is enormous. For staff, it can be career-ending, even if the image is found to have been AI-generated.'

Calls for crackdown

In April, Dame Rachel de Souza, the Children's Commissioner for England, called on the UK Government to ban apps that use AI to produce sexually explicit images of children. Her comments came as the IWF's analysts, who have special permission to hunt down and remove repulsive CSAM online, confirmed 245 reports in 2024 - a staggering 380 per cent increase on 2023. Schoolchildren now fear that 'anyone' could 'use a smartphone as a way of manipulating them by creating a naked image using these bespoke apps', said Dame Rachel.

British teens who have fallen victim to such technology have been calling the NSPCC's Childline counselling service, with one girl revealing she has been left with severe anxiety. The 14-year-old said boys at her school had created 'fake pornography' of her and other girls. They'd then sent the explicit content to 'loads' of group chats. 'They were excluded for a bit, and we had a big assembly about why it was wrong, but after that the school told us to forget what happened,' the traumatised girl told Childline. 'I can't forget, though. People think that they saw me naked, and I have to see these boys every day.'

Nearly six months on from Elijah's death, Shannon, a cheerleading coach, is struggling to deal with the loss of her beloved son, who is believed to have been targeted in a 'sextortion' scam by a man in Nigeria. 'We're not doing very well right now,' she admits, adding jokingly: 'Elijah was an amazing brother who drove everyone insane.' She continues: 'He was our tornado. Our house is so quiet and it's sad.'

Shannon is calling for parents to chat to their kids about AI technology, with many mums and dads clueless about the explicit apps infiltrating their children's classrooms. 'Talk to your kids, and read about it,' she urges.
'Our children are in a war that we're not invited to.'

Sinister creeps

Kate, CEO of the Safeguarding Association, has encountered UK-based cases where the images of schoolchildren and teachers were altered using 'freely available' apps. 'The issue is, once the image is out there, it is nigh on impossible to get it off,' she warns. 'I am aware of one case where a female student was subject to this, managed to have the image removed, only for it to resurface several years later when she was in college. The trauma was enormous for her.'

Of course, some children, and adults, create such content for more sinister reasons. 'Nudifying' services - many with brazen terms like 'porn', 'undress', 'X' and 'AI' in their names - have been promoted in thousands of adverts on leading social media platforms. 'These apps are far too easy to access and exploit,' says Rani Govender, Policy Manager for Child Online Safety at the NSPCC, which also wants such apps to be banned.

In June, Meta - the tech giant behind Facebook, Instagram and WhatsApp - announced it was suing the maker of CrushAI, an app that can create sexually explicit deepfakes.

How sick 'nudifying' apps work

THE technology behind 'nudifying' apps - used by children across Britain - is trained on 'massive datasets of real explicit imagery', explains AI consulting expert Wyatt Mayham. 'These "nudifying" apps primarily use generative AI models like GANs (Generative Adversarial Networks) or newer, more sophisticated diffusion models,' he tells The Sun. 'The AI learns the patterns and textures of the human body, allowing it to "inpaint" (fill in) or "outpaint" (extend) a provided image, effectively stripping the clothing from a photo of a fully-clothed person and generating a realistic nude depiction.'

Referring to the rise in AI-generated CSAM among UK schoolchildren, Wyatt, CEO of Northwest AI Consulting, adds: 'The danger goes far beyond a "prank". This is a new form of scalable, psychological abuse. For perpetrators, it's a low-risk, high-impact weapon for bullying, revenge, and control. More sinisterly, it's a powerful tool for "sextortion". A perpetrator can generate a realistic fake nude of a victim and then use it as leverage to extort money or, more commonly, to coerce the victim into providing real explicit images.'

Jurgita Lapienytė, Editor-in-Chief at Cybernews, warns that AI tools are 'advancing quickly'. She tells us: 'Most of the apps are hard to stop because they use anonymous hosting and payments, often outside the UK. Social media giants and tech companies are not moving fast enough to block or report these tools, and current content monitoring often fails to catch them before damage is done.'

Meta alleged the firm had attempted to 'circumvent Meta's ad review process and continue placing' adverts for CrushAI after they were repeatedly removed for violating its rules. The giant added it was taking further steps to 'clamp down' on 'nudifying' apps, including creating new detection technology and sharing information with other tech firms. But experts warn that adverts for such apps - often hosted anonymously and offshore - will only continue to pop up on a plethora of social networks as technology outpaces the law. Many of these adverts disguise their offerings as 'harmless' photo editor apps. Others, however, are more forthcoming. Our investigation found a sponsored advert, launched on Meta's platforms a day earlier, for an AI-based photo app that boasted: 'Undress reality… AI has never been this naughty.'
Meta has since removed the ad. And on Google, we were able to access 'nudifying' tools at the click of a button. One search alone, made from a UK address, brought up two of these tools on the first page of results.

Former FBI cyberspy-hunter Eric O'Neill tells us: 'AI-generated explicit content is widely traded on the dark web, but the real threat has moved into the light. These "nudify" apps are being advertised on mainstream platforms - right where kids are. Today's teens don't need to navigate the dark web. With a few taps on their phone, they can generate and share explicit deepfakes instantly.'

The process - which can destroy victims' lives - takes 'seconds', says Eric, now a cybersecurity expert and author of the upcoming book, Spies, Lies, and Cybercrime. He continues: 'A single photo - say, from a school yearbook or social post - can be fed into one of dozens of freely available apps to produce a hyper-realistic explicit image.'

Legal loophole

Although most 'nudifying' tools contain disclaimers or warnings prohibiting their misuse, experts say these do little to prevent users from acting maliciously. Deepfake nudes shared online among teens are at risk of being sold on the dark web - where predators prowl chat forums for 'AI lovelies' and 'child girlies'. A report published last year by UK-based Anglia Ruskin University's International Policing and Public Protection Research Institute (IPPPRI) reveals the horrors of such forums. One user chillingly wrote: 'My aim is to create a schoolgirl set where she slowly strips.'

How to get help

EVERY 90 minutes in the UK a life is lost to suicide. It doesn't discriminate, touching the lives of people in every corner of society - from the homeless and unemployed to builders and doctors, reality stars and footballers. It's the biggest killer of people under the age of 35, more deadly than cancer and car crashes. And men are three times more likely to take their own life than women. Yet it's rarely spoken of, a taboo that threatens to continue its deadly rampage unless we all stop and take notice, now. If you, or anyone you know, needs help dealing with mental health problems, the following organisations provide support:

Another user dreamed of a 'paedo version of Sims combined with real AI conversational and interactive capabilities', while others called the vile creators of AI-generated CSAM 'artists'. Shockingly, perverts can now even digitally 'stitch' children's faces onto existing video content - including real footage of youngsters being previously sexually abused.

In many cases, girls are the target. 'At the NSPCC, we know that girls are disproportionately targeted, reflecting a wider culture of misogyny - on and offline - that must urgently be tackled,' says Rani. 'Young girls are reaching out to Childline in distress after seeing AI-generated sexual abuse images created in their likeness, and the emotional impact can be devastating.'

'Digital assault'

Earlier this year, the UK Government announced plans to criminalise the creation - not just the sharing - of sexually explicit deepfakes, which experts have praised as a 'critical step'. The change in law will apply to images of adults, with child imagery already covered. The Government will also create new offences for taking intimate images without consent and for installing equipment with the intent to commit these offences.
A Google spokesperson told The Sun: 'While search engines provide access to the open web, we've launched and continue to develop ranking protections that limit the visibility of harmful, non-consensual explicit content. 'Specifically, these systems restrict the reach of abhorrent material like CSAM and content that exploits minors, and are effective against synthetic CSAM imagery as well.' Meta said it has strict rules against content depicting nudity or sexual activity - even if AI-generated - with users able to report violations of their privacy in imagery or videos. It also does not allow the promotion of 'nudify' services.


The Independent, 11 hours ago
US influencer claims innocence over charges laid for Antarctica plane landing
An American pilot and influencer who has been trapped in Antarctica for several weeks maintains that he is 'innocent' of the accusations against him. Ethan Guo was charged by Chilean authorities on June 29 with handing false information to ground control and landing without authorization, but on Monday a judge dropped the charges as part of an agreement with his lawyers and Chile's prosecutors. The deal requires him to give a $30,000 donation to a children's cancer foundation within 30 days to avoid a trial and to leave the country as soon as conditions allow, and it bars him from re-entering Chilean territory for three years.

According to Guo's defence, the teen pilot was granted authorizations to deviate from his initial route - from Punta Arenas, southern Chile, to Ushuaia, Argentina - and land at Teniente Marsh base in Chilean Antarctica due to 'weather and technical circumstances'. 'My client's actions are protected by a presumption of legality arising from the authorizations expressly granted by various DGAC (Directorate General of Civil Aviation) officials,' his lawyer, Jaime Barrientos, said in documents handed to the court. According to Barrientos, evidence was presented that 'Mr. Guo informed the DGAC as soon as possible of the change to the filed flight plan, receiving express authorization to land at said aerodrome'.

Guo, who turned 20 during his stay in Antarctica in July and has maintained his innocence, said in a statement that during his original journey he 'encountered instrument failures and heavy, unreported icing conditions' which created 'an imminent risk of a crash'. 'Due to these cascading failures, Mr. Guo requested and received explicit, direct permission to land at the Marsh base from a high-ranking DGAC official via WhatsApp, an authorization that was subsequently confirmed by the base's air traffic controller,' the statement said. The influencer added that the court's ruling last Monday was 'a direct result of the prosecutor's refusal to acknowledge this clear evidence'.

The prosecutor's office has maintained in several interviews with local media that Guo handed 'false information' to the respective authority and, by doing so, put at risk 'the safety of global air traffic'. 'What the background indicates is that he always had the will and the knowledge that he wanted to reach Antarctica at all costs, putting at risk not only his life, but also the safety of global air traffic,' prosecutor Cristián Crisosto told local Radio Bio Bio in an interview on Wednesday.

Guo made headlines last year when he began a trip in an attempt to become the youngest person to fly solo to all seven continents while collecting donations for research into childhood cancer. But for the past six weeks he has stayed at the Chilean Air Force base where he landed in June. He was not forced to stay there, only to remain in Chilean territory, but because of the severe winter in that part of the southern hemisphere, no flights were available. He has also been unable to fly his small plane, whose future remains uncertain. Crisosto said the plane would probably have difficulty leaving Antarctica because it does not meet the necessary regulations. 'That plane could leave Antarctica in pieces. But I don't see it flying,' he warned.