Police searched a man's laptop for malware. What they found is becoming all too common

The Age | 7 hours ago

When police searched the computer of 29-year-old IT worker Aaron Pennesi in March, they were looking for the malware he used to steal personal information from his colleagues at The Forest High School on Sydney's northern beaches.
That wasn't all they found. In an all-too-common turn of events, police stumbled upon child sexual abuse material on a laptop seized for another reason. But something was different about this content.
The scenes depicted weren't real.
Instead, Pennesi had used a popular AI-generation website to create the child abuse material using search prompts that are too grotesque to publish.
In an even more severe case, a Melbourne man was sentenced to 13 months in prison in July last year for offences including using an artificial-intelligence program to produce child abuse images.
Police found the man had used an AI image-generation program and inputted text and images to create 793 realistic images.
As cases involving the commercial generation of AI child abuse material that is completely original and sometimes indistinguishable from the real thing become increasingly common, one expert says the phenomenon has opened a 'vortex of doom' in law enforcement's efforts to stamp out the content online.
Naive misconceptions
As the tug of war over the future of AI plays out in the court of public opinion, one of the starker signs that the technology could do more harm than good is the ease with which it enables offenders to produce and possess child sexual abuse material.
The widespread adoption of image-generation models has been a boon for paedophiles seeking to access or profit from the content online.
Interpol's immediate past director of cybercrime, Craig Jones, says the use of AI in child sexual abuse material online has 'skyrocketed' in the past 12 to 18 months.
'Anybody is able to use an online tool [to access child sexual abuse content], and with the advent of AI, those tools are a lot stronger. It allows offenders to do more,' Jones said.
The AFP-led Australian Centre to Counter Child Exploitation, or ACCCE, received 63,547 reports of online child exploitation from July 2024 to April 2025. That's a 30 per cent increase on the previous financial year, with two months remaining.
'We're seeing quite a significant increase in what's occurring online,' AFP Acting Commander Ben Moses says, noting that those statistics don't differentiate between synthetic and real child abuse content.
That's in line with the legal treatment of the issue; possessing or creating the content in either form is punishable under the same offences.
But a common misconception is that AI-generated material shouldn't be taken as seriously or is not as harmful as the traditional type because no child is abused in the creation of the material.
Moses says that while identifying real victims will always be the ACCCE's priority, AI-generated content is being weaponised against real children.
'It can still be very harmful and horrific. [It] can include the ability … to generate abuse in relation to people they know. For those victims, it has significant consequences.'
In 2024, a British man was jailed for 18 years for turning photographs of real children, some younger than 13, into images to sell to other paedophiles online. The sentencing judge called the images 'chilling'.
In another British example, a BBC report in 2024 found evidence that an adults-only VR sex simulator game was being used to create child models for use in explicit sex scenes, and that models had been based on photos taken of real girls in public places.
'The other aspect of it, and what may not be well known, is cases where innocent images of children have been edited to appear sexually explicit, and those photos are then used to blackmail children into providing other intimate content,' Moses says.
Moses says this new 'abhorrent' form of sextortion, and how it opens up new ways for offenders to victimise minors, is of great concern to the ACCCE.
Professor Michael Salter, the director of Childlight UNSW, the Australasian branch of the Global Child Safety Institute, calls the misconception that AI-generated abuse material is less harmful 'really naive'.
'The forensic evidence says that it is a serious risk to children.'
'The emergence of AI has been something of a vortex of doom in the online child protection space.'
Professor Michael Salter
Salter says the demand for synthetic material primarily comes from serious offenders and that, generally, they also possess actual child sexual abuse content.
'It's also important to understand that a lot of the material that they're creating is extremely egregious because they can create whatever they want,' he said.
'The sort of material they're creating is extremely violent, it's extremely sadistic, and it can include imagery of actual children they want to abuse.'
Tech-savvy paedophiles
AI child sexual abuse material first crossed Michael Salter's desk around five years ago. In that time, he's witnessed how offenders adapt to new technologies. As AI advanced, so did the opportunities for paedophiles.
He explains that AI was first used to sharpen older material and later to create new images of existing victims. It has now proliferated into offenders training their own engines or using commercially available image-generation sites to create brand-new material.
This can include deepfake videos featuring real people. But Salter says what is more common is still-image generation that is frighteningly readily available.
'We have commercial image generation sites that you can go to right now, and you don't even have to look for child sexual abuse material because the generation of [it] is so popular that these sites often have trending pages, and I've seen sections where the keyword is 'pre-teen', or 'tween', or 'very young'.'
In a 2024 report, the Internet Watch Foundation (IWF) found a 380 per cent increase in reported cases of AI-generated child sexual abuse content online, noting that the material was becoming 'significantly more realistic' and that perpetrators were finding 'more success generating complex 'hardcore' scenarios' involving penetrative sexual activity, bestiality or sadism.
'One user shared an anonymous webpage containing links to fine-tuned models for 128 different named victims of child sexual abuse.'
Internet Watch Foundation's July 2024 AI child sexual abuse material report
The IWF found evidence that AI models that depict known child abuse victims and famous children were being created and shared online. In some of the most perverse cases, this could include the re-victimisation of 'popular' real-life child abuse victims, with AI models allowing perpetrators to generate new images of an abused minor.
The report acknowledged that the usage of these fine-tuned models, known as LoRAs, was likely to go much deeper than the IWF could assess, thanks to end-to-end encrypted peer-to-peer networks that were essentially inaccessible.
Moreover, Australia's eSafety Commission warns that child sexual abuse material produced by AI is 'highly scalable'.
'[It requires] little effort to reproduce en masse once a model is capable of generating illegal imagery,' a spokesperson said.
Commercial interests
The rapid escalation of the amount of content available online is partially attributed to how AI has enabled the commercialisation of child sexual abuse material.
'Offenders who are quite adept at creating material are essentially taking orders to produce content, and this material is increasingly realistic,' Salter says.
Jones says that in the span of his career, he's seen the provision of child sexual abuse content go from physical photocopies being shared in small groups to it being available online in a couple of clicks.
'Unfortunately, there is a particular audience for child sexual abuse material, and what AI can do is generate that content, so [commercialisation] is serving a demand that is out there.'
In one of the biggest stings involving an AI child abuse enterprise, Danish police, working with Europol, uncovered a subscription service that commercialised access to the content. The global operation saw two Australian men charged and 23 others apprehended around the world.
'There were over 237 subscribers to that one matter,' Moses says of Operation Cumberland. 'When we talk about proliferation and people profiting from this type of activity, this is of great concern to us.'
Swamped by the growing sea of content, officers now face the difficulty of identifying which situations depict real children being abused, as opposed to an AI-generated child who doesn't exist.
'It also means that police have to spend quite a lot of time looking at material to determine whether it's real or not, which is quite a serious trauma risk for police as well,' Salter says.
Moses from the ACCCE agrees that it's 'very difficult work' for officers. 'Whilst it is very confronting material, it doesn't compare to the trauma that child victims endure, and there's very much a focus on identifying victims.'
The influx of AI-generated content has complicated the ACCCE's mission in many ways, Moses says, including by diverting crucial resources from its primary goal of rescuing children who are being abused.
'It takes a lot of time to identify real victims, and the concern for us … is that the [AI-generated content] is becoming increasingly harder [to detect], and it takes time away from our people who are trying to identify real victims.'
Law enforcement 'overwhelmed'
While prosecutions for offences involving fake abuse material have increased, they haven't kept pace with the growth in the amount of content found online.
Salter says resourcing is one of the biggest challenges facing law enforcement.
'Law enforcement is so overwhelmed with really egregious online sexual exploitation cases … their primary priority is to prevent and rescue the abuse of actual kids.'
He says it's a struggle he's heard across all jurisdictions.
'They're really struggling in terms of people power, in terms of access to the technology that they need to conduct these investigations and to store that amount of material,' Salter says.
'There needs to be a huge uplift right across the law enforcement space.'
Additionally, AI-generated child sexual abuse content demands a rethink of how such material is detected.
Older automated detection methods relied on matching content against verified abuse material, meaning an image had to have already been assessed by a human as illegal before it could be flagged.
'The obvious challenge we see with AI-generated material is that it's all new, and so it's very unlikely, through current detection technologies, that we can proactively screen it,' Salter says.
Unregulated threat let loose
It's a global issue that crosses jurisdictions and exists on the internet's severely under-regulated new frontier. But that hasn't deterred Australia's eSafety commissioner, Julie Inman Grant, from introducing world-first industry standards to hold tech companies to account for the content they platform.
The standards came into force in December 2024 and require storage services such as Apple's iCloud and Google Drive, messaging services, and online marketplaces that offer generative AI models to prevent their products from being misused to store or distribute child sexual abuse material and pro-terror content.
'We have engaged with both AI purveyors and the platforms and libraries that host them to ensure they are aware of their obligations under the standards,' an eSafety commission spokesperson said.
'We believe the standards are a significant step in regulating unlawful and seriously harmful content and align with our broader efforts to ensure that AI tools, such as those used to create deepfakes, are held to the highest safety standards.'
The recent passage of the Criminal Code Amendment (Deepfake Sexual Material) Bill 2024 also expanded on the available criminal offences relating to non-consensual, sexually explicit AI-generated material.
While international companies can face multimillion-dollar penalties for breaches of the eSafety Commission's standards in Australia, major tech players such as Meta are increasingly adopting end-to-end encryption, which means even the companies themselves can't see what content they're hosting, let alone law enforcement.
Interpol works at the forefront of these issues, often acting as a bridge between authorities and the private sector. Jones observes that while interventions such as Australia's new standards play an important role in setting high standards for tech companies, encryption and other privacy policies make it 'very hard for law enforcement to get those data sets'.
International co-operation is crucial for successfully prosecuting commercial child sexual abuse content cases, and Jones says that in best practice examples, when a global chain is identified, the tech industry is brought in as part of the investigation.
'I'm seeing more of an involvement in the tech sector around supporting law enforcement. But that's sometimes at odds with encryption and things like that,' Jones says.
'I think the tech industry has a duty of care to the communities that they serve. So I don't think it's good enough to say, 'Oh, well, it's encrypted. We don't know what's there.' '
Salter takes a more pessimistic view of the tech industry's actions, arguing that most companies are moving away from, not towards, proactively monitoring the presence of child sexual abuse content.
'The emergence of AI has been something of a vortex of doom in the online child protection space,' Salter says.
Online child protection efforts were already overwhelmed, he says, before the tech sector 'created a new threat to children' and 'released [it] into the wild with no child protection safeguards'.
'And that's very typical behaviour.'
