
Activist aid ship 'hit by drones on way to Gaza'
A ship bound for Gaza carrying humanitarian aid and activists has been bombed by drones in international waters off Malta, its organisers say, alleging Israel was to blame.
The Israeli foreign ministry did not respond to a request for comment on the allegation by the Freedom Flotilla Coalition, an international non-governmental group.
The Maltese government said the vessel and its crew were secured in the early hours of Friday morning after a nearby tug assisted with firefighting operations.
Turkey's foreign ministry said Turkish nationals were on board at the time of the incident and it was working with Maltese authorities to transfer them to a safe location.
"We condemn in the strongest terms this attack on a civilian ship," it said, noting there were "allegations that the ship was targeted by Israeli drones".
"All necessary efforts will be made to reveal the details of the attack as soon as possible and to bring the perpetrators to justice," it said.
A ceasefire between Israel and Hamas broke down in March, with each side blaming the other; Israel sent troops back into Gaza and resumed airstrikes.
The NGO published video footage, filmed in darkness, showing a fire on one of its ships, the Conscience. The footage showed lights in the sky in front of the ship, and the sound of explosions could be heard.
"Israeli ambassadors must be summoned and answer to violations of international law, including the ongoing blockade (of Gaza) and the bombing of our civilian vessel in international waters," it said.
The Maltese government said maritime authorities had received a mayday call shortly after midnight local time from a vessel outside of territorial waters, with 12 crew members and four civilians on board, reporting a fire.
It said a nearby tug headed to the scene and launched firefighting operations, and a Maltese patrol vessel was dispatched. After several hours, the vessel and its crew were secure, it said, adding that the crew had refused to board the tug.
A spokeswoman for the NGO, Caoimhe Butterly, said the attack took place as the ship was preparing for activists to board from another vessel. For bureaucratic reasons, she said, a transfer at sea had been planned rather than bringing the ship into harbour.
The coalition said it had been organising a non-violent action under a media blackout in order to avoid any potential sabotage.
The Gaza war started after Hamas-led fighters killed 1,200 people and took 251 hostages to Gaza in the October 7, 2023 attacks, according to Israeli tallies. Since then, Israel's offensive on the enclave has killed more than 52,000 people, according to Palestinian health officials.
Since March 2, Israel has completely cut off all supplies to the 2.3 million residents of the enclave, and food stockpiled during a ceasefire at the start of the year has all but run out, according to international aid agencies.
Israel accuses the Hamas militants who have run Gaza of exploiting aid - which Hamas denies - and says it must keep all supplies out to prevent the fighters from getting them.
Another coalition ship on a similar mission to Gaza in 2010 was stopped and boarded by Israeli troops, and nine activists died. Other ships have similarly been stopped and boarded, without loss of life.
Hamas issued a statement about the incident off Malta, accusing Israel of "piracy" and "state terrorism".
Related Articles

Sydney Morning Herald, 17 minutes ago
As a judge, I'm sounding the alarm: the out-of-control White House is dismantling justice
For 18 years, I have been a criminal trial judge. I believe in the rule of law and independent and impartial justice in both domestic and international criminal law. The present US administration does not hold to those principles, and is now doing all it can to undermine them.

In the case of international criminal law, the International Criminal Court derives from the legacy of Nuremberg where, in 1945, an International Military Tribunal established by the Allied forces tried 22 leading German officials for war crimes. Controversial though it was, the International Military Tribunal demonstrated that justice beyond borders in the form of international criminal law could be real and effective in dealing with the most horrific war crimes, and perhaps be a means of preventing them thereafter. The Nuremberg experience testified to the truth of what many, including Dr Martin Luther King Jr, had said: that 'there can be no peace without justice, and there can be no justice without peace'.

In 1998, the signing of the Rome Statute for the establishment of the ICC reaffirmed the purposes and principles of the United Nations Charter. It meant that all UN member states would refrain from the threat or use of force against the territorial integrity or political independence of any state, or in any other manner inconsistent with the purposes of the UN. These are uncontroversial and necessary international principles. And so, pretty much without fear or favour, in 2002 the ICC went to work.

As with all courts, an important foundation of the ICC was judicial independence and the application of the rule of law. Given the gravity of the court's work, it had to be real and impartial justice – no matter who was accused of serious crimes. The judges needed to be able to conduct investigations and trials free from any form of governmental interference. Now, that is no longer guaranteed. Nor is the court's future.

The Rome Statute states that 'intentionally using starvation of civilians as a method of warfare by depriving them of objects indispensable to their survival, including wilfully impeding relief supplies as provided for under the Geneva Conventions' is a war crime. On 21 November 2024, the pre-trial chamber of the Court issued warrants for the arrest of Hamas commander Mohammed Deif (who was killed in 2024), Israeli Prime Minister Benjamin Netanyahu and Defence Minister Yoav Gallant. The latter two face allegations of being responsible for the war crimes of starvation as a method of warfare, and of intentionally directing an attack against a civilian population. Charges also allege that from at least 8 October 2023 until at least 20 May 2024, Netanyahu and Gallant were responsible for crimes against humanity of murder, persecution, and other inhumane acts. The process, if it is allowed to be completed, will tell the world whether those allegations are proved beyond reasonable doubt.


Sydney Morning Herald, 31 minutes ago
Police searched a man's laptop for malware. What they found is becoming all too common
When police searched the computer of 29-year-old IT worker Aaron Pennesi in March, they were looking for the malware he used to steal personal information from his colleagues at The Forest High School on Sydney's northern beaches. That wasn't all they found.

In an all-too-common turn of events, police stumbled upon child sexual abuse material on a laptop seized for another reason. But something was different about this content. The scenes depicted weren't real. Instead, Pennesi had used a popular AI-generation website to create the child abuse material using search prompts that are too grotesque to publish.

In an even more severe case, a Melbourne man was sentenced to 13 months in prison in July last year for offences including using an artificial-intelligence program to produce child abuse images. Police found the man had used an AI image-generation program and inputted text and images to create 793 realistic images.

As cases involving the commercial generation of AI child abuse material that is completely original and sometimes indistinguishable from the real thing become increasingly common, one expert says the phenomenon has opened a 'vortex of doom' in law enforcement's efforts to stamp out the content online.

Naive misconceptions

As the tug of war over the future of AI plays out in the court of public opinion, one of the more terrifying realities that suggests it could do more harm than good is the ease with which it enables offenders to produce and possess child sexual abuse material. The widespread adoption of image-generation models has been a boon for paedophiles seeking to access or profit from the content online.

Interpol's immediate past director of cybercrime, Craig Jones, says the use of AI in child sexual abuse material online has 'skyrocketed' in the past 12 to 18 months. 'Anybody is able to use an online tool [to access child sexual abuse content], and with the advent of AI, those tools are a lot stronger. It allows offenders to do more,' Jones said.

The AFP-led Australian Centre to Counter Child Exploitation, or ACCCE, received 63,547 reports of online child exploitation from July 2024 to April 2025. That's a 30 per cent increase on the previous financial year, with two months remaining. 'We're seeing quite a significant increase in what's occurring online,' AFP Acting Commander Ben Moses says, noting that those statistics don't differentiate between synthetic and real child abuse content.

That's in line with the legal treatment of the issue; possessing or creating the content in either form is punishable under the same offences. But a common misconception is that AI-generated material shouldn't be taken as seriously or is not as harmful as the traditional type because no child is abused in the creation of the material.

Moses says that while identifying real victims will always be the ACCCE's priority, AI-generated content is being weaponised against real children. 'It can still be very harmful and horrific. [It] can include the ability … to generate abuse in relation to people they know. For those victims, it has significant consequences.'

In 2024, a British man was jailed for 18 years for turning photographs of real children, some younger than 13, into images to sell to other paedophiles online. The sentencing judge called the images 'chilling'.

In another British example, a BBC report in 2024 found evidence that an adults-only VR sex simulator game was being used to create child models for use in explicit sex scenes, and that models had been based on photos taken of real girls in public places.

'The other aspect of it, and what may not be well known, is cases where innocent images of children have been edited to appear sexually explicit, and those photos are then used to blackmail children into providing other intimate content,' Moses says. He says this new 'abhorrent' form of sextortion, and how it opens up new ways for offenders to victimise minors, is of great concern to the ACCCE.

Professor Michael Salter, the director of Childlight UNSW, the Australasian branch of the Global Child Safety Institute, calls the misconception that AI-generated abuse material is less harmful 'really naive'. 'The forensic evidence says that it is a serious risk to children.'

Salter says the demand for synthetic material primarily comes from serious offenders and that, generally, they also possess actual child sexual abuse content. 'It's also important to understand that a lot of the material that they're creating is extremely egregious because they can create whatever they want,' he said. 'The sort of material they're creating is extremely violent, it's extremely sadistic, and it can include imagery of actual children they want to abuse.'

Tech-savvy paedophiles

AI child sexual abuse material first crossed Salter's desk around five years ago. In that time, he's witnessed how offenders adapt to new technologies. As AI advanced, so did the opportunities for paedophiles. He explains that AI was first used to sharpen older material and later to create new images of existing victims. It has now proliferated into offenders training their own engines or using commercially available image-generation sites to create brand-new material. This can include deepfake videos featuring real people.

But Salter says what is more common is still-image generation that is frighteningly readily available. 'We have commercial image generation sites that you can go to right now, and you don't even have to look for child sexual abuse material because the generation of [it] is so popular that these sites often have trending pages, and I've seen sections where the keyword is 'pre-teen', or 'tween', or 'very young'.'

In a 2024 report, the Internet Watch Foundation (IWF) found a 380 per cent increase in reported cases of AI-generated child sexual abuse content online, noting that the material was becoming 'significantly more realistic' and that perpetrators were finding 'more success generating complex 'hardcore' scenarios' involving penetrative sexual activity, bestiality or sadism. 'One user shared an anonymous webpage containing links to fine-tuned models for 128 different named victims of child sexual abuse,' the IWF's July 2024 report on AI child sexual abuse material noted.

The IWF found evidence that AI models that depict known child abuse victims and famous children were being created and shared online. In some of the most perverse cases, this could include the re-victimisation of 'popular' real-life child abuse victims, with AI models allowing perpetrators to generate new images of an abused minor.

The report acknowledged that the usage of these fine-tuned models, known as LoRAs, was likely to go much deeper than the IWF could assess, thanks to end-to-end encrypted peer-to-peer networks that were essentially inaccessible. Moreover, Australia's eSafety Commission warns that child sexual abuse material produced by AI is 'highly scalable'. '[It requires] little effort to reproduce en masse once a model is capable of generating illegal imagery,' a spokesperson said.

Commercial interests

The rapid escalation of the amount of content available online is partially attributed to how AI has enabled the commercialisation of child sexual abuse material. 'Offenders who are quite adept at creating material are essentially taking orders to produce content, and this material is increasingly realistic,' Salter says.

Jones says that in the span of his career, he's seen the provision of child sexual abuse content go from physical photocopies being shared in small groups to it being available online in a couple of clicks. 'Unfortunately, there is a particular audience for child sexual abuse material, and what AI can do is generate that content, so [commercialisation] is serving a demand that is out there.'

In one of the biggest stings involving an AI child abuse enterprise, Danish police, in conjunction with Europol, uncovered a subscription service that commercialised access to the content. The global operation saw two Australian men charged, and 23 others apprehended around the world. 'There were over 237 subscribers to that one matter,' Moses says of Operation Cumberland. 'When we talk about proliferation and people profiting from this type of activity, this is of great concern to us.'

Swamped by the growing sea of content, officers now face the difficulty of identifying which situations depict real children being abused, as opposed to an AI-generated child who doesn't exist. 'It also means that police have to spend quite a lot of time looking at material to determine whether it's real or not, which is quite a serious trauma risk for police as well,' Salter says.

Moses from the ACCCE agrees that it's 'very difficult work' for officers. 'Whilst it is very confronting material, it doesn't compare to the trauma that child victims endure, and there's very much a focus on identifying victims.'

The influx of AI-generated content has complicated its mission in many ways, Moses says, including by robbing crucial resources from the ACCCE's primary goal of rescuing children who are being abused. 'It takes a lot of time to identify real victims, and the concern for us … is that the [AI-generated content] is becoming increasingly harder [to detect], and it takes time away from our people who are trying to identify real victims.'

Law enforcement 'overwhelmed'

While prosecutions for offences involving fake abuse material have increased, the rate hasn't kept up with the pace of the increase in the amount of content found online. Salter says resourcing is one of the biggest challenges facing law enforcement. 'Law enforcement is so overwhelmed with really egregious online sexual exploitation cases … their primary priority is to prevent and rescue the abuse of actual kids.'

He says it's a struggle he's heard across all jurisdictions. 'They're really struggling in terms of people power, in terms of access to the technology that they need to conduct these investigations and to store that amount of material,' Salter says. 'There needs to be a huge uplift right across the law enforcement space.'

Additionally, AI-generated child sexual abuse content requires a whole reset of the way the content is detected. Older machine methods of detecting the content online involved scanning for verified abuse content, which means material has to have already been assessed by a human as illegal before it can be detected. 'The obvious challenge we see with AI-generated material is that it's all new, and so it's very unlikely, through current detection technologies, that we can proactively screen it,' Salter says.

Unregulated threat let loose

It's a global issue that crosses jurisdictions and exists on the internet's severely under-regulated new frontier. But that hasn't deterred Australia's eSafety commissioner, Julie Inman Grant, from introducing world-first industry standards to hold tech companies to account for the content they platform.

The standards came into force in December 2024 and require storage services such as Apple's iCloud and Google Drive, messaging services, and online marketplaces that offer generative AI models to prevent their products from being misused to store or distribute child sexual abuse material and pro-terror content.

'We have engaged with both AI purveyors and the platforms and libraries that host them to ensure they are aware of their obligations under the standards,' an eSafety commission spokesperson said. 'We believe the standards are a significant step in regulating unlawful and seriously harmful content and align with our broader efforts to ensure that AI tools, such as those used to create deepfakes, are held to the highest safety standards.'

The recent passage of the Criminal Code Amendment (Deepfake Sexual Material) Bill 2024 also expanded the available criminal offences relating to non-consensual, sexually explicit AI-generated material.

While international companies can face multimillion-dollar penalties for breaches of the eSafety Commission's standards in Australia, major tech players such as Meta are increasingly adopting end-to-end encryption, which means even the companies themselves can't see what content they're hosting, let alone law enforcement.

Interpol works at the forefront of these issues, often acting as a bridge between authorities and the private sector. Jones observes that while interventions such as Australia's new standards play an important role in setting high standards for tech companies, encryption and other privacy policies make it 'very hard for law enforcement to get those data sets'.

International co-operation is crucial for successfully prosecuting commercial child sexual abuse content cases, and Jones says that in best practice examples, when a global chain is identified, the tech industry is brought in as part of the investigation. 'I'm seeing more of an involvement in the tech sector around supporting law enforcement. But that's sometimes at odds with encryption and things like that,' Jones says.

'I think the tech industry has a duty of care to the communities that they serve. So I don't think it's good enough to say, 'Oh, well, it's encrypted. We don't know what's there.''

Salter takes a more pessimistic view of the tech industry's actions, arguing that most companies are moving away from, not towards, proactively monitoring the presence of child sexual abuse content. 'The emergence of AI has been something of a vortex of doom in the online child protection space,' Salter says.

Online child protection efforts were already overwhelmed, he says, before the tech sector 'created a new threat to children' and 'released [it] into the wild with no child protection safeguards'. 'And that's very typical behaviour.'