26-05-2025
Man who posted deepfake images of prominent Australian women could face $450,000 penalty
The online safety regulator wants a $450,000 maximum penalty imposed on a man who posted deepfake images of prominent Australian women to a website, in the first case of its kind heard in an Australian court.
The eSafety commissioner has launched proceedings against Anthony Rotondo over his failure to remove 'intimate images' of several prominent Australian women from a deepfake pornography website.
The federal court has kept the names of the women confidential.
Rotondo initially refused to comply with the order while he was based in the Philippines, the court heard, but the commissioner launched the case once he returned to Australia.
Rotondo posted the images to the MrDeepFakes website, which has since been shut down.
In December 2023, Rotondo was fined for contempt of court after admitting he breached court orders by not removing the imagery. He later shared his password so the deepfake images could be removed.
A spokesperson for the eSafety commissioner said the regulator was seeking between $400,000 and $450,000 for the breaches of the Online Safety Act.
The spokesperson said the penalty submission reflected the seriousness of the breaches 'and the significant impacts on the women targeted'.
'The penalty will deter others from engaging in such harmful conduct,' they said.
eSafety said the non-consensual creation and sharing of explicit deepfake images caused significant psychological and emotional distress for victims.
The penalties hearing was held on Monday, and the court has reserved its decision.
Separately, federal criminal laws were passed in 2024 to combat explicit deepfakes.
In her opening statement to the Senate committee reviewing the bill in July last year, the eSafety commissioner, Julie Inman Grant, said deepfakes on the internet had increased by 550% since 2019, that pornographic videos made up 99% of deepfake material online, and that 99% of that imagery depicted women and girls.
'Deepfake image-based abuse is not only becoming more prevalent but is also very gendered and incredibly distressing to the victim-survivor,' Inman Grant said.
'Shockingly, thousands of open-source AI apps like these have proliferated online and are often free and easy to use by anyone with a smartphone.
'So these apps make it simple and cost-free for the perpetrator, while the cost to the target is one of lingering and incalculable devastation.'