
Man accused of using AI images of women for sextortion
Benjamin Michael Jomaa is accused of sending sexual material to three women without their consent on Facebook Messenger and threatening to distribute AI-manipulated images of them online.
The 31-year-old allegedly demanded to be sent more intimate pictures and encouraged one of the women to participate in a sexual act without her consent.
Sexual extortion or 'sextortion' is a form of blackmail where a person threatens to share a nude or sexual image or video unless their victim gives in to their demands.
Police searched Jomaa's home on May 28 at Ettalong Beach on the NSW Central Coast and seized several electronic devices.
He was arrested and charged with 13 offences.
The three women, all aged in their 20s, were allegedly abused by Jomaa online between October 2024 and April 2025.
Jomaa was granted bail after the arrest.
He appeared briefly in Gosford Local Court for the first time on Wednesday, represented by his lawyer Jessica Tohi.
Supported by two people, Jomaa walked out of the court building hidden underneath a coat.
He has been excused from attending court when the matter returns in two weeks and will remain on bail.

Related Articles




Perth Now
2 hours ago
Workers call for support, training as AI 'irresistible'
Workers are calling for support to adapt to the adoption of artificial intelligence, with most saying they have not been offered training despite its increasing use. The Australian Services Union says surveyed white-collar workers are willing to adapt to using the technology but have been left with little guidance on how to navigate the change.

More than 80 per cent of workers reported not having been offered any AI training or consultation on its use at work, despite almost 50 per cent of workplaces introducing these tools. Less than 35 per cent of workplaces have a formal policy guiding the technology's use, according to the research findings.

The union's national secretary Emeline Gaske says artificial intelligence offers huge productivity opportunities to drive new jobs, but the benefits need to extend beyond company profits. "Unions can't resist AI any more than we could've resisted the internet," Ms Gaske said. "But it must come with an industry adjustment package that starts with workforce training."

The call for more support and consultation comes ahead of a productivity roundtable in Canberra from Tuesday, with AI likely to be extensively discussed. The three-day summit, led by Treasurer Jim Chalmers and other ministers, will focus on boosting productivity in the economy, building resilience and strengthening the budget. It could lead to investigating the use of AI to streamline environmental measures and boost housing approvals, according to reported Treasury advice leaked to ABC News.

More than 30,000 vocational education teachers could be trained to use AI under a recently announced year-long trial involving US tech giant Microsoft and Australia's Future Skills Organisation. The organisation has similarly noted that the rate of white-collar workers using AI is more than double the rate of those who have been trained to do so.

Ms Gaske says workers need to share in the benefits of AI, such as through shorter working weeks to give workers time back from productivity improvements, and has called for further consultation. "We have a willing and capable workforce but they need to be brought into the conversation," she said. "By investing in training, guaranteeing workers a say, and ensuring the gains are shared, we can boost productivity and improve the quality of jobs right across the economy."

One surveyed worker reported widespread fear of the technology. "But if the right guidelines are in place I think it could be very useful," they said.

ABC News
16 hours ago
Senior lawyer apologises after filing AI-generated submissions in Victorian murder case
A senior lawyer has apologised to a Victorian judge for filing submissions in a murder case that included fake quotes and non-existent case judgements generated by artificial intelligence (AI).

Defence lawyer Rishi Nathwani, who holds the title of King's Counsel, took "full responsibility" for filing incorrect information in submissions in the case of a teenager charged with murder, according to court documents seen by The Associated Press on Friday. "We are deeply sorry and embarrassed for what occurred," Mr Nathwani told Justice James Elliott on Wednesday, on behalf of the defence team.

The AI-generated errors caused a 24-hour delay in resolving a case that Justice Elliott had hoped to conclude on Wednesday. He later ruled on Thursday that Mr Nathwani's client, who cannot be identified because he is a minor, was not guilty of murder because of mental impairment. "At the risk of understatement, the manner in which these events have unfolded is unsatisfactory," Justice Elliott told lawyers on Thursday.

The fake submissions included fabricated quotes from a speech to the state legislature and non-existent case citations purportedly from the Supreme Court. The errors were discovered by the Justice's associates, who couldn't find the cases and requested that defence lawyers provide copies. The lawyers admitted the citations "do not exist" and that the submission contained "fictitious quotes", court documents say. The lawyers explained they checked that the initial citations were accurate and wrongly assumed the others would also be correct. The submissions were also sent to prosecutor Daniel Porceddu, who did not check their accuracy.

The judge noted that the Supreme Court released guidelines last year on how lawyers use AI. "It is not acceptable for artificial intelligence to be used unless the product of that use is independently and thoroughly verified," Justice Elliott said. The court documents do not identify the generative AI system used by the lawyers.

In a comparable case in the United States in 2023, a federal judge imposed $US5,000 ($7,600) fines on two lawyers and a law firm after ChatGPT was blamed for their submission of fictitious legal research in an aviation injury claim. British High Court Justice Victoria Sharp warned in June that providing false material as if it were genuine could be considered contempt of court or, in the "most egregious cases", perverting the course of justice, which carries a maximum sentence of life in prison.

AP