
Latest news with #OnlineSafetyFramework

Children's Ombudsman hugely concerned over use of AI 'nudify' apps on images of underage girls

The Journal

03-05-2025

  • Politics
  • The Journal

Children's Ombudsman hugely concerned over use of AI 'nudify' apps on images of underage girls

THE CHILDREN'S OMBUDSMAN has said he is 'hugely concerned' about the potential of AI apps that can be used by anyone to create sexually explicit images of children.

Dr Niall Muldoon has warned that stronger laws are needed to tackle the scourge of so-called 'nudification' apps, which allow real photos of women and girls to be edited by artificial intelligence to produce deepfake images that make them appear naked.

Nudification apps can be downloaded via online app stores, though some have been removed by Apple and Google; others can be accessed via a web browser by anyone who has a URL to the relevant app. Although sharing non-consensual sexual images is a crime in Ireland under the Harassment, Harmful Communications and Related Offences Act (also known as Coco's Law), legal experts have said the legislation does not cover the creation of deepfakes.

Tens of thousands of ads for these apps have appeared on Facebook and Instagram in recent months, and they continue to be pushed to Irish users despite Meta's repeated attempts to remove them for breaching the company's advertising rules.

'The ease of access by children to this type of technology is a huge concern to the Ombudsman for Children's Office (OCO),' Muldoon told The Journal. 'It is difficult to comprehend any possible need for these apps when the risk of abuse and sexual exploitation of children is so high.'

He called for Coimisiún na Meán and the European Commission to strengthen their oversight of major technology companies under the Digital Services Act, to ensure that the apps are not being recommended to children and young people online. A spokesperson for Coimisiún na Meán said that the Online Safety Framework makes big tech platforms accountable for how they protect people, especially children, from harm online.
The European Commission's spokesperson for tech sovereignty, Thomas Regnier, said the commission is aware that ads for services to create pornographic deepfakes of women were present on Facebook and Instagram. He also said large tech companies have an obligation to ensure measures are in place to mitigate risks to users.

A spokesperson for Meta said the company prohibits the display of nudity or sexual activity in its ads and removes ads that violate its policies, but that bad actors are continually evolving their tactics to avoid enforcement.

Nudification apps have already attained notoriety in other countries, including the United States, where dozens of teenage girls have been targeted in schools in California, New Jersey and Washington. Earlier this week, the children's commissioner for England called for the apps to be banned after publishing a report which found that deepfake nudification apps disproportionately target women and girls.

The report contained interviews with a number of teenage girls, some of whom said they had already changed their online behaviour as a result of nudification technology. 'This chilling effect is causing them to take steps to keep themselves safe, which often requires them to limit their behaviour in some way,' the report said. 'This pattern of behaviour is similar to girls avoiding walking home alone at night, or not going to certain public places alone.'

The Dublin Rape Crisis Centre previously said it was 'deeply concerned' about the capacity of deepfake images to 'amplify harm to women' and said they should not be available to download.

What are nudification apps and how do they work?

Nudification apps can be downloaded via app stores (if they have not already been removed), or accessed via a web browser using a URL; certain bots on the messaging app Telegram also offer nudification services.
The apps encourage users to upload a photo of any woman, and offer to produce a new, deepfake version of the same image in which the person appears without clothes.

The apps are thought to have been trained using open-source artificial intelligence models, in which the underlying code is freely available for anyone with the skills to copy, tweak and use for whatever purpose they want. In the case of nudification apps, the artificial intelligence works by creating new images based on its attempts to replicate existing images it has been trained on. The models are specifically thought to have been trained on vast amounts of explicit images of women, which is why they tend to work only on women and teenage girls. The artificial intelligence is unable to tell when a person is underage or that such images are illegal.

Graphika, a US company that tracks online disinformation, has said that open-source AI models are 'the primary driver' behind a surge in the creation and dissemination of non-consensual images of adults, including through the use of nudification apps. The UK-based Internet Watch Foundation has also said that creators of child sexual abuse material have used legally available open-source AI models to create explicit deepfake images of children.

An ad for a nudification app seen on Facebook (Meta Ad Library)

Deepfake economy

Graphika has also warned that nudification services and the creation of sexually explicit deepfake images have become a 'fully-fledged online industry', which some have dubbed the 'deepfake economy'. Nudification apps often seek payment to create deepfake images, and they can also be used as part of targeted harassment campaigns and for sextortion. In many cases, links to nudification services can be found through Google searches.
The Journal has also uncovered thousands of targeted ads for nudification apps being pushed to Irish social media users on Facebook and Instagram on an ongoing basis, which claim the apps can 'erase' or 'see through' the clothes of any woman. The ads entice users with claims such as 'one click to undress', 'upload image, you can see anything about her' and 'your friends in transformed photos'.

The ads link to app stores, where AI editing apps can be downloaded, and to third-party websites that can be accessed by anyone with the relevant URL. They often feature side-by-side images of a woman clothed and the same image of her naked or partly naked; other ads feature videos of women dancing or talking, which occasionally flash to show the woman with no clothes. Some versions of the ads use AI-generated images of women, but others use images of real women that appear to be taken from social media.

The ads tend to feature on fake profiles with small numbers of followers, but which appear to be somewhat co-ordinated: different pages use the same names and images, or claim to be based in similar locations. Many share different links that redirect to the same website, in an apparent attempt to avoid falling foul of Meta's advertising rules. Since the beginning of April, The Journal has found dozens of pages that have advertised nudification services via more than 20 unique links, all of which redirect users to a single web-based app.

Meta has removed the majority of ads for these services, though some remain active; in some cases, ads were only removed once they were flagged by The Journal, while links that were not shared with Meta remained online.
If you have been affected by any of the issues mentioned in this article, you can reach out for support through the following helplines:

  • Dublin Rape Crisis Centre - 1800 77 8888 (free, 24-hour helpline)
  • Samaritans - 116 123 or email jo@ (suicide, crisis support)
  • Pieta - 1800 247 247 or text HELP to 51444 (suicide, self-harm)
  • Teenline - 1800 833 634 (for ages 13 to 19)
  • Childline - 1800 66 66 66 (for under 18s)

Irish watchdog to prioritise child safety online in 2025

Euronews

25-04-2025

  • Business
  • Euronews

Irish watchdog to prioritise child safety online in 2025

The Irish Coimisiún na Meán, the national regulator tasked with overseeing the Digital Services Act (DSA) in Ireland, has made child safety an enforcement priority in 2025, according to its work programme published on Thursday.

On a national level, the regulator said it will ensure that platforms meet their obligations to keep children safe online through the implementation and supervision of the Online Safety Framework. Provisions restricting harmful content such as cyberbullying, the promotion of self-harm or suicide and the promotion of eating disorders, as well as dangerous challenges, along with detailed provisions on age assurance and parental controls, come into effect in July.

The DSA, which started applying to all platforms in February 2024, obliges companies to comply with, among other things, transparency and election integrity requirements. The European Commission oversees the 25 largest online platforms, those with more than 45 million users per month, while national watchdogs oversee all platforms below that threshold.

Coimisiún na Meán sits as vice-chair of an EU working group developing guidelines on protecting children under the DSA. Those guidelines are expected this year. 'We will continue our involvement in some of the European Commission's open investigations under the DSA which are looking at issues relevant to children,' the work programme said.

The Commission has opened a number of investigations into potential breaches of the DSA, but none of these probes has been wrapped up yet. Last year, it began investigating Meta's Facebook and Instagram because it suspects that their algorithms may stimulate behavioural addictions in children, as well as create so-called 'rabbit-hole effects'. The Commission is also concerned about the age-assurance and verification methods put in place by Meta.
Last month, a senior official at the Coimisiún na Meán warned that EU regulators need a common approach in enforcing the online platform rules, because the DSA leaves room for interpretation.

Media regulator has ‘full arsenal of tools' to tackle online child sex material

Irish Independent

25-04-2025

  • Politics
  • Irish Independent

Media regulator has ‘full arsenal of tools' to tackle online child sex material

Ireland's media regulator says it has a 'full arsenal of tools' to target social media sites that publish and promote child sex abuse material and AI-generated sex abuse images. Coimisiun na Mean, which published a three-year strategy and a 2025 work programme on Thursday, said that sex abuse images appearing online are increasing in prevalence. Niamh Hodnett, the coimisiun's online safety commissioner, said it holds social media platforms to account under the Online Safety Framework. Child sex abuse material is covered by the Digital Services Act (DSA).

Irish media regulator has 'arsenal of tools' to tackle online child sex material

Irish Daily Mirror

24-04-2025

  • Politics
  • Irish Daily Mirror

Irish media regulator has 'arsenal of tools' to tackle online child sex material

Ireland's media regulator says it has a "full arsenal of tools" to target social media sites that publish and promote child sex abuse material and AI-generated sex abuse images.

Coimisiun na Mean, which published a three-year strategy and a 2025 work programme on Thursday, said that sex abuse images appearing online are increasing in prevalence. Niamh Hodnett, the coimisiun's online safety commissioner, said it holds social media platforms to account under the Online Safety Framework. Child sex abuse material is covered by the Digital Services Act (DSA).

The coimisiun said that two of the priorities under its revised strategy are an election integrity strategy and protections for children at risk of online harm. It will soon develop a pilot programme for children at imminent risk of harm from online content, as well as an election integrity strategy across all media sources, and create educational materials relating to online hate.

Speaking at its offices in south Dublin on Thursday, Ms Hodnett said that further online protection for children will begin in July this year when the final part of the Online Safety Code comes into effect.

"This includes restricting harmful video content such as cyber-bullying, the promotion of eating or feeding disorders, the promotion of self-harm or suicide, as well as dangerous challenges," she said. "There are detailed provisions for age assurance and parental controls in relation to adult content. We'll continue to supervise platforms with respect to their compliance with the Online Safety Framework.

"The pilot programme is to inform our approach as to how we can best protect children online with our strategic outcome.

"There was an expert report prepared in relation to individual complaints, and that advised waiting a year from when the Online Safety Code is in place before we look to putting in place an individual complaints framework and what that would look like, starting with children.

"We hope, through this pilot programme, to address children who are in imminent danger and imminent harm in a systematic way. We do it at the moment on an ad hoc basis. Indeed, even yesterday evening, my colleague John (Evans) was dealing with a priority one instance of a child in harm.

"At the moment, when those types of contacts come into our contact centre, they're given priority one status, and we deal with them. But what that pilot scheme would do is to look at doing that in a systemic way, so that's something we're scoping out, and that will then inform our future work in relation to individual complaints."

Ms Hodnett said the regulator is becoming increasingly concerned about AI-generated child sex abuse material.

"It's ever-increasing in its prevalence and there's two types we're concerned about. One is self-generated child sex abuse material, so that would be generated by children, often in their bedroom unbeknownst to their parents, for extortion or other purposes.

"Then the other we're concerned about is AI-generated child sex abuse material. So this is being generated by artificial intelligence. Both are increasing in prevalence, and we're very concerned about that.

"We are responsible for holding the platforms to account under the Online Safety Framework. Child sex abuse material is covered by the Digital Services Act, so in terms of our close supervision of the platforms, that's an area that we engage with them regularly on, and that we're very concerned about.

"We have the full arsenal of tools within our playbook to be able to deal with that."

She said the coimisiun recently met with OpenAI to discuss concerns about online safety, including AI-generated sex abuse images.

John Evans, the Digital Services Commissioner, said that OpenAI's user numbers are growing. He said: "As part of our supervision programme, we would have relationships with all of the platforms. So, say we have 15 of the 25 very large online platforms based in Ireland. So we have relationships with all of those.

"Some of the others are getting bigger. So OpenAI, for example, its user numbers have been growing.

"We pay attention to how they're moving, and how important they become in the ecosystem in terms of the user numbers, or the more risky they become as we perceive them because of the kind of content that they're able to provide. We will pay more and more close attention to them.

"It's a risk-based approach to supervision; there's something like 150 or so platforms out there. So we need a way of focusing our activity."

Mr Evans said that when engaging with large tech firms, the coimisiun would rather see a change in their behaviour than have to threaten them with enforcement measures or big fines. However, he said the coimisiun and the European Commission are using the "sharper tools" in their toolbox, including investigations into some social media companies.
