What are 'nudification' apps and how would a ban in the UK work?

Business Mayor | 28 April 2025

The children's commissioner for England is calling for 'nudification' apps to be banned to prevent them generating sexual imagery of children. But what are they and would a ban work?
Advances in artificial intelligence software have paved the way for the emergence of 'nudification' tools, which are becoming easier to find on social media or search engines.
These are apps and websites that produce deepfake nude images of real people using generative AI. This can involve removing clothes, getting an image to move suggestively, or pasting a head on to a naked body. The results often look highly realistic.
AI tools create images by learning from, and replicating elements of, existing images. Nudification apps are thought to be trained on vast datasets of mostly female images because they tend to work most effectively on women's bodies. As a result, 99% of sexually explicit deepfakes accessible online are estimated to be of women and girls.
Although it is illegal to possess AI-generated sexual content featuring children, the AI models that create these images are not illegal.
The children's commissioner is asking the government to legislate to ban AI tools that are designed or marketed as 'nudification' services.
This could be achieved through a number of legal mechanisms.
For example, one option would be through an amendment to the product safety and metrology bill that would ensure that providers of generative AI tools are required to carry out risk assessments for illegal and harmful activity, and to take reasonable steps to design that risk out of the product.
This would mean that tools developed using generative AI models would have to be risk-assessed for illegal and harmful activity before being made available in the UK. Nudification apps in their current form would fail such an assessment and so could not be certified for the UK market.
The second option is for the government to introduce an AI bill in this parliamentary session which would make it the responsibility of providers of generative AI models to prevent their use for nudifying children.
Technology companies could be legally required to test their products against whether they can be used to nudify children before launching them in the UK market, and held to account if their models are used for this purpose.
However, critics of a ban may challenge it on internet freedom grounds, said Danielle Reece-Greenhalgh, a partner at the London law firm Corker Binning. She said a ban could also be difficult to enforce as AI models improve, making it ever harder to distinguish between real and AI-generated material.
The children's commissioner does not believe that the Online Safety Act contains the provisions required to fully protect children from harm.
However, she notes that in the meantime, the risk can be partially mitigated through Ofcom's implementation of the act.
As providers of sexually explicit or pornographic material, nudification services fall within the scope of the act. They are required to verify that users are aged over 18 before allowing them access to content. However, this would not stop adults from making images of children.
Ofcom could also strengthen its provisions to protect children from harm by ensuring it is proactive in identifying emerging harms.
Social media companies are required to carry out risk assessments to comply with the Online Safety Act. This should require them to identify and mitigate the risk to children of content produced by sexually explicit deepfake tools, including content used to promote them.
The report also asks the government to provide more support for children to report an intimate image – including false ones that have been created using AI – that has been shared in a public online space, and to get it removed. It could ask Ofcom to require technology companies to embed 'report remove' tools.
The report also suggests that sexually explicit deepfake technology be included on PSHE (personal, social, health and economic) curriculums.
