
Latest news with #CorkerBinning

What are 'nudification' apps and how would a ban in the UK work?

Business Mayor

28-04-2025

  • Politics
  • Business Mayor


The children's commissioner for England is calling for 'nudification' apps to be banned to prevent them generating sexual imagery of children. But what are they, and would a ban work?

Advances in artificial intelligence software have paved the way for the emergence of 'nudification' tools, which are becoming easier to find on social media and search engines. These are apps and websites that produce deepfake nude images of real people using generative AI. This can involve removing clothes, making an image move suggestively, or pasting a head on to a naked body. The results often look highly realistic.

AI tools create images by learning from, and replicating elements of, existing images. Nudification apps are thought to be trained on vast datasets of mostly female images, because they tend to work most effectively on women's bodies. As a result, an estimated 99% of sexually explicit deepfakes accessible online are of women and girls.

Although it is illegal to possess AI-generated sexual content featuring children, the AI models that create these images are not themselves illegal. The children's commissioner is asking the government to legislate to ban AI tools that are designed or marketed as 'nudification' services. This could be achieved through a number of legal mechanisms.

One option would be an amendment to the product safety and metrology bill requiring providers of generative AI tools to carry out risk assessments for illegal and harmful activity, and to take reasonable steps to design that risk out of the product. Tools developed using generative AI models would then have to be risk-assessed before being made available in the UK. Nudification apps in their current form offer no safeguards against illegal activity and so would not be certified for availability in the UK.

The second option is for the government to introduce an AI bill in this parliamentary session that would make providers of generative AI models responsible for preventing their use to nudify children. Technology companies could be legally required to test whether their products can be used to nudify children before launching them in the UK market, and be held to account if their models are used for this purpose.

However, critics of a ban may challenge it on internet freedom grounds, said Danielle Reece-Greenhalgh, a partner at the London law firm Corker Binning. She said it could also be difficult to enforce as AI models improve, making it even harder to distinguish between 'real' and AI-created material.

The children's commissioner does not believe the Online Safety Act contains the provisions required to fully protect children from harm. However, she notes that in the meantime the risk can be partially mitigated through Ofcom's implementation of the act. As providers of sexually explicit or pornographic material, nudification services fall under its scope and are required to verify that users are aged over 18 before allowing them access to content. However, this would not stop adults from making images of children.

Ofcom could also strengthen its provisions to protect children from harm by being proactive in identifying emerging harms. Social media companies are required to carry out risk assessments to comply with the Online Safety Act; this should require them to identify and mitigate the risk to children of content produced by sexually explicit deepfake tools, including content used to promote those tools.

The report also asks the government to provide more support for children to report an intimate image – including false ones created using AI – that has been shared in a public online space, and to get it removed. It could ask Ofcom to require technology companies to embed 'report remove' tools. The report also suggests that sexually explicit deepfake technology be included on PSHE (personal, social, health and economic) curriculums.

Serious Fraud Office to let firms avoid prosecution if they flag up suspected crime

Business Mayor

24-04-2025

  • Business
  • Business Mayor


The Serious Fraud Office (SFO) has said it is prepared to let companies avoid prosecution if they self-report suspected financial crime and cooperate with investigators, in an important change to its previous guidance.

The SFO, which investigates complex financial crimes, fraud and corruption, said companies that flag potential breaches would be offered the chance to negotiate a 'deferred prosecution agreement' (DPA), apart from in some 'exceptional' circumstances. These agreements usually allow the accused to avoid prosecution unless they reoffend or violate other terms during the agreement. Under DPAs, prosecutors agree to suspend legal proceedings in exchange for the company agreeing to conditions such as fines, compensation payments and corporate compliance programmes.

Previously, companies that self-reported to the SFO still ran the risk of a criminal conviction. The new guidance aims to make it more likely that businesses will step forward to report suspected wrongdoing. Nick Ephgrave, the SFO director, said: 'If you have knowledge of wrongdoing, the gamble of keeping this to yourself has never been riskier.'

The anti-fraud agency said genuine cooperation would include the preservation of digital and hard-copy records, and early engagement with authorities. If a company self-reports, the SFO has said it will respond within 48 hours, decide whether to open an investigation within six months, and conclude any DPA within six months of starting negotiations.

However, legal experts have warned it would still be difficult for companies to decide whether to self-report or wait for the SFO to uncover a problem. Andrew Smith, a partner at the law firm Corker Binning, said: 'Mr Ephgrave warns companies against trying to bury their skeletons. But in the unlikely event those skeletons are discovered by the SFO, simply pleading guilty can be a more attractive outcome than an earlier self-report.'

Ephgrave, a former Metropolitan police officer, joined the SFO as director in 2023. In October he said he wanted to improve incentives for individuals who help the SFO, such as paying whistleblowers in a US-style approach. In recent years, the agency has faced a series of big failures in some of its most high-profile cases, such as a failed prosecution of ex-Barclays directors in 2020, the collapse of a trial of ex-Serco executives, and the failure of a decade-long investigation of the mining company ENRC.
