
Latest news with #deepfake

AI Apps Are Undressing Women Without Consent And It's A Problem

Forbes

2 days ago

  • Forbes

AI nudification apps are making it frighteningly easy to create fake sexualized images of women and teens, sparking a surge in abuse, blackmail and online exploitation.

The rise of AI 'nudification' tools makes it shockingly easy for anyone to create a fake naked image of you, or of your family, friends or colleagues, using nothing more than a photo and one of many readily available AI apps. The existence of tools that let users create non-consensual sexualized images might seem like an inevitable consequence of the development of AI image generation. But with 15 million downloads since 2022, and deepfaked nude content increasingly used to bully victims and expose them to danger, it's not a problem that society can or should ignore.

There have been calls for the apps to be banned, and criminal penalties for creating and spreading non-consensual intimate images have been introduced in some countries. But this has done little to stem the flood, with one in four 13 to 19-year-olds reportedly exposed to fake, sexualized images of someone they know. Let's look at how these tools work, what the real risks are, and what steps we should be taking to minimize the harms that are already being caused.

What Are Nudification Apps And What Are The Dangers?

Nudification apps use AI to create naked or sexualized images of people from the sort of everyday, fully-clothed images that anyone might upload to Facebook, Instagram or LinkedIn. While men are occasionally targets, research suggests that 99 percent of non-consensual, sexualized deepfakes feature women and girls. Overwhelmingly, the technology is used as a form of abuse to bully, coerce or extort victims, and media coverage suggests this is increasingly having a real impact on women's lives. While faked nude images can be humiliating and potentially career-affecting for anyone, in some parts of the world they could leave women at risk of criminal prosecution or even serious violence.

Another shocking factor is the growing number of fake images of minors being created, which may or may not be derived from images of real children. The Internet Watch Foundation reported a 400 percent rise in the number of URLs hosting AI-generated child sex abuse content in the first six months of 2025. This type of content is seen as particularly dangerous, even when no real children are involved, with experts saying it can normalize abusive images, fuel demand, and complicate law enforcement investigations. Unfortunately, media reports suggest that criminals have a clear financial incentive to get involved, with some making millions of dollars from selling fake content.

So, given the simplicity and scale with which these images can be created, and the devastating consequences they can have on lives, what's being done to stop it?

How Are Service Providers And Legislators Reacting?

Efforts to tackle the issue through regulation are underway in many jurisdictions, but so far progress has been uneven. In the US, the Take It Down Act makes online services, including social media, responsible for taking down non-consensual deepfakes when asked to do so. And some states, including California and Minnesota, have passed laws making it illegal to distribute sexually explicit deepfakes. In the UK, there are proposals to go further by imposing penalties for making, not simply distributing, non-consensual deepfakes, as well as an outright ban on nudification apps themselves. However, it isn't clear how the tools would be defined and differentiated from AI used for legitimate creative purposes.

China's generative AI measures contain several provisions aimed at mitigating the harm of non-consensual deepfakes. Among these are requirements that tools should have built-in safeguards to detect and block illegal use, and that AI content should be watermarked in a way that allows its origin to be traced.

One frustration for those campaigning for a solution is that authorities haven't always seemed willing to treat AI-generated image abuse as seriously as photographic image abuse, due to a perception that it 'isn't real'. In Australia, this prompted the government's commissioner for online safety to call on schools to ensure all incidents are reported to police as sex crimes against children.

Of course, online service providers have a hugely important role to play, too. Just this month, Meta announced that it is suing the makers of the CrushAI app for attempting to circumvent its restrictions on promoting nudification apps on its Facebook platform. This came after online investigators found that the makers of these apps are frequently able to evade measures put in place by service providers to limit their reach.

What Can The Rest Of Us Do?

The rise of AI nudification apps should act as a warning that transformative technologies like AI can change society in ways that aren't always welcome. But we should also remember that the post-truth age and 'the end of privacy' are just possible futures, not guaranteed outcomes. How the future turns out will depend on what we decide is acceptable or unacceptable now, and the actions we take to uphold those decisions.

From a societal point of view, this means education. Critically, there should be a focus on the behavior and attitudes of school-age children, to make them aware of the harm that can be caused. From a business point of view, it means developing an awareness of how this technology can affect workers, particularly women. HR departments should ensure there are systems and policies in place to help those who may become victims of blackmail or harassment campaigns involving deepfaked images or videos.

Technological solutions also have a role to play in detecting when these images are transferred and uploaded, and potentially removing them before they can cause harm. Watermarking, filtering and collaborative community moderation could all be part of the solution. Failing to act decisively now will mean that deepfakes, nude or otherwise, are likely to become an increasingly problematic part of everyday life.

Why NZ should be clamping down on deepfake images

RNZ News

3 days ago

  • Entertainment
  • RNZ News

Denmark is set to clamp down on deepfake images, giving citizens copyright over their own likeness and voice. A deepfake is an image, video or audio recording that has been digitally altered to make a person appear to be someone else. Deepfakes have become increasingly sophisticated with the rapid advancement of AI, and recent studies have shown that the vast majority of people can't distinguish deepfake images from real ones. So are New Zealand's laws fit for purpose? Jesse finds out.

Schoolboy, 17, investigated on suspicion of using AI to make deepfake nudes of his female classmates

Daily Mail

3 days ago

  • Politics
  • Daily Mail

A 17-year-old schoolboy in Spain is under investigation after allegedly using artificial intelligence to create deepfake nude images of his female classmates, which he is suspected of selling online. The investigation began after 16 young women, all students at an educational institute in Valencia, southeastern Spain, reported that AI-generated sexual images of themselves were circulating on social media. The images showed the minors naked and were allegedly being sold to others.

The first complaint was lodged in December, when a teenage girl informed police that an account had been created under her name, with AI-generated videos and images depicting her in a compromising position. 'Photos of various people, all of them minors, appeared on this account. All these photos had been modified from the originals, which had been manipulated so that the people in them appeared completely naked,' the Spanish Civil Guard said in a statement.

The suspect, a 17-year-old boy, is now facing investigation for the alleged corruption of minors. Authorities are continuing to gather evidence to determine whether he is responsible for creating and distributing the explicit images.

This alarming case comes at a time when AI-driven sexual exploitation is on the rise, particularly among minors. Spain is no stranger to the phenomenon: in 2023, a similar case in Extremadura saw 15 minors investigated for using AI to create explicit images of their female schoolmates. The offenders were later sentenced to probation.

Nor is the deepfake issue confined to Spain. Celebrities around the world, including pop stars like Taylor Swift and politicians like US Congresswoman Alexandria Ocasio-Cortez, have fallen victim to AI-generated pornography. In the UK, a Channel 4 investigation found that more than 250 British celebrities had been targeted, with their faces superimposed onto explicit videos using AI.

Although the Spanish government pledged in March 2023 to introduce laws to criminalise the creation of AI-generated sexual content without consent, the bill has yet to be passed by parliament. Currently, cases like these often fall into legal limbo, with existing laws not explicitly addressing AI-manipulated imagery.

In the UK, however, the Online Safety Act 2023 has criminalised the sharing of explicit deepfake content without consent. Offenders who create or share such material maliciously now face criminal charges, with the possibility of imprisonment and unlimited fines.

'It is unacceptable that one in three women have been victims of online abuse. This demeaning and disgusting form of chauvinism must not become normalised,' said Victims Minister Alex Davies-Jones. 'We are bearing down on violence against women – whatever form it takes.'

Baroness Jones, the UK's Technology Minister, also condemned the rise in intimate image abuse, saying: 'The rise of intimate image abuse is a horrifying trend that exploits victims and perpetuates a toxic online culture. These acts are not just cowardly, they are deeply damaging, particularly for women and girls who are disproportionately targeted.'

The rapid development of AI technology has made it easier than ever for perpetrators to create and distribute explicit images without the knowledge or consent of their victims. With new cases emerging, there are growing calls for stricter legislation worldwide to keep pace with the threat. Tech companies are also under increasing pressure to remove deepfake content from their platforms and take stronger measures to prevent its creation and distribution.

Police in Spain arrest schoolboy for making AI nude images of classmates

Yahoo

3 days ago

  • Yahoo

Spanish police said on Sunday they were investigating a 17-year-old on suspicion of using artificial intelligence to create deepfake nude images of female classmates for sale. Sixteen young women at an educational institute in Valencia, in southeastern Spain, complained about AI-generated images of themselves circulating on social media and online.

In December, a teenage girl complained to police that AI-generated video and faked photos resembling her "completely naked" were posted on a social media account started under her name. 'Photos of various people, all of them minors, appeared on this account. All these photos had been modified from the originals, which had been manipulated so that the people in them appeared completely naked,' the Spanish Civil Guard said in a statement on Sunday.

A 17-year-old boy is under investigation for the alleged corruption of minors.

The Spanish government said in March that it would put forward a law to treat deepfaked sexual imagery created by AI without consent as a crime, but the bill has so far not been passed by parliament. In September 2023, Spain was shocked when 15 minors in Extremadura, in southwest Spain, were investigated for using AI to produce fake naked images of their female schoolmates. They were later sentenced to a year's probation.

Police in Spain arrest schoolboy for making AI nude images of classmates

The Independent

3 days ago

  • The Independent

Spanish police said on Sunday they were investigating a 17-year-old on suspicion of using artificial intelligence to create deepfake nude images of female classmates for sale. Sixteen young women at an educational institute in Valencia, in southeastern Spain, complained about AI-generated images of themselves circulating on social media and online.

In December, a teenage girl complained to police that AI-generated video and faked photos resembling her "completely naked" were posted on a social media account started under her name. 'Photos of various people, all of them minors, appeared on this account. All these photos had been modified from the originals, which had been manipulated so that the people in them appeared completely naked,' the Spanish Civil Guard said in a statement on Sunday.

A 17-year-old boy is under investigation for the alleged corruption of minors.

The Spanish government said in March that it would put forward a law to treat deepfaked sexual imagery created by AI without consent as a crime, but the bill has so far not been passed by parliament. In September 2023, Spain was shocked when 15 minors in Extremadura, in southwest Spain, were investigated for using AI to produce fake naked images of their female schoolmates. They were later sentenced to a year's probation.
