Massive deepfake AI porn site shutting down for good


Yahoo | 06-05-2025

Yahoo is using AI to generate takeaways from this article. This means the info may not always match what's in the article. Reporting mistakes helps us improve the experience.
Mr. Deepfakes, a site that provides users with nonconsensual, AI-generated deepfake pornography, has shut down.
The site, founded in 2018, is described as the "most prominent and mainstream marketplace" for deepfake porn of celebrities and individuals with no public presence, CBS News reports. Deepfake pornography refers to digitally altered images and videos in which a person's face is pasted onto another's body using artificial intelligence.
Now, the site has notified users it has shut down for good.
"A critical service provider has terminated service permanently. Data loss has made it impossible to continue operation," a notice at the top of the site said, according to 404 Media.
"We will not be relaunching. Any website claiming this is fake," the notice continued. "This domain will eventually expire and we are not responsible for future use. This message will be removed around one week."
Mr. Deepfakes announced it's shutting down for good after years of disseminating nonconsensual, AI-generated deepfake porn (AFP via Getty Images)
The move comes days after Congress passed the "Take It Down Act," a bill championed by First Lady Melania Trump. The bill, now headed to President Donald Trump for his signature, makes it a federal crime to knowingly publish nonconsensual sexual images, including AI-generated deepfakes, and requires websites to remove such content within 48 hours of notice from a victim.
While many states already had laws banning deepfakes and revenge porn, this marks a rare example of federal intervention on the issue.
Henry Ajder, an expert on AI and deepfakes, told CBS News the site was a 'central node' for nonconsensual deepfake porn.
"I'm sure those communities will find a home somewhere else but it won't be this home and I don't think it'll be as big and as prominent. And I think that's critical," Ajder said.
"We're starting to see people taking it more seriously and we're starting to see the kind of societal infrastructure needed to react better than we have, but we can never be complacent with how much resource and how much vigilance we need to give," he added.
Melania Trump pictured in March speaking during a roundtable discussion about the Take It Down Act. The First Lady championed the bill, which was first introduced by Senators Ted Cruz and Amy Klobuchar (Getty Images)
Republican Senator Ted Cruz and Democratic Senator Amy Klobuchar first introduced the bill. Cruz said he was inspired by the story of Elliston Berry, who was the target of deepfake porn shared on Snapchat when she was 14.
However, some critics have spoken out against the bill, calling it too broad and arguing it could lead to the censorship of government critics, LGBTQ+ content and legal pornography, the Associated Press reports.


Related Articles

Hawley Breaks With Republicans to Oppose a Major Crypto Bill

New York Times

While the clash between Elon Musk and President Trump captivated Washington on Thursday, another drama was playing out behind closed doors over a bill to regulate the $250 billion market for stablecoins, which could transform America's relationship with the dollar, upend the credit card industry, and benefit both Musk and Trump.

The bill, the GENIUS Act, is poised to pass the Senate within days. But a prominent Republican, Senator Josh Hawley of Missouri, said that he will vote against it in its current form, warning that it would hand too much control of America's financial system to tech giants. "It's a huge giveaway to Big Tech," Hawley said in an interview.

Hawley, who previously voted against the bill for procedural purposes, is concerned that the legislation would allow tech giants to create digital currencies that compete with the dollar, and he fears that such companies would then be motivated to collect even more data on users' finances. "It allows these tech companies to issue stablecoins without any kind of controls," he said. "I don't see why we would do that."

Similar worries scuttled an effort by Meta to get into stablecoins. In 2019, Jay Powell of the Fed, among others, raised "serious concerns" about Meta's cryptocurrency initiative, called Libra and then Diem. Meta abandoned the project in 2022.

The GENIUS Act has exposed divisions in both parties. Democrats like Senator Elizabeth Warren of Massachusetts oppose the bill, warning it would make it easier for Trump, whose family announced its own USD1 stablecoin in March, to engage in corrupt practices.

Meta platforms showed hundreds of "nudify" deepfake ads, CBS News finds

Yahoo

Meta has removed a number of ads promoting "nudify" apps — AI tools used to create sexually explicit deepfakes using images of real people — after a CBS News investigation found hundreds of such advertisements on its platforms. "We have strict rules against non-consensual intimate imagery; we removed these ads, deleted the Pages responsible for running them and permanently blocked the URLs associated with these apps," a Meta spokesperson told CBS News in an emailed statement.

CBS News uncovered dozens of those ads on Meta's Instagram platform, in its "Stories" feature, promoting AI tools that, in many cases, advertised the ability to "upload a photo" and "see anyone naked." Other ads in Instagram's Stories promoted the ability to upload and manipulate videos of real people. One promotional ad even read "how is this filter even allowed?" as text underneath an example of a nude deepfake. Another promoted its AI product using highly sexualized, underwear-clad deepfake images of actors Scarlett Johansson and Anne Hathaway.

Some of the ads' URLs redirected to websites promoting the ability to animate real people's images and get them to perform sex acts, and some of the applications charged users between $20 and $80 to access these "exclusive" and "advance" features. In other cases, an ad's URL redirected users to Apple's app store, where "nudify" apps were available to download.

An analysis of the advertisements in Meta's ad library found that there were, at a minimum, hundreds of these ads available across the company's social media platforms, including on Facebook, Instagram, Threads, the Facebook Messenger application and Meta Audience Network — a platform that allows Meta advertisers to reach users on mobile apps and websites that partner with the company. According to Meta's own Ad Library data, many of these ads were specifically targeted at men between the ages of 18 and 65, and were active in the United States, European Union and United Kingdom.
A Meta spokesperson told CBS News the spread of this sort of AI-generated content is an ongoing problem and the company faces increasingly sophisticated challenges in trying to combat it. "The people behind these exploitative apps constantly evolve their tactics to evade detection, so we're continuously working to strengthen our enforcement," a Meta spokesperson said. CBS News found that ads for "nudify" deepfake tools were still available on the company's Instagram platform even after Meta had removed those initially flagged.

Deepfakes are manipulated images, audio recordings, or videos of real people that have been altered with artificial intelligence to misrepresent someone as saying or doing something that the person did not actually say or do. Last month, President Trump signed into law the bipartisan "Take It Down Act," which, among other things, requires websites and social media companies to remove deepfake content within 48 hours of notice from a victim. Although the law makes it illegal to "knowingly publish" or threaten to publish intimate images without a person's consent, including AI-created deepfakes, it does not target the tools used to create such AI-generated content.

Those tools do, however, violate platform safety and moderation rules implemented by both Apple and Meta on their respective platforms. Meta's advertising standards policy says, "ads must not contain adult nudity and sexual activity. This includes nudity, depictions of people in explicit or sexually suggestive positions, or activities that are sexually suggestive." Under Meta's "bullying and harassment" policy, the company also prohibits "derogatory sexualized photoshop or drawings" on its platforms. The company says its regulations are intended to block users from sharing or threatening to share nonconsensual intimate imagery.
Apple's guidelines for its app store explicitly state that "content that is offensive, insensitive, upsetting, intended to disgust, in exceptionally poor taste, or just plain creepy" is banned.

Alexios Mantzarlis, director of the Security, Trust, and Safety Initiative at Cornell University's tech research center, has been studying the surge in AI deepfake networks marketing on social platforms for more than a year. He told CBS News in a phone interview on Tuesday that he'd seen thousands more of these ads across Meta platforms, as well as on platforms such as X and Telegram, during that period. Although Telegram and X have what he described as a structural "lawlessness" that allows for this sort of content, he believes Meta's leadership lacks the will to address the issue, despite having content moderators in place.

"I do think that trust and safety teams at these companies care. I don't think, frankly, that they care at the very top of the company in Meta's case," he said. "They're clearly under-resourcing the teams that have to fight this stuff, because as sophisticated as these [deepfake] networks are … they don't have Meta money to throw at it."

Mantzarlis also said he found in his research that "nudify" deepfake generators are available to download on both Apple's app store and Google's Play store, expressing frustration with these massive platforms' failure to enforce their own policies against such content. "The problem with apps is that they have this dual-use front where they present on the app store as a fun way to face swap, but then they are marketing on Meta as their primary purpose being nudification. So when these apps come up for review on the Apple or Google store, they don't necessarily have the wherewithal to ban them," he said.
"There needs to be cross-industry cooperation where if the app or the website markets itself as a tool for nudification on any place on the web, then everyone else can be like, 'All right, I don't care what you present yourself as on my platform, you're gone,'" Mantzarlis added.

CBS News has reached out to both Apple and Google for comment on how they moderate their respective platforms. Neither company had responded by the time of writing.

Major tech companies' promotion of such apps raises serious questions about both user consent and online safety for minors. A CBS News analysis of one "nudify" website promoted on Instagram showed that the site did not prompt any form of age verification prior to a user uploading a photo to generate a deepfake image. Such issues are widespread. In December, CBS News' 60 Minutes reported on the lack of age verification on one of the most popular sites using artificial intelligence to generate fake nude photos of real people. Despite visitors being told that they must be 18 or older to use the site, and that "processing of minors is impossible," 60 Minutes was able to immediately gain access to uploading photos once the user clicked "accept" on the age warning prompt, with no other age verification necessary.

Data also shows that a high percentage of underage teenagers have interacted with deepfake content. A March 2025 study conducted by the children's protection nonprofit Thorn showed that among teens, 41% said they had heard of the term "deepfake nudes," while 10% reported personally knowing someone who had had deepfake nude imagery created of them.

Palantir CEO Warns: AI Race With China Will Have Only One Winner

Yahoo

June 6 - Palantir Technologies (NASDAQ:PLTR) CEO Alex Karp said the global race in artificial intelligence between the U.S. and China is likely to have just one winner, urging Western governments to move faster and adopt a more entrepreneurial mindset.

Speaking on CNBC Thursday, Karp noted that U.S. corporate leaders are uniquely equipped to drive AI development, citing their adaptability and deep industry roots. He said European allies and others in the West should take cues from this approach, especially as AI becomes a defining geopolitical issue.

Shares of Palantir fell 8% to close at $119.90 Thursday. The drop followed Republican concerns over the company's past federal contracts under the Trump administration. Karp dismissed reports that Palantir was involved in unauthorized surveillance of U.S. citizens, calling them unfounded.

The company's work with the U.S. government has expanded in recent years, including a $795 million Department of Defense contract and $113 million in other agreements. Despite recent political scrutiny, Palantir stock has jumped more than 430% over the past year, boosted by growing interest in its government and AI-related services.

This article first appeared on GuruFocus.
