Latest news with #ElectronicFrontierFoundation


TechCrunch
24-05-2025
- Politics
- TechCrunch
Why a new anti-revenge porn law has free speech experts alarmed
Privacy and digital rights advocates are raising alarms over a law that many would expect them to cheer: a federal crackdown on revenge porn and AI-generated deepfakes. The newly signed Take It Down Act makes it illegal to publish nonconsensual explicit images — real or AI-generated — and gives platforms just 48 hours to comply with a victim's takedown request or face liability. While widely praised as a long-overdue win for victims, experts have also warned that its vague language, lax standards for verifying claims, and tight compliance window could pave the way for overreach, censorship of legitimate content, and even surveillance.

'Content moderation at scale is widely problematic and always ends up with important and necessary speech being censored,' India McKinney, director of federal affairs at the Electronic Frontier Foundation, a digital rights organization, told TechCrunch.

Online platforms have one year to establish a process for removing nonconsensual intimate imagery (NCII). While the law requires that takedown requests come from victims or their representatives, it asks only for a physical or electronic signature — no photo ID or other form of verification is needed. That likely aims to reduce barriers for victims, but it could create an opportunity for abuse.

'I really want to be wrong about this, but I think there are going to be more requests to take down images depicting queer and trans people in relationships, and even more than that, I think it's gonna be consensual porn,' McKinney said.

Senator Marsha Blackburn (R-TN), a co-sponsor of the Take It Down Act, also sponsored the Kids Online Safety Act, which puts the onus on platforms to protect children from harmful content online. Blackburn has said she believes content related to transgender people is harmful to kids. Similarly, the Heritage Foundation — the conservative think tank behind Project 2025 — has also said that 'keeping trans content away from children is protecting kids.'

Because of the liability platforms face if they don't take down an image within 48 hours of receiving a request, 'the default is going to be that they just take it down without doing any investigation to see if this actually is NCII or if it's another type of protected speech, or if it's even relevant to the person who's making the request,' said McKinney.

Snapchat and Meta have both said they are supportive of the law, but neither responded to TechCrunch's requests for more information about how they'll verify whether the person requesting a takedown is a victim. Mastodon, a decentralized platform that hosts its own flagship server that others can join, told TechCrunch it would lean toward removal if verifying the victim proved too difficult.

Mastodon and other decentralized platforms like Bluesky or Pixelfed may be especially vulnerable to the chilling effect of the 48-hour takedown rule. These networks rely on independently operated servers, often run by nonprofits or individuals.
Under the law, the FTC can treat any platform that doesn't 'reasonably comply' with takedown demands as committing an 'unfair or deceptive act or practice' – even if the host isn't a commercial entity.

'This is troubling on its face, but it is particularly so at a moment when the chair of the FTC has taken unprecedented steps to politicize the agency and has explicitly promised to use the power of the agency to punish platforms and services on an ideological, as opposed to principled, basis,' the Cyber Civil Rights Initiative, a nonprofit dedicated to ending revenge porn, said in a statement.

Proactive monitoring

McKinney predicts that platforms will start moderating content before it's disseminated so they have fewer problematic posts to take down in the future. Platforms are already using AI to monitor for harmful content.

Kevin Guo, CEO and co-founder of AI-generated content detection startup Hive, said his company works with online platforms to detect deepfakes and child sexual abuse material (CSAM). Some of Hive's customers include Reddit, Giphy, Vevo, Bluesky, and BeReal.

'We were actually one of the tech companies that endorsed that bill,' Guo told TechCrunch. 'It'll help solve some pretty important problems and compel these platforms to adopt solutions more proactively.'

Hive's model is software-as-a-service, so the startup doesn't control how platforms use its product to flag or remove content. But Guo said many clients insert Hive's API at the point of upload to monitor content before anything is sent out to the community.

A Reddit spokesperson told TechCrunch the platform uses 'sophisticated internal tools, processes, and teams to address and remove' NCII. Reddit also partners with the nonprofit SWGfl to deploy its StopNCII tool, which scans live traffic for matches against a database of known NCII and removes accurate matches. The company did not share how it would ensure that the person requesting the takedown is the victim.

McKinney warns this kind of monitoring could extend into encrypted messages in the future. While the law focuses on public or semi-public dissemination, it also requires platforms to 'remove and make reasonable efforts to prevent the reupload' of nonconsensual intimate images. She argues this could incentivize proactive scanning of all content, even in encrypted spaces. The law doesn't include any carve-outs for end-to-end encrypted messaging services like WhatsApp, Signal, or iMessage. Meta, Signal, and Apple have not responded to TechCrunch's request for more information on their plans for encrypted messaging.

Broader free speech implications

On March 4, Trump delivered a joint address to Congress in which he praised the Take It Down Act and said he looked forward to signing it into law. 'And I'm going to use that bill for myself, too, if you don't mind,' he added. 'There's nobody who gets treated worse than I do online.'

While the audience laughed at the comment, not everyone took it as a joke. Trump hasn't been shy about suppressing or retaliating against unfavorable speech, whether that's labeling mainstream media outlets 'enemies of the people,' barring The Associated Press from the Oval Office despite a court order, or pulling funding from NPR and PBS.

On Thursday, the Trump administration barred Harvard University from accepting foreign student admissions, escalating a conflict that began after Harvard refused to adhere to Trump's demands that it make changes to its curriculum and eliminate DEI-related content, among other things.
In retaliation, Trump has frozen federal funding to Harvard and threatened to revoke the university's tax-exempt status.

'At a time when we're already seeing school boards try to ban books and we're seeing certain politicians be very explicit about the types of content they don't want people to ever see, whether it's critical race theory or abortion information or information about climate change…it is deeply uncomfortable for us with our past work on content moderation to see members of both parties openly advocating for content moderation at this scale,' McKinney said.


Time of India
23-05-2025
- Politics
- Time of India
ET Explainer: What is Trump's Take It Down Act to tackle 'revenge porn'
US President Donald Trump on Monday signed the Take It Down Act, aimed at tackling non-consensual sexually explicit images, or 'revenge porn'—whether real or AI-generated deepfakes—being published online. This comes as the internet has seen several high-profile cases of non-consensual deepfakes of popular celebrities being circulated online, while social media platforms like X and Meta have rolled back content moderation initiatives in countries like the US. ET's Annapurna Roy explains what the new law does and what it means for these platforms.

What does the law say?
The law, officially called the Tools to Address Known Exploitation by Immobilizing Technological Deepfakes on Websites and Networks Act, makes it a federal crime in the US to knowingly publish intimate images – either authentic or computer-generated – of adults without their consent, as well as of minors. Those who publish such content of minors under the age of 18 can be fined and face up to three years in prison. Where the victims are adults, offenders face up to two years in prison. The Act also imposes penalties on those who threaten to publish such content.

What did Trump say?
'With the rise of AI image generation, countless women have been harassed with deepfakes and other explicit images distributed against their will. This is…wrong… Just so horribly wrong,' Trump said at the signing ceremony. The law will address this 'abusive situation', he said. First Lady Melania Trump, who is said to have championed the bill, said AI and social media are addictive for the younger generation and that new technologies can be 'weaponised'. With the law, vulnerable people can be 'better protected from their image or identity being abused through non-consensual intimate imagery,' she said.

What does it mean for online platforms?
Platforms will have to remove such illegal content within 48 hours of a victim's request. They will also have to make efforts to delete duplicates of this content. Critics, however, have argued that measures such as the takedown provision may be misused. Further, given the short window to take content down, platforms, especially smaller ones, may not be able to verify claims adequately, according to the Electronic Frontier Foundation. Platforms may be forced to weaken encryption to be able to monitor and flag such content better, and to use flawed technology to crack down on duplicates.

Kuwait Times
21-05-2025
- Politics
- Kuwait Times
Trump signs bill criminalizing 'revenge porn'
WASHINGTON: US President Donald Trump signed a bill on Monday making it a federal crime to post 'revenge porn' — whether it is real or generated by artificial intelligence. The 'Take It Down Act,' passed with overwhelming bipartisan congressional support, criminalizes non-consensual publication of intimate images, while also mandating their removal from online platforms.

'With the rise of AI image generation, countless women have been harassed with deepfakes and other explicit images distributed against their will,' Trump said at a signing ceremony in the Rose Garden of the White House. 'And today we're making it totally illegal,' the president said. 'Anyone who intentionally distributes explicit images without the subject's consent will face up to three years in prison.' Websites that fail to remove the images promptly, within 48 hours, will face civil liabilities, Trump said.

First Lady Melania Trump endorsed the bill in early March and attended the signing ceremony in a rare public White House appearance. The First Lady has largely been an elusive figure at the White House since her husband took the oath of office on January 20, spending only limited time in Washington. In remarks at the signing ceremony, she described the bill as a 'national victory that will help parents and families protect children from online exploitation.' 'This legislation is a powerful step forward in our efforts to ensure that every American, especially young people, can feel better protected from their image or identity being abused,' she said.

Deepfakes often rely on artificial intelligence and other tools to create realistic-looking fake videos. They can be used to create falsified pornographic images of real women, which are then published without their consent and proliferate. Some US states, including California and Florida, have laws criminalizing the publication of sexually explicit deepfakes, but critics have voiced concerns that the 'Take It Down Act' grants the authorities increased censorship power. The Electronic Frontier Foundation, a nonprofit focused on free expression, has said the bill gives 'the powerful a dangerous new route to manipulate platforms into removing lawful speech that they simply don't like.' The bill would require social media platforms and websites to have procedures in place to swiftly remove non-consensual intimate imagery upon notification from a victim.

Harassment, bullying, blackmail

An online boom in non-consensual deepfakes is currently outpacing efforts to regulate the technology around the world, due to a proliferation of AI tools, including photo apps that digitally undress women. While high-profile politicians and celebrities, including singer Taylor Swift and Democratic congresswoman Alexandria Ocasio-Cortez, have been victims of deepfake porn, experts say women not in the public eye are equally vulnerable. A wave of AI porn scandals has been reported at schools across US states, with hundreds of teenagers targeted by their own classmates. Such non-consensual imagery can lead to harassment, bullying or blackmail, sometimes causing devastating mental health consequences, experts warn.

Renee Cummings, an AI and data ethicist and criminologist at the University of Virginia, said the bill is a 'significant step' in addressing the exploitation of AI-generated deepfakes and non-consensual imagery. 'Its effectiveness will depend on swift and sure enforcement, severe punishment for perpetrators and real-time adaptability to emerging digital threats,' Cummings told AFP.
At least one mother hailed the new legislation as a step in the right direction. 'It's a very important first step,' Dorota Mani told AFP on Monday, calling it a 'very powerful bill.' As the mother of a young victim, Mani said she felt empowered because 'now I have a legal weapon in my hand, which nobody can say no to.' — AFP


USA Today
20-05-2025
- USA Today
Facial recognition at TSA: What to know before your next airport screening
- TSA is rolling out facial recognition technology at airport security checkpoints nationwide.
- The technology aims to streamline identity verification and improve security.
- Privacy concerns exist regarding data storage and potential misuse.
- Travelers can opt out of facial recognition for an alternative screening process.

The growing use of facial recognition technology at airport security checkpoints is making some travelers worry about their digital privacy. During the screening process at Transportation Security Administration checkpoints across 84 airports nationwide, air passengers will encounter the second-generation Credential Authentication Technology (CAT), according to the agency's website. The technology is expected to roll out to over 400 federalized airports.

This biometric technology, in which a traveler's photo is taken while the officer scans their ID, is meant to streamline the process of verifying that you match your documents, flight status and vetting status. It also assesses digital IDs, if a traveler has one.

"This latest technology helps ensure that we know who is boarding flights," said TSA's Federal Security Director for Pennsylvania and Delaware, Gerardo Spero, in a news release last month. "Credential authentication plays an important role in passenger identity verification. It improves a TSA officer's ability to validate a traveler's photo identification while also identifying any inconsistencies associated with fraudulent travel documents."

However, there are rising concerns around the safety of biometric information storage, stemming from the lack of transparency around the database where the information is being stored. "It's not about the integrity of your face or driver's license, it's about the database where you have no control," said India McKinney, director of federal affairs at the Electronic Frontier Foundation. There is also the risk of misidentification, security breaches, and human or technological error. The screening process also varies at different airports and even terminals, putting the burden on the traveler.

"We are aware of a variety of public concerns related to the accuracy of facial recognition and other biometric technologies and take those concerns seriously," the agency told USA TODAY in an emailed statement.

Here's what travelers should know about TSA's facial recognition technology.

What happens to my data after the security checkpoint screening?
According to the TSA, your information is generally deleted shortly after you pass the screening process and is not used for surveillance purposes. If you opted into the TSA PreCheck Touchless Identity Solution, your information will be deleted 24 hours after your flight's scheduled departure time.

"TSA is committed to protecting passenger privacy," an agency spokesperson said. "Under normal operating conditions TSA facial recognition technology deletes traveler data and images immediately after your identity is verified."

However, the agency added that TSA will temporarily keep photos and data "in rare instances" to test the accuracy of the biometric technology. When this happens, the agency will notify passengers with signs, and only for a limited time.
Travelers can decline without losing their place in line. The agency said it secures all personal data and images, and adheres to DHS and TSA cybersecurity requirements. Nevertheless, all systems, including facial recognition technology, are susceptible to being compromised.

"No cyber system is 100% secure, even if the images aren't used for a long period of time," said Vahid Behzadan, assistant professor in computer science and data science at the University of New Haven. "The fact that they're being imposed on a large group of travelers presents a vulnerability ... if an adversary manages to compromise the end points, then the adversary has access to all the facial images and details, assuming the IDs are also scanned."

Can I decline TSA taking my photo?
Yes, you can opt out of facial recognition technology and receive an alternative ID credential check from the officer instead. "There is no issue and no delay with a traveler exercising their rights to not participate in the automated biometrics matching technology," TSA states on its website.