
Latest news with #TakeItDown

Kids are making deepfakes of each other, and laws aren't keeping up

Yahoo · 07-07-2025

Last October, a 13-year-old boy in Wisconsin used a picture of his classmate celebrating her bat mitzvah to create a deepfake nude he then shared on Snapchat. This is not an isolated incident. Over the past few years, there has been case after case of school-age children using deepfakes to prank or bully their classmates. And it keeps getting easier to do.

When they emerged online eight years ago, deepfakes were difficult to make. Advances in generative artificial intelligence have since put the tools in the hands of the masses. One troubling consequence is the prevalence of deepfake apps among young users. 'If we would have talked five or six years ago about revenge porn in general, I don't think that you would have found so many offenders were minors,' said Rebecca Delfino, a law professor at Loyola Marymount University who studies deepfakes.

Federal and state legislators have sought to tackle the scourge of nonconsensual intimate image (NCII) abuse, sometimes referred to as 'revenge porn,' though advocates prefer the former term. Laws criminalizing the nonconsensual distribution of intimate images — for authentic images, at least — are in effect in every U.S. state and Washington, D.C., and last month President Donald Trump signed a similar measure into law, known as Take It Down. But unlike the federal measure, many of the state laws don't apply to explicit AI-generated deepfakes. Fewer still appear to directly grapple with the fact that perpetrators of deepfake abuse are often minors.

Fifteen percent of students reported knowing about AI-generated explicit images of a classmate, according to a survey released in September by the Center for Democracy and Technology (CDT), a center-left think tank. Students also reported that girls were much more likely to be depicted in explicit deepfakes. According to CDT, the findings show that 'NCII, both authentic and deepfake, is a significant issue in K-12 public schools.'
'The conduct we see minors engaged in is not all that different from the pattern of cruelty, humiliation and exploitation and bullying that young people have always done to each other,' said Delfino. 'The difference lies in not only the use of technology to carry out some of that behavior, but the ease with which it is disseminated.'

Policymakers at the state and federal level have come at perpetrators of image-based sexual abuse 'hard and fast,' no matter their age, Delfino said. The reason is clear, she said: The distribution of nonconsensual images can have long-lasting, serious mental health harms for the target of abuse. Victims can be forced to withdraw from life online because of the prevalence of nonconsensual imagery, and image-based sexual abuse has mental health impacts on survivors similar to those of offline sexual violence.

Delfino said that under most existing laws, youth offenders are likely to be treated similarly to minors who commit other crimes: They can be charged, but prosecutors and courts would likely take their age into account in doling out punishment. Yet while some states have developed penal codes that factor a perpetrator's age into their punishment, including by imposing tiered penalties that attempt to spare first-time or youth offenders from incarceration, most do not. While most agree there should be consequences for youth offenders, there's less consensus about what those consequences should be — and a push for reeducation over extreme charges.

A 2017 survey by the Cyber Civil Rights Initiative (CCRI), a nonprofit that combats online abuse, found that people who committed image-based sexual abuse reported the threat of jail time as one of the strongest deterrents against the crime. That's why the organization's policy recommendations have always pushed for criminalization, said Mary Anne Franks, a law professor at George Washington University who leads the initiative.
Many states have sought to address the issue of AI-generated child sexual abuse material, which covers deepfakes of people under 18, by modifying existing laws banning what is legally known as child pornography. These laws tend to have more severe punishments: felonies instead of misdemeanors, high minimum jail time or significant fines. For example, Louisiana mandates a minimum five-year jail sentence no matter the age of the perpetrator.

While incidents of peer-on-peer deepfake abuse are increasingly cropping up in the news, information on what criminal consequences youth offenders have faced remains scarce. There is often a significant amount of discretion involved in how minors are charged. Generally, juvenile justice falls under state rather than federal law, giving local officials leeway to impose punishments as they see fit. If local prosecutors are forced to decide between charging minors under severe penalties aimed at adults or declining to prosecute, most will likely choose the latter, said Lindsay Hawthorne, the communications coordinator at Enough Abuse, a Massachusetts-based nonprofit fighting child sexual abuse. But declining to prosecute throws away an opportunity to teach youth about the consequences of their actions and to prevent reoffending. Charges that come at a prosecutor's discretion are also more likely to disproportionately criminalize youth of color and LGBTQ+ youth, she said.

Delfino said that in an ideal case, a judge in juvenile court would weigh many factors in sentencing: the severity of the harm caused by deepfake abuse, the intent of the perpetrator, and adolescent psychology. Experts say that building these factors directly into policy can help better deal with offenders who may not understand the consequences of their actions and allow for different enforcement mechanisms for people who say they weren't seeking to cause harm.
For example, recent laws passed this session in South Carolina and Florida have 'proportional penalties' that take into account circumstances including age, intent and prior criminal history. Both laws mirrored model legislation written by MyOwn Image, a nonprofit dedicated to preventing technology-facilitated sexual violence. Founded by image-based sexual abuse survivor Susanna Gibson, the organization has advocated for strengthened state laws banning nonconsensual distribution of intimate images, bringing a criminal justice reform lens to the debate.

Under the Florida law, which took effect May 22, offenders who profit from nonconsensual intimate image distribution are charged with felonies, even for a first offense. But first-time offenders who use intimate images to harass victims are charged with a misdemeanor; if they do it again, they are charged with a felony. This avoids 'sweeping criminalization of people who may not fully understand the harm caused by their actions,' Will Rivera, managing director at MyOwn Image, said in a statement.

South Carolina's newly passed law addressing AI-generated child sexual abuse material, meanwhile, explicitly states that minors with no prior related criminal record should be referred to family court, and recommends behavioral health counseling as part of the adjudication. A separate South Carolina law banning nonconsensual distribution of intimate imagery also has tiered charges depending on intent and previous convictions.

Experts are mostly united in believing that incarcerating youth offenders would not solve the problem of image-based sexual abuse. Franks said that while her group has long recommended criminal penalties as part of the answer, there need to be more policy solutions for youth offenders than just threatening jail time.
Amina Fazlullah, head of tech policy advocacy at Common Sense Media, said that laws criminalizing NCII and abusive deepfakes need to be accompanied by digital literacy and AI education measures. That could fill a massive gap: according to Stanford, there is currently no comprehensive research on how many schools specifically teach students about online exploitation. Since most teens aren't keeping abreast of criminal codes, AI literacy education could teach young users what crosses the line into illegal behavior and provide resources for victims of nonconsensual intimate imagery to seek redress. Digital literacy could also emphasize ethical use of technology and create space for conversations about app use. Hawthorne noted that Massachusetts's law banning deepfakes, which went into effect last year, directs adolescents who violate it to take part in an education program that explains the laws around sexting and its impacts.

Ultimately, Franks said, the behavior that underlies deepfake abuse isn't new, so responses don't need to be rewritten from scratch. 'We should just stick to the things that we know, which don't change with technology, which is consent, autonomy, agency, safety. Those are all things that should be at the heart of what we talk to kids about,' she said. As with abstinence-only education, shaming and scaring kids about common practices like sexting is not an effective way to prevent abuse, Franks said, and can discourage kids from seeking help from adults when they are being exploited. Franks noted that parents, too, have the power to instill in their children agency over their own images every time they take a photo.

She also said there are myriad other ways to regulate the ecosystem around sexually explicit deepfakes. After all, most policy around deepfakes addresses harm already done, and laws like the federal Take It Down Act put the burden on the victim to request the removal of their images from online platforms.
Part of addressing the problem is making it more difficult to create and rapidly distribute nonconsensual imagery — and keeping tools for deepfakes out of kids' hands, experts said. One avenue for change that advocates see is applying pressure on companies whose tools are used to create nonconsensual deepfakes. Third parties that help distribute them are also becoming a target. After a CBS News investigation, Meta took action to remove advertisements for so-called 'nudify apps' on its platforms. Franks also suggested app stores could delist them.

Payment processors, too, have a lot of power over the ecosystem. When Visa, Mastercard and Discover cut off payments to PornHub after a damning New York Times report revealed how many nonconsensual videos it hosted, the largest pornography site in the world deleted everything it couldn't confirm was above board — nearly 80 percent of its total content. Last month, Civitai cracked down on generative AI models tailored around real people after payment processors refused to work with the company. This followed extensive reporting by tech news site 404 Media on the image platform's role in the spread of nonconsensual deepfakes.

And of course, Franks said, revamping the liability protections digital services enjoy under Section 230 could force tech companies' hands, compelling them to be more proactive about preventing digital sexual violence.

A version of this article first appeared in Tech Policy Press. The post Kids are making deepfakes of each other, and laws aren't keeping up appeared first on The 19th.


Perspective: The Trojan horse inside the big, beautiful bill

Yahoo · 28-05-2025 · Health

Tucked away in President Donald Trump's 'big, beautiful bill,' which recently passed the House of Representatives, is an unforeseen Trojan horse. It is not even clear whether Trump is aware of this last-minute insertion by House Republicans, but one can only hope he is made aware and goes on record as opposing it. It goes completely against his efforts to help make the internet a safer place for everyone, especially our children.

Buried in Section 43201(c) of the 1,118-page budget bill is one little sentence: 'Subsection (c) states that no state or political subdivision may enforce any law or regulation regulating artificial intelligence models, artificial intelligence systems, or automated decision systems during the 10-year period beginning on the date of the enactment of this Act.' What incredible mischief that one little sentence produces.

For example, Utah has been at the forefront of policy efforts to shield children from the worst effects of our internet-centered culture. It was one of the very first states to insist that online porn sites implement age verification measures, and this year Utah required that app stores also verify users' age. Utah also legislated protections for its citizens when they interact with so-called 'mental health chatbots.' Utah's stance can only be applauded — but the clause in the federal budget bill would imperil the implementation of all of the legal protections put in place by the state. Utah would be prohibited by the federal government from enforcing any of its AI-related laws if this sentence were preserved in the reconciliation version of the bill.

As co-editor of and contributor to "The Oxford Handbook on AI Governance," I condemn this mischief in the strongest possible terms. It is not simply irresponsible; it actually prevents state governments from acting responsibly to safeguard their citizens from the increasingly visible downsides of unfettered AI deployment.
In that sense, the insertion of this one sentence is patently malicious. It is clear that our tech overlords would like nothing better than to be unfettered in pursuit of their objectives for AI, regardless of any harm that might come to Americans from that pursuit. As one commentary by J.B. Branch and Ilana Beller put it, 'The provision is a message to the states: Sit down and shut up while Big Tech writes its own rules or continues to lobby Congress to have none at all.'

And the failure of Congress to write any rules at all, except the Take It Down bill championed by Trump and his wife, Melania, is equally unconscionable. States have been forced into action by Congress' stubborn inaction, and they have rightly stepped into that vacuum. In addition to Utah, numerous states have tried to hold AI-promoting companies responsible for the behavior of their products. Various states have laws against AI-generated deception that might influence elections; other states require that citizens be notified when a decision about, say, mortgage approval has been made by AI, and that they be provided recourse to contest such a decision. Some states prohibit AI voice and image cloning of citizens; others prohibit medical insurance denials by AI; numerous states have legislation setting standards for driving by autonomous vehicles, and so on. Which of these would anyone wish to put under a federal moratorium?

The rejoinder, of course, is that AI companies feel hamstrung by the patchwork of state laws that currently exists. Tough, I say. Congress must finally step up to the plate, rather than forfeiting the game. Maybe tech companies' bellyaching about that patchwork is all that will spur Congress to get the job done. And even if Congress acts, it is likely incapable of delivering more than a lowest common denominator on AI, so states will have to be the ones to raise the bar that Congress won't. And that's good.
State-level democracy has shown itself to be much more vibrant than what currently exists at the federal level. Let the states be the laboratory where the best practices in AI regulation can be honed and then copied nationwide. AI absolutely needs guardrails; if Congress is currently impotent, let the states do the job that the federal government seemingly cannot. The 10th Amendment indicates states are well within their rights to do so.

And make no mistake: those guardrails are needed more than ever as AI capabilities increase. There are new AI-based threats coming into focus that we dare not wait another decade to tackle. The family of Sewell Setzer III, a 14-year-old boy who shot himself after an AI chatbot encouraged him to do so, has just won a major court victory against the chatbot's maker. The company protested that it should be protected from liability on a First Amendment claim, a claim a federal judge has now dismissed, enabling the family's wrongful death suit against the company to move forward.

One of the directors of the AI Now Institute summed up the situation well: 'The proposal for a sweeping moratorium on state AI-related legislation really flies in the face of common sense. We can't be treating the industry's worst players with kid gloves while leaving everyday people, workers and children exposed to egregious forms of harm.' Amen to that.

All senators and congresspersons, regardless of their partisan affiliation, should work together to strip this one sentence from the reconciliation budget bill. Contact yours right away.

Putin shocks Trump, says 'We like Melania better'; US President then shares his hilarious response

Economic Times · 20-05-2025 · Politics

US President Donald Trump let his wife Melania co-sign legislation Monday to outlaw artificial intelligence-generated porn with real people's faces. This came moments after claiming that Russian president Vladimir Putin had gushed about the first lady during a high-stakes phone call about the Ukraine war.

Trump gave an insight into the discussion between himself and the Russian leader during their two-hour call. While the focus was on peace negotiations for the war in Ukraine and Russian-US trade, Trump said Putin had also brought up the first lady. Speaking at a White House Rose Garden signing ceremony for the 'Take It Down' bill, Trump told the crowd, 'Putin just said, they [Russians] respect your wife a lot.' When Trump responded, 'What about me?' he said Putin had replied, 'They like Melania better.'

Trump's light-hearted story added a personal touch to his interactions with Putin. He also mentioned spending two and a half hours talking to Putin about serious issues, and noted that progress was made in their discussions. Trump addressed the ongoing conflict, noting that 5,000 young people are dying each week, and expressed hope that the discussions underway would lead to meaningful improvements. He also engaged with leaders from European nations in an effort to tackle the crisis collaboratively. He took a moment to thank First Lady Melania Trump for her leadership on critical issues, praising her dedication and compassion.
He highlighted her efforts in securing $25 million to provide housing and support for youth aging out of foster care. Trump acknowledged the presence and support of several senators and congressmen at the event, expressing gratitude for their commitment to addressing these urgent matters. Also in attendance was Linda Yaccarino, CEO of X. Trump commended her leadership and asked her to stand for recognition, noting that her involvement reflected the importance of public-private collaboration in reaching shared goals.

A key highlight of the event was Trump's signing of the Take It Down Act into law. The legislation targets the non-consensual distribution of explicit images, including deepfakes — AI-generated content often used to harass and exploit individuals. Trump stressed the urgent need for this law, emphasizing the harm caused by such abuses and the importance of accountability. The act aims to prevent further harm and protect individuals from technological misuse.

The Take It Down Act criminalizes the distribution of non-consensual intimate imagery — including 'revenge porn' that features real images as well as artificial intelligence-generated photos. As Trump, 78, signed the legislation, he passed the bill and a pen to his wife and asked for her signature in a reflection of her advocacy. 'She deserves to sign it,' the president said, adding that 'America is blessed to have such a dedicated and compassionate first lady.' Melania Trump is believed to be the first first lady to sign a piece of legislation alongside a sitting president.
'This legislation is a powerful step forward in our efforts to ensure that every American, especially young people, can feel better protected from their image or identity being abused,' said Melania Trump, 55.'Artificial intelligence and social media are the digital candy for the next generation — sweet, addictive and engineered to have an impact on the connectivity development of our children,' she added. 'But unlike sugar, these new technologies can be weaponized, shape beliefs and sadly, affect emotions.' ALSO READ: FBI chief Kash Patel, accused of spending more time at nightclubs than office, makes big announcement 'We've all heard about deep fakes. I have them all the time, but nobody does anything,' said the president during his remarks. 'I ask [Attorney General] Pam [Bondi], 'Can you help me, Pam?' She says, 'No, I'm too busy, too busy doing other things, don't worry, you'll survive.' But a lot of people don't survive.'Ahead of Monday's bill signing, 49 states had laws barring 'revenge porn' on the books, with South Carolina the lone exception. The new federal law passed the House 409-2 and the Senate by unanimous consent, becoming one of the first pieces of legislation criminalizing aspects of face up to two years in federal prison if the images feature an adult or three years if they depict a minor. Websites must remove content within 48 hours of notification that the images violated the law, with enforcement delegated to the Federal Trade Commission.

Putin shocks Trump, says 'We like Melania better'; US President then shares his hilarious response

Time of India

20-05-2025


US President Donald Trump let his wife Melania co-sign legislation Monday to outlaw artificial intelligence-generated porn featuring real people's faces. The move came moments after he claimed that Russian President Vladimir Putin had gushed about the first lady during a high-stakes phone call about the Ukraine war. Trump gave an insight into his conversation with the Russian leader earlier in the day. While the call focused on peace negotiations for the war in Ukraine and on Russian-US trade, Trump said Putin had also brought up First Lady Melania. Speaking at a White House Rose Garden signing ceremony for the "Take It Down" bill, Trump told the crowd: 'Putin just said, they [Russians] respect your wife a lot. I said, "What about me?" They like Melania better.' Trump's light-hearted story added a personal touch to his account of the exchange. He said the two leaders spent two and a half hours discussing serious issues and that progress was made. Addressing the ongoing conflict, Trump noted that 5,000 young people are dying each week and expressed hope that the discussions underway would lead to meaningful improvements. He has also engaged with leaders from European nations in an effort to tackle the crisis collaboratively. 
He took a moment to thank First Lady Melania Trump for her leadership on critical issues, praising her dedication and compassion and highlighting her efforts in securing $25 million to provide housing and support for youth aging out of foster care. Trump acknowledged the senators and congressmen present at the event, expressing gratitude for their commitment to addressing these urgent matters. Also in attendance was Linda Yaccarino, CEO of X; Trump commended her leadership and asked her to stand for recognition, noting that her involvement reflected the importance of public-private collaboration. The centerpiece of the event was Trump's signing of the Take It Down Act into law. The legislation criminalizes the distribution of non-consensual intimate imagery, including 'revenge porn' featuring both real images and artificial intelligence-generated photos and videos. Trump stressed the urgent need for the law, emphasizing the harm caused by such abuse and the importance of accountability. After Trump, 78, signed the legislation, he passed the bill and a pen to his wife and asked for her signature in a reflection of her advocacy. 'She deserves to sign it,' the president said, adding that 'America is blessed to have such a dedicated and compassionate first lady.' Melania Trump is believed to be the first first lady to sign a piece of legislation alongside a sitting president. 
'This legislation is a powerful step forward in our efforts to ensure that every American, especially young people, can feel better protected from their image or identity being abused,' said Melania Trump, 55. 'Artificial intelligence and social media are the digital candy for the next generation — sweet, addictive and engineered to have an impact on the cognitive development of our children,' she added. 'But unlike sugar, these new technologies can be weaponized, shape beliefs and sadly, affect emotions.' 'We've all heard about deep fakes. I have them all the time, but nobody does anything,' said the president during his remarks. 'I ask [Attorney General] Pam [Bondi], "Can you help me, Pam?" She says, "No, I'm too busy, too busy doing other things, don't worry, you'll survive." But a lot of people don't survive.' Ahead of Monday's bill signing, 49 states had laws barring 'revenge porn' on the books, with South Carolina the lone exception. The new federal law passed the House 409-2 and the Senate by unanimous consent, becoming one of the first pieces of federal legislation criminalizing aspects of AI. Perpetrators face up to two years in federal prison if the images feature an adult, or three years if they depict a minor. Websites must remove flagged content within 48 hours of being notified that it violates the law, with enforcement delegated to the Federal Trade Commission.
