
Deepfakes Are Spreading — Can The Law Keep Up?
A comparison of an original and deepfake video of Facebook CEO Mark Zuckerberg. (Elyse Samuels/The Washington Post via Getty Images)
When an explicit AI-generated video of Taylor Swift went viral in January, platforms were slow to take it down, and the law offered no clear answers.
With no organized regulatory structure in place for victims, famous or not, states are scrambling to fill the void, some targeting political ads, others cracking down on pornographic content or identity fraud. The result is a patchwork of laws, enforced differently across jurisdictions, that draw varying lines between harm and protected speech.
In April, prosecutors in Pennsylvania invoked a newly enacted law to charge a man in possession of 29 AI-generated images of child sexual abuse — one of the first known uses of a state law to prosecute synthetic child abuse imagery.
What began as a fringe concern straight out of dystopian fiction (software that could persuasively mimic faces, voices and identities) is now a central issue in legal, political and national security debates. Just this year, over 25 deepfake-related bills have been enacted across the U.S., according to Ballotpedia. As laws finally begin to narrow the gap, the pushback is escalating too.
Consumer-grade diffusion models (used to create realistic media of people, mimic political figures for misinformation and facilitate identity fraud) are spreading through servers and subreddits at a speed and scale that make them difficult for regulators to track, legislate against and take down.
'We almost have to create our own army,' said Laurie Segall, CEO of Mostly Human Media. 'That army includes legislation, laws and accountability at tech companies, and unfortunately, victims speaking up about how this is real abuse.'
'It's not just something that happens online,' added Segall. 'There's a real impact offline.'
Many recent laws pertain directly to the accessibility of the technology.
Tennessee's new felony statute criminalizes the creation and dissemination of nonconsensual sexual deepfakes, carrying up to 15 years in prison. In California, where a record eight bills on AI-generated content passed in a single month, legislators have been attempting to regulate a wide range of related issues, from election-related deepfakes to how Hollywood uses deepfake technology.
These measures also reflect the increasing use of AI-generated imagery in crimes, often involving minors, and often on mainstream platforms, but the legal terrain remains a confusing minefield for victims.
The same deepfake image might be criminal in one state but dismissed in another, underscoring the growing chaos and discrepancies of state-level governance in the absence of federal standards.
'A young woman whose Instagram profile photo has been used to generate an explicit image would likely have legal recourse in California, but not Texas,' notes researcher Kaylee Williams. 'If the resulting image isn't considered realistic, it may be deemed criminal in Indiana, but not in Idaho or New Hampshire.'
If the person who generated the image claims to have done so out of 'affection' rather than malice, the victim could seek justice in Florida, but not Virginia, Williams adds.
Intimate deepfakes are the latest iteration of the dehumanization of women and girls in the digital sphere, says Williams, calling it 'a rampant problem that Congress has thus far refused to meaningfully address.'
According to a recent study by child exploitation prevention nonprofit Thorn, one in 10 teens say they know someone who had deepfake nude imagery created of them, while one in 17 say they have been a direct victim of this form of abuse.
The harm also remains perniciously consistent: a 2019 study from cybersecurity firm Deeptrace found that a whopping 96% of online deepfake video content was nonconsensual pornography.
Despite the widespread harm, the recent legislative push has met with notable resistance.
In California, a lawsuit filed last fall by right-wing content creator Chris Kohls, known as Mr Reagan on X, drew support from The Babylon Bee, Rumble and Elon Musk's X. Kohls challenged the state's enforcement of a deepfake law after posting an AI-generated video parodying a Harris campaign ad, arguing that the First Amendment protects his speech as satire.
The plaintiffs contend that laws targeting political deepfakes, particularly those aimed at curbing election misinformation, risk silencing legitimate satire and expression. A federal judge agreed, at least partially, issuing an injunction that paused enforcement of one of the California laws, warning that it 'acts as a hammer instead of a scalpel.'
Theodore Frank, an attorney for Kohls, said in a statement they were 'gratified that the district court agreed with our analysis.'
Meanwhile, Musk's X in April filed a separate suit against Minnesota over a similar measure, contending that the law infringes on constitutional rights and violates federal and state free speech protections.
'This system will inevitably result in the censorship of wide swaths of valuable political speech and commentary,' the lawsuit states.
'Rather than allow covered platforms to make their own decisions about moderation of the content at issue here, it authorizes the government to substitute its judgment for those of the platforms,' it argues.
This tug-of-war remains a contentious topic in Congress. On May 22, the House of Representatives passed the 'One Big Beautiful Bill,' which includes a sweeping 10-year federal moratorium on state-level AI laws.
Legal scholar and Emory University professor Jessica Roberts says Americans are left entirely vulnerable without state involvement. 'AI and related technologies are a new frontier, where our existing law can be a poor fit,' said Roberts.
'With the current congressional gridlock, disempowering states will effectively leave AI unregulated for a decade. That gap creates risks — including bias, invasions of privacy and widespread misinformation.'
Meanwhile, earlier this month, President Trump signed the Take It Down Act, which criminalizes the distribution of non-consensual explicit content — including AI-generated images — and mandates rapid takedown protocols by platforms. It passed with broad bipartisan support, but its enforcement mechanisms remain unclear at best.
Financial institutions are increasingly sounding the alarm over identity fraud.
In a speech in March, Michael S. Barr of the Federal Reserve warned that 'deepfake technology has the potential to supercharge impersonation fraud and synthetic identity scams.'
There's merit to that: In 2024, UK-based engineering giant Arup was defrauded out of $25 million via a deepfake video call with what appeared to be a legitimate senior executive. And last summer, Ferrari executives reportedly received WhatsApp voice messages mimicking their CEO's voice, down to the regional dialect.
Against that evolving threat landscape, the global regulatory conversation remains contentious, with no clear consensus.
In India, where deepfakes currently slip through glaring legal lacunae, there is growing demand for targeted legislation. The European Union's AI Act takes a more unified approach, classifying deepfakes as high-risk and mandating clear labeling. China has gone even further, requiring digital watermarks on synthetic media and directing platforms to swiftly remove harmful content — part of its broader strategy of centralized content control.
However, enforcement across the board continues to be difficult and elusive, especially when the source code is public, servers are offshore, perpetrators operate anonymously and the ecosystem continues to enable rampant harm.
In Iowa, Dubuque County Sheriff Joseph L. Kennedy was reportedly dealing with a local case in which high school boys shared AI-generated explicit pictures of their female classmates. The tech was rudimentary, but convincing enough to cause serious reputational damage. 'Sometimes, it just seems like we're chasing our tails,' Kennedy told the New York Times.
That sense of disarray may feel familiar to regulators as they look to govern a future whose rules are constantly being written and rewritten in code.
In many ways, the deepfake issue appears increasingly Kafkaesque: a bewildering maze of shifting identities, elusive culprits, a tech bureaucracy sprawling beyond regulatory reach — and laws that are always lagging at least a few steps behind.