
Denmark plans to thwart deepfakers by giving everyone copyright over their own features
The Danish government is planning to tackle the issue of AI-generated deepfakes by granting citizens property rights over their likeness and voice.
The proposed legislation would mean that people who find that their features have been used to create a deepfake would have the right to ask the platforms that host the content to take it down, Danish Culture Minister Jakob Engel-Schmidt told CNN on Friday.
Engel-Schmidt believes that 'technology has outpaced legislation' and the proposed law would help to protect artists, public figures and ordinary people from digital identity theft, which he said is now possible with just a few clicks thanks to the power of generative AI.
'I think we should not accept a situation where human beings can be run through, if you would have it, a digital copy machine and misused for all sorts of purposes,' he said.
He cited the example of musical artists who have discovered songs online purporting to be theirs, but which have in fact been made using AI clones of their voice.
One such case involves Canadian singer Celine Dion, who in March warned fans about AI-generated content featuring her voice and likeness that was circulating online.
And in April 2024, more than 200 artists, including Billie Eilish, Kacey Musgraves, J Balvin, Ja Rule, Jon Bon Jovi, the Jonas Brothers, Katy Perry and Miranda Lambert, signed an open letter speaking out against AI-related threats in the music industry.
Engel-Schmidt said he has secured cross-party support for the bill, which he expects to pass this fall.
Once the legislation is passed, he believes a second step would be further legislation imposing fines on companies that fail to comply with requests to remove AI-generated deepfake content.
'We are champions of freedom of speech, we would like everyone to be heard, but we also believe that human beings have the right to say yes and no to them being used by generative AI,' he said.
As for whether he has discussed the proposed legislation with tech companies, Engel-Schmidt said: 'Not yet, but I'm looking forward to it. I think it's in their interest as well to make AI work for humanity, not against, you know, artists, popular figures and ordinary people.'
Athina Karatzogianni, a professor of technology and society at the University of Leicester, England, told CNN that the Danish proposal is one of hundreds of policy initiatives around the world looking to reduce the possible harms associated with the misuse of generative AI.
'Deepfakes can have both individual and social impact, because they can both harm individual rights and also (have) sociopolitical impacts, because they undermine the values that are fundamental to a democracy, such as equality and transparency,' said Karatzogianni.
Related Articles
Yahoo
Congress might block state AI laws for a decade. Here's what it means.
A federal proposal that would ban states and local governments from regulating AI for 10 years could soon be signed into law, as Sen. Ted Cruz (R-TX) and other lawmakers work to secure its inclusion into a GOP megabill ahead of a key July 4 deadline.

Those in favor – including OpenAI's Sam Altman, Anduril's Palmer Luckey, and a16z's Marc Andreessen – argue that a 'patchwork' of AI regulation among states would stifle American innovation at a time when the race to beat China is heating up. Critics include most Democrats, several Republicans, Anthropic's CEO Dario Amodei, labor groups, AI safety nonprofits, and consumer rights advocates. They warn that this provision would block states from passing laws that protect consumers from AI harms and would effectively allow powerful AI firms to operate without much oversight or accountability.

The so-called 'AI moratorium' was squeezed into the budget reconciliation bill, nicknamed the 'Big Beautiful Bill,' in May. It is designed to prohibit states from '[enforcing] any law or regulation regulating [AI] models, [AI] systems, or automated decision systems' for a decade.

Such a measure could preempt state AI laws that have already passed, such as California's AB 2013, which requires companies to reveal the data used to train AI systems, and Tennessee's ELVIS Act, which protects musicians and creators from AI-generated impersonations. The moratorium's reach extends far beyond these examples.

Public Citizen has compiled a database of AI-related laws that could be affected by the moratorium. The database reveals that many states have passed laws that overlap, which could actually make it easier for AI companies to navigate the 'patchwork.' For example, Alabama, Arizona, California, Delaware, Hawaii, Indiana, Montana and Texas have criminalized or created civil liability for distributing deceptive AI-generated media meant to influence elections.
The AI moratorium also threatens several noteworthy AI safety bills awaiting signature, including New York's RAISE Act, which would require large AI labs nationwide to publish thorough safety reports.

Getting the moratorium into a budget bill has required some creative maneuvering. Because provisions in a budget bill must have a direct fiscal impact, Cruz revised the proposal in June to make compliance with the AI moratorium a condition for states to receive funds from the $42 billion Broadband Equity Access and Deployment (BEAD) program. Cruz then released another revision on Wednesday, which he says ties the requirement only to the new $500 million in BEAD funding included in the bill – a separate, additional pot of money. However, close examination of the revised text finds the language also threatens to pull already-obligated broadband funding from states that don't comply.

Sen. Maria Cantwell (D-WA) criticized Cruz's reconciliation language on Thursday, claiming the provision 'forces states receiving BEAD funding to choose between expanding broadband or protecting consumers from AI harms for ten years.'

Currently, the provision is at a standstill. Cruz's initial revision passed the procedural review earlier this week, which meant that the AI moratorium would be included in the final bill. However, reporting today from Punchbowl News and Bloomberg suggests that talks have reopened, and conversations on the AI moratorium's language are ongoing.

Sources familiar with the matter tell TechCrunch they expect the Senate to begin heavy debate this week on amendments to the budget, including one that would strike the AI moratorium. That will be followed by a vote-a-rama – a series of rapid votes on the full slate of amendments. Politico reported Friday that the Senate is slated to take an initial vote on the megabill on Saturday.
Chris Lehane, chief global affairs officer at OpenAI, said in a LinkedIn post that the 'current patchwork approach to regulating AI isn't working and will continue to worsen if we stay on this path.' He said this would have 'serious implications' for the U.S. as it races to establish AI dominance over China. 'While not someone I'd typically quote, Vladimir Putin has said that whoever prevails will determine the direction of the world going forward,' Lehane wrote.

OpenAI CEO Sam Altman shared similar sentiments this week during a live recording of the tech podcast Hard Fork. He said while he believes some adaptive regulation that addresses the biggest existential risks of AI would be good, 'a patchwork across the states would probably be a real mess and very difficult to offer services under.' Altman also questioned whether policymakers were equipped to handle regulating AI when the technology moves so quickly. 'I worry that if…we kick off a three-year process to write something that's very detailed and covers a lot of cases, the technology will just move very quickly,' he said.

But a closer look at existing state laws tells a different story. Most state AI laws that exist today aren't far-reaching; they focus on protecting consumers and individuals from specific harms, like deepfakes, fraud, discrimination, and privacy violations. They target the use of AI in contexts like hiring, housing, credit, healthcare, and elections, and include disclosure requirements and algorithmic bias safeguards.

TechCrunch has asked Lehane and other members of OpenAI's team if they could name any current state laws that have hindered the tech giant's ability to progress its technology and release new models. We also asked why navigating different state laws would be considered too complex, given OpenAI's progress on technologies that may automate a wide range of white-collar jobs in the coming years.
TechCrunch asked similar questions of Meta, Google, Amazon, and Apple, but has not received any answers.

'The patchwork argument is something that we have heard since the beginning of consumer advocacy time,' Emily Peterson-Cassin, corporate power director at internet activist group Demand Progress, told TechCrunch. 'But the fact is that companies comply with different state regulations all the time. The most powerful companies in the world? Yes. Yes, you can.'

Opponents and cynics alike say the AI moratorium isn't about innovation – it's about sidestepping oversight. While many states have passed regulation around AI, Congress, which moves notoriously slowly, has passed zero laws regulating AI.

'If the federal government wants to pass strong AI safety legislation, and then preempt the states' ability to do that, I'd be the first to be very excited about that,' said Nathan Calvin, VP of state affairs at the nonprofit Encode – which has sponsored several state AI safety bills – in an interview. 'Instead, [the AI moratorium] takes away all leverage, and any ability, to force AI companies to come to the negotiating table.'

One of the loudest critics of the proposal is Anthropic CEO Dario Amodei. In an opinion piece for The New York Times, Amodei said 'a 10-year moratorium is far too blunt an instrument.' 'AI is advancing too head-spinningly fast,' he wrote. 'I believe that these systems could change the world, fundamentally, within two years; in 10 years, all bets are off. Without a clear plan for a federal response, a moratorium would give us the worst of both worlds — no ability for states to act, and no national policy as a backstop.' He argued that instead of prescribing how companies should release their products, the government should work with AI companies to create a transparency standard for how companies share information about their practices and model capabilities.

The opposition isn't limited to Democrats.
There's been notable opposition to the AI moratorium from Republicans who argue the provision stomps on the GOP's traditional support for states' rights, even though it was crafted by prominent Republicans like Cruz and Rep. Jay Obernolte. These Republican critics include Senator Josh Hawley (R-MO), who is concerned about states' rights and is working with Democrats to strip it from the bill. Senator Marsha Blackburn (R-TN) also criticized the provision, arguing that states need to protect their citizens and creative industries from AI harms. Rep. Marjorie Taylor Greene (R-GA) even went so far as to say she would oppose the entire budget if the moratorium remains.

Republicans like Cruz and Senate Majority Leader John Thune say they want a 'light touch' approach to AI governance. Cruz also said in a statement that 'every American deserves a voice in shaping' the future. However, a recent Pew Research survey found that most Americans seem to want more regulation around AI. The survey found that about 60% of U.S. adults and 56% of AI experts say they're more concerned that the U.S. government won't go far enough in regulating AI than they are that the government will go too far. Americans also largely aren't confident that the government will regulate AI effectively, and they are skeptical of industry efforts around responsible AI.

This article has been updated to reflect newer reporting on the Senate's timeline to vote on the bill.

Associated Press
Spotify Users Vote to Sell Their Data - Unwrapped DAO and SoloAI Close Historic AI Dataset Deal
Spotify users tokenize their listening data, vote on monetization, and sell it to power AI-generated music with VTuber agents.

GEORGE TOWN, KY / ACCESS Newswire / June 27, 2025 / In a groundbreaking move for user data ownership, Unwrapped DataDAO, a community-led collective of Spotify users, has successfully voted to license its first dataset to SoloAI, an AI-native platform powering generative entertainment through cutting-edge AI agents.

Powering the AI economy with user-owned data

Members of Unwrapped pooled their Spotify listening data through the platform, earning tokenized rewards. This pool of data was then governed on-chain, with contributors voting to approve a deal with SoloAI and collectively monetizing their dataset for the first time. The vote passed with an overwhelming 99.48% majority.

'Spotify made over $11 billion last year, built on our data,' said an Unwrapped DataDAO contributor. 'This time, we got paid.'

SoloAI is at the forefront of AI-driven entertainment, offering users, regardless of musical background, the ability to effortlessly compose, share, and monetize tracks using AI. The platform integrates AI-powered virtual artists, known as VTuber agents, that can compose, perform, and interact with audiences in real time.

'We're thrilled to partner with Unwrapped DAO,' said a SoloAI spokesperson. 'Access to high-quality, real-time, user-permissioned data is crucial for training our AI models. Collaborating with data collectives like Unwrapped ensures our platform evolves with ethical and innovative data sourcing at its core.'

The transaction was executed using Vana's decentralized infrastructure, where users receive VRC-20 Data Tokens in exchange for pooling their data. These tokens provide contributors with programmable control, value share, and on-chain voting rights. They can also be used to grant access to the dataset.
'With VRC-20 tokens, we are creating a new and important economic primitive in the world of AI: turning private user data into a publicly tradable asset,' said Art Abal, Managing Director of the Vana Foundation. 'This unlocks a new era of possibilities for AI advancement and data ownership alike.'

This collaboration marks a milestone in the rise of Data Collectives: decentralized groups built around shared user data. Instead of being passively mined for data, users now organize, govern, and profit from their data together.

Contact Information: Art Abal, Managing Director, Vana Foundation
SOURCE: Vana Foundation press release


The Verge
Is there a Duolingo for computer language?
Posted Jun 27, 2025 at 7:33 PM UTC
Because, as this video from YouTuber @xiaomanyc suggests, we might need to learn how to speak hexadecimal so the AI overlords will like us.