EU AI Act doesn't do enough to protect artists' copyright, groups say


Euronews · 5 days ago
As the European Artificial Intelligence Act (AI Act) comes into force, groups representing artists say many loopholes still need to be fixed for them to thrive in a creative world increasingly dominated by AI.
The AI Act, celebrated for being the first comprehensive legislation to regulate AI globally, is riddled with problems, these organisations say.
Groups like the European Composer and Songwriter Alliance (ECSA) and the European Grouping of Societies of Authors and Composers (GESAC) argue that it fails to protect creators whose works are used to train generative AI models.
Without a clear way for creators to opt out or to get paid when tech companies use their music, books, movies, and other art to train AI models, the groups say artists' work is continually at risk.
'The work of our members should not be used without transparency, consent, and remuneration, and we see that the implementation of the AI Act does not give us that,' Marc du Moulin, ECSA's secretary general, told Euronews Next.
'Putting the cart before the horse'
The purpose of the AI Act is to make sure AI stays 'safe, transparent, traceable, non-discriminatory and environmentally friendly,' the European Commission, the European Union's executive body, says in an explainer on the law.
The law classifies AI systems by four levels of risk: minimal, limited, high, and unacceptable. Systems posing unacceptable risk are already banned, for example AI that manipulates people or conducts social scoring, ranking individuals based on behaviour or economic status.
Most generative AI falls into a minimal risk category, the Commission says. The providers of those technologies still face some requirements, such as publishing summaries of the copyrighted data used to train their models.
Under the EU's copyright laws, companies are allowed to use copyrighted materials for text and data mining, as they do in AI training, unless a creator has 'reserved their rights,' du Moulin said.
Du Moulin said it's unclear how an artist can go about opting out of their work being used by AI companies.
'This whole conversation is putting the cart before the horse. You don't know how to opt out, but your work is already being used,' he said.
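The Act itself does not spell out a standard opt-out mechanism; in practice, rights reservations are often signalled through machine-readable files such as robots.txt, which some AI crawlers say they honour. As a hedged illustration only, the Python sketch below shows how such a reservation is expressed and checked: the crawler names are the ones publicly documented by OpenAI, Google, and Common Crawl, the site and URL are hypothetical, and compliance is entirely voluntary.

```python
# A sketch of a machine-readable rights reservation, not an official AI Act mechanism.
# The crawler names (GPTBot, Google-Extended, CCBot) are publicly documented by their
# operators; whether a crawler honours robots.txt at all is voluntary.
from urllib import robotparser

# What a publisher's robots.txt might look like if it opts out of AI training:
ROBOTS_TXT = """\
User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: CCBot
Disallow: /
"""

parser = robotparser.RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

# Crawlers that respect the file are refused; unlisted agents may still fetch.
for agent in ("GPTBot", "Google-Extended", "CCBot", "SomeOtherBot"):
    allowed = parser.can_fetch(agent, "https://example.com/lyrics")
    print(f"{agent}: {'allowed' if allowed else 'blocked'}")
```

The sketch also shows the limit du Moulin describes: a reservation like this protects only sites the artist controls, only from the moment it is published, and only against crawlers that identify themselves and choose to comply.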
The EU's Code of Practice for general-purpose AI (GPAI), a voluntary agreement for AI companies, asks providers to commit to a copyright policy, put safeguards in place to avoid infringing rights, and designate a place to receive and process complaints.
Signatories so far include major tech and AI companies such as Amazon, Google, Microsoft, and OpenAI.
AI providers have to respect copyright laws, the Commission says
The additional transparency requirements under the AI Act do not give artists clarity on who has already used their material and when, du Moulin added, making it difficult to claim any payment for work that has already been scraped to train AI models.
'Even if the AI Act has some good legal implications, it only works for the future – it will not be retroactive,' du Moulin said.
'So everything which has been scraped already … it's a free lunch for generative AI providers who did not pay anything'.
Adriana Moscono, GESAC's general manager, said some of its members tried to opt out and seek licences for their content by sending letters and emails to individual AI companies, but were not successful.
'There was no answer,' Moscono told Euronews Next. 'There was absolute denial of the recognition of … the need to respect copyright and to get a license. So please, European Commission, encourage licensing'.
Thomas Regnier, a Commission spokesperson, said in a statement to Euronews Next that AI providers have to respect rights holders when they carry out text and data mining, and that any infringements can be settled privately.
The AI Act 'in no way affects existing EU copyright laws,' Regnier continued.
Mandate licence negotiations, groups ask
Du Moulin and Moscono are asking the Commission to urgently clarify the rules around opting out and copyright protection in the law.
'The code of practice, the template and the guidelines, they don't provide us any capacity to improve our situation,' Moscono said. 'They're not guaranteeing … a proper application of the AI Act'.
The advocates said the Commission could also mandate that AI companies negotiate blanket or collective licenses with the respective artist groups.
Germany's Society for Musical Performing and Mechanical Reproduction Rights (GEMA) has filed two copyright lawsuits against AI companies: OpenAI, the maker of ChatGPT, and Suno AI, an AI music generation app.
While the lawsuits are not directly related to the AI Act, du Moulin said the verdicts could determine to what extent AI companies can be bound by copyright law.
The Commission and the European Court of Justice, the EU's high court, have also signalled that they will review the text and data mining exemption in the copyright legislation issued in 2019, du Moulin said.
New AI companies have to make sure they comply with the AI Act's rules by 2026. That deadline extends to 2027 for companies already operating in the EU.

Related Articles

Grok, is that Gaza? AI image checks mislocate news photographs
France 24 · 6 hours ago

When social media users asked Grok where an AFP photograph of a starving child in Gaza came from, X boss Elon Musk's artificial intelligence chatbot was certain that the photograph had been taken in Yemen nearly seven years ago.

The AI bot's untrue response was widely shared online, and a left-wing pro-Palestinian French lawmaker, Aymeric Caron, was accused of peddling disinformation on the Israel-Hamas war for posting the photo.

At a time when internet users are turning to AI to verify images more and more, the furore shows the risks of trusting tools like Grok when the technology is far from error-free.

Grok said the photo showed Amal Hussain, a seven-year-old Yemeni child, in October 2018. In fact, the photo shows nine-year-old Mariam Dawwas in the arms of her mother Modallala in Gaza City on August 2, 2025. Before the war, sparked by Hamas's October 7, 2023 attack on Israel, Mariam weighed 25 kilograms, her mother told AFP.

Challenged on its incorrect response, Grok said: "I do not spread fake news; I base my answers on verified sources." The chatbot eventually issued a response that recognised the error, but in reply to further queries the next day it repeated its claim that the photo was from Yemen.

The chatbot has previously issued content that praised Nazi leader Adolf Hitler and suggested that people with Jewish surnames were more likely to spread online hate.

Radical right bias

Grok's mistakes illustrate the limits of AI tools, whose functions are as impenetrable as "black boxes", said Louis de Diesbach, a researcher in technological ethics. "We don't know exactly why they give this or that reply, nor how they prioritise their sources," said Diesbach, author of a book on AI tools, "Hello ChatGPT".

Each AI has biases linked to the information it was trained on and the instructions of its creators, he said. In the researcher's view, Grok, made by Musk's xAI start-up, shows "highly pronounced biases which are highly aligned with the ideology" of the South African billionaire, a former confidant of US President Donald Trump and a standard-bearer for the radical right.

Asking a chatbot to pinpoint a photo's origin takes it out of its proper role, said Diesbach. "Typically, when you look for the origin of an image, it might say: 'This photo could have been taken in Yemen, could have been taken in Gaza, could have been taken in pretty much any country where there is famine'."

AI does not necessarily seek accuracy: "that's not the goal," the expert said.

Another AFP photograph of a starving Gazan child, taken by photographer al-Qattaa in July 2025, had already been wrongly located and dated by Grok to Yemen in 2016. That error led to internet users accusing the French newspaper Libération, which had published the photo, of manipulation.

'Friendly pathological liar'

An AI's bias is linked to the data it is fed and to what happens during fine-tuning, the so-called alignment phase, which then determines what the model will rate as a good or bad answer. "Just because you explain to it that the answer's wrong doesn't mean it will then give a different one," Diesbach said. "Its training data has not changed and neither has its alignment."

Grok is not alone in wrongly identifying images. When AFP asked Mistral AI's Le Chat, which is partly trained on AFP's articles under an agreement between the French start-up and the news agency, the bot also misidentified the photo of Mariam Dawwas as being from Yemen.

For Diesbach, chatbots must never be used as tools to verify facts. "They are not made to tell the truth," but to "generate content, whether true or false", he said.

US government gets a year of ChatGPT Enterprise for $1
France 24 · 8 hours ago

Federal workers in the executive branch will have access to ChatGPT Enterprise in a partnership with the US General Services Administration, according to the pioneering San Francisco-based artificial intelligence (AI) company. "By giving government employees access to powerful, secure AI tools, we can help them solve problems for more people, faster," OpenAI said in a blog post announcing the alliance.

ChatGPT Enterprise does not use business data to train or improve OpenAI models, and the same rule will apply to federal use, according to the company.

Earlier this year, OpenAI announced an initiative focused on bringing advanced AI tools to US government workers. The news came with word that the US Department of Defense awarded OpenAI a $200 million contract to put generative AI to work for the military. OpenAI planned to show how cutting-edge AI can improve administrative operations, such as how service members get health care, and also has cyber defense applications, the startup said in a post.

OpenAI has also launched an initiative to help countries build their own AI infrastructure, with the US government a partner in projects. The tech firm's move to put its technology at the heart of national AI platforms around the world comes as it faces competition from Chinese rival DeepSeek. DeepSeek's success in delivering powerful AI models at a lower cost has rattled Silicon Valley and multiplied calls for US big tech to protect its dominance of the emerging technology.

The OpenAI for Countries initiative was launched in June under the auspices of a drive, dubbed "Stargate", announced by US President Donald Trump to invest up to $500 billion in AI infrastructure in the United States. OpenAI, in "coordination" with the US government, will help countries build data centers and provide customized versions of ChatGPT, according to the tech firm.

Meta removes 6.8 million WhatsApp accounts linked to criminal scammers
Euronews · 11 hours ago

WhatsApp has taken down 6.8 million accounts that were 'linked to criminal scam' centres targeting people online around the world, its parent company Meta said.

The account deletions, which Meta said took place over the first six months of the year, come as part of wider company efforts to crack down on scams. In a Tuesday announcement, Meta said it was also rolling out new tools on WhatsApp to help people spot scams, including a new safety overview that the platform will show when someone who is not in a user's contacts adds them to a group, as well as ongoing tests of alerts that encourage users to pause before responding.

Scams are becoming all too common and increasingly sophisticated in today's digital world. Too-good-to-be-true offers and unsolicited messages attempt to steal consumers' information or money, with scams filling our phones, social media, and other corners of the internet each day.

Meta noted that 'some of the most prolific' sources of scams are criminal scam centres, which are often run by organised crime and rely on forced labour, and warned that such efforts often target people on many platforms at once in an attempt to evade detection. That means a scam campaign may start with messages over text or a dating app, for example, and then move to social media and payment platforms, Meta said.

Meta, which also owns Facebook and Instagram, pointed to recent scam efforts that it said attempted to use its own apps, as well as TikTok, Telegram, and AI-generated messages made using ChatGPT, to offer payments for fake likes, to enlist people into a pyramid scheme, or to lure others into cryptocurrency investments. Meta linked these scams to a criminal scam centre in Cambodia and said it disrupted the campaign in partnership with ChatGPT maker OpenAI.
