
French book rights owners take on Meta over AI
'We have established the presence of many works published by SNE members in the body of data used by Meta,' SNE chief Vincent Montagne said in the statement.
Meta has acknowledged using a database, Books3, containing the full texts of around 200,000 books, including some in French, to train its Llama large language model.
In a separate US court case launched by authors, the company admitted last year to using the database until 2023, claiming that the AI training constituted 'fair use' of copyright-protected books.
French publishers and authors have not publicly put a figure on the harm they say Meta has caused them.
Their case at the Paris judicial court 'should lead to a serious desire emerging on the part of AIs to take the creative industries into account,' SGDL head Christophe Hardy said.
He called on AI developers to 'respect the legal framework and, where relevant, find compensation for the use of works that feed into' the technology.

Related Articles


Euronews
Austrian newspaper's consent model violates EU privacy rules: court
The Austrian Federal Administrative Court on Monday said that the newspaper Der Standard violated the EU's data protection rules when it introduced a "pay or OK" model on its website, confirming an earlier decision by Austria's Data Protection Authority (DSB).

The "pay or OK" approach, introduced on the newspaper's website when the EU's General Data Protection Regulation (GDPR) came into force in 2018, gives users a choice between paying a €9.90 monthly fee to access the website without having their data tracked, or consenting to data collection and processing for targeted advertising.

The Austrian privacy watchdog had already found in its 2023 decision that the approach of one of Austria's most-read newspapers was unlawful, as it only allowed users to consent or refuse globally, whereas the EU's privacy law requires the option to consent to specific types of processing. Der Standard appealed that decision, arguing that such "granular" consent is not feasible in a "pay or OK" system because it needs tracking and statistics to sell advertising in the non-paying version. The Federal Administrative Court has now confirmed the data protection authority's decision and ruled that the newspaper did not operate in line with the EU's privacy rules.

Max Schrems, the Austrian lawyer and privacy activist at NOYB, said in a statement: "'Pay or OK' undermines a core pillar of the GDPR: freely given consent. Instead of a genuine choice of users, we get a North Korean consent rate of 99.9% with this system." NOYB argues that, when asked, only 1 to 7% of users want to be tracked for online advertising, yet under the "pay or OK" model almost all users agree to tracking.

The court allows an appeal to Austria's Supreme Administrative Court, making it likely that the country's highest court will refer the case to the EU Court of Justice.

In another "pay or OK" case, the European Commission fined Meta €200 million in April for breaching the Digital Markets Act (DMA) with its consent model. Meta offers users the choice to either pay a subscription fee for an ad-free experience or consent to personalised advertising by allowing the tech giant to track and process their data. The Commission said Meta did not "give users the required specific choice to opt for a service that uses less of their personal data but is otherwise equivalent to the 'personalised ads' service."


Euronews
UK drops demand to access global Apple users' data, US spy chief says
The United Kingdom has backed down from its demand that Apple allow investigators to access data from users anywhere in the world, according to the top US intelligence official.

US Director of National Intelligence Tulsi Gabbard wrote on the social media platform X that the UK had rescinded its request for the tech company to provide the government with a 'back door' to user data, which she said would have 'enabled access to the protected encrypted data of American citizens and encroached on our civil liberties'.

The row began months ago, when the UK's Home Office served Apple with a formal notice requiring blanket access to encrypted data. The notice, which the UK government has not made public, was first reported by The Washington Post. Even Apple cannot access such data without breaking its own encryption: in 2022 it introduced an Advanced Data Protection (ADP) feature that offers end-to-end encryption for photos, notes, and iCloud backups. 'We have never built a backdoor or master key to any of our products or services and we never will,' the company says.

In response to the UK demand, Apple withdrew ADP from the UK market in February and filed a legal challenge to the order. It is not clear whether that challenge will continue.

Apple does share user data in certain cases. It says that in the UK it may share personal information about its users, for example their name and phone number, if it determines that doing so is necessary for 'national security, law enforcement, or other issues of public importance'. In the United States, Apple says it may provide basic account information, such as a user's email, contacts, and calendar, with 'customer consent' or in response to a US search warrant issued with probable cause.


Euronews
Will regulation make the EU the most trusted power in AI?
The EU is inviting companies that create generative chatbots such as ChatGPT, Mistral, Gemini and Claude to sign a voluntary Code of Conduct on general-purpose AI. By signing the code and adhering to its rules, they are deemed compliant with the AI Act, an EU law that came into force in 2024 and defines four risk levels for the use of AI, from minimal to unacceptable. Companies that refuse to sign the code may face more stringent inspections and administrative burdens.

Major players like OpenAI and Anthropic support the code, while others, like Meta, have refused to sign it. "Since the drafting process began last September, Meta has been very critical of the code, saying that it stifles innovation," said Cynthia Kroet, senior tech policy reporter at Euronews. "They've rolled out a few tools that they cannot fully use in Europe, also because of data protection rules. In the end it doesn't matter much if they sign or not, because the AI Act will prevail anyway," she added.

The AI Act will be implemented progressively through 2027. This month, rules for general-purpose AI models, such as the generative chatbots mentioned above, come into effect, and companies have two years to adapt. Future models entering the market, however, will be required to comply with the law immediately; in case of violation, the Commission may impose a fine of up to €15 million.

Are regulation and investment opposed?

The code of conduct sets out suggestions on how to respect copyright, standards to avoid systemic risks from advanced AI models, and advice on filling out a form that encourages transparency about how companies comply with the AI Act.

Some analysts argue that the EU is using the regulation to position itself strategically as the most trusted AI provider globally. The US and China have less comprehensive regulatory approaches and are focusing primarily on attracting large investments into the sector. However, Laura Lázaro Cabrera, advisor to the Center for Democracy and Technology, says the two need to go hand in hand.

"The EU has made great strides towards strengthening the financial support that it provides to AI development in Europe. Just this year, over €200 billion have been announced for AI investment," Lázaro Cabrera said. "Finances are an important part of the equation, and indeed it is important for the EU to maintain a leadership role in the development of AI, but we think that that leadership has to be tied to a strong safety framework that promotes fundamental rights and that promotes people-centred AI systems," the adviser added.

Deepfakes, theft of confidential data and suicides linked to the use of chatbots are some examples of the risks of generative AI. Lázaro Cabrera hopes that the AI literacy obligations for companies will also lead to EU-wide campaigns and training for citizens, helping them understand the benefits and risks of this revolutionary technology.

Journalist: Isabel Marques da Silva
Content production: Pilar Montero López
Video production: Zacharia Vigneron
Graphics: Loredana Dumitru
Editorial coordination: Ana Lázaro Bosch and Jeremy Fleming-Jones