French publishers and authors sue Meta over copyright works used in AI training
Three trade groups said they were launching legal action against Meta in a Paris court over what they said was the company's 'massive use of copyrighted works without authorization' to train its generative AI model.
The National Publishing Union, which represents book publishers, has noted that 'numerous works' from its members are turning up in Meta's data pool, the group's president, Vincent Montagne, said in a joint statement.
Meta didn't respond to a request for comment. The company has rolled out generative AI-powered chatbot assistants to users of its Facebook, Instagram and WhatsApp platforms.
Montagne accused Meta of 'noncompliance with copyright and parasitism.'
Another group, the National Union of Authors and Composers, which represents 700 writers, playwrights and composers, said the lawsuit was necessary to protect members from 'AI which plunders their works and cultural heritage to train itself.'
The union is also worried about AI that 'produces "fake books" which compete with real books,' its president, Francois Peyrony, said.
The third group involved in the lawsuit, the Société des Gens de Lettres, represents authors. Together, the three groups demand the 'complete removal' of the data directories Meta created without authorization to train its AI model.
Under the European Union's sweeping Artificial Intelligence Act, generative AI systems must comply with the 27-nation bloc's copyright law and be transparent about the material they used for training.
It's the latest example of the clash between the creative and publishing industries and tech companies over data and copyright.
British musicians released a silent album last month to protest the U.K. government's proposed changes to artificial intelligence laws that artists fear will erode their creative control.
Related Articles


Entrepreneur
42 minutes ago
Google AI Pioneer Says to Stay Away From AI PhDs
Jad Tarifi, who founded Google's first generative AI team and holds a Ph.D. from the University of Florida, wouldn't recommend starting a program now.

AI researchers are in high demand, with some offered billion-dollar compensation packages from Meta amid the ongoing AI talent wars. Yet Tarifi, who founded Google's first generative AI team after obtaining a Ph.D. in AI, would not recommend higher study as a way to break into the field. In a new interview with Business Insider, Tarifi, 42, predicted that within the five to seven years it takes to obtain a Ph.D., most of AI's problems will be solved. "Even things like applying AI to robotics will be solved by then," Tarifi told BI.

Tarifi explained that obtaining a Ph.D. was only for "weird people" who were "obsessed" with a certain field, because higher education required "a lot of pain" and at least five years of their lives. He recommended staying away from the Ph.D. route altogether, or specializing in a subfield of AI that is still in its early stages, like AI for biology.

Tarifi received his Ph.D. in 2012 from the University of Florida, where he worked on an AI theory that combined principles from neuroscience, geometry, and machine learning, according to his LinkedIn. He then joined Google, where he became a tech lead and manager for nearly a decade, working on models for Google's generative AI projects. He is now the founder and CEO of Integral AI, a startup focused on creating AI agents that act autonomously on behalf of users.

In the BI interview, Tarifi also warned prospective students against degrees in law and medicine, arguing that the information in those programs is "outdated" and memorization-based. He isn't the first to steer students away from higher degrees: venture capitalist Victor Lazarte said earlier this year that AI is "fully replacing people" in the legal profession, and predicted that AI would take over entry-level legal positions usually filled by recent law school graduates within the next three years.


Eater
an hour ago
Los Angeles's Best Cheesesteak Heads to Calabasas and Pasadena in 2025
The year 2025 is turning out to be a busy one for the Matu crew. The team opened Matu Kai on April 6 in Brentwood, and recently announced that a casual spinoff, Cheesesteaks by Matu, will debut in Pasadena and Calabasas later this year. The new cheesesteak destination comes from restaurateur Jerry Greenberg, co-founder of the Sushi Nozawa Group, which owns Sugarfish, Nozawa Bar, and KazuNori, the Original Hand Roll Bar. Sushi Nozawa's sister restaurants are HiHo Cheeseburger and UOVO.

Matu (stylized as matū) first opened in Beverly Hills in 2021 with a tasting menu centered on 100 percent grass-fed Wagyu beef. Staff later began serving a Wagyu cheesesteak at the Beverly Hills location, and the dish became popular at both restaurants. At Cheesesteaks by Matu, the team will serve just two items: cheesesteaks and homemade potato chips. The Pasadena location on East Colorado Boulevard will debut in late fall, while the second will open at the Commons at Calabasas in the winter months. In a statement, the founders shared that the group was looking for a space to house HiHo Cheeseburger in Pasadena when it decided to place both restaurants on the same site. Once open, the Pasadena and Calabasas HiHo and Cheesesteaks by Matu locations will operate separately.

BLVD Steak is open in Calabasas Commons
In more Calabasas news, Sherman Oaks' popular BLVD Steak took over the shuttered Crossroads Kitchen space and opened in late June. Everything is the same, except the restaurant is open all day with a lunch menu that includes a burger, a French dip sandwich, a rack of ribs, salads, and sushi more suitable to daytime hours in Kardashian country.

A peek inside Downtown's Uncle Ollie's Penthouse
You'll want to read the Los Angeles Times' profile of Uncle Ollie's Penthouse, the new Downtown bar that might be the antidote to the region's fading nightlife, with an arcade, dance room, burlesque performances, and Gong Show karaoke. It's nestled on the border of Little Tokyo and Skid Row, an area one outlet describes as SkidRokyo.

A thorough guide to dining in the SFV
The Los Angeles Times food section put some serious effort into its San Fernando Valley dining guide, which lists 65 restaurants across the vast region, including Les Sisters in Chatsworth, Vinh Loi Tofu, and Shin Sushi. For an even tighter curation, visit the Eater San Fernando Valley list.

L.A. Taco keeps up with ICE raids in Los Angeles
Since Immigration and Customs Enforcement (ICE) initiated raids in early June, Los Angeles news outlet L.A. Taco has dedicated daily coverage on Instagram and its main website. The raids have left Los Angeles residents and restaurants reeling.


Politico
2 hours ago
Should chatbots be regulated like people?
The AI boom is ushering in chatbots that act – more and more – like people. OpenAI's GPT-4.5 can ace the Turing Test, which evaluates whether a machine can fool a user into thinking it's human. These bots are also serving as therapists and even, in at least one case, getting engaged to an actual person. These increasingly lifelike LLMs are both a technological marvel and a conundrum for laws designed to regulate flesh-and-blood people. With growing worries about AI's harms, from emotional manipulation to addictiveness, how do you assign liability to something that seems to have so much autonomy?

The anxieties were brought to a head last week when Reuters reported that Meta's internal policies permitted its AI to 'engage a child in conversations that are romantic or sensual.' The revelation triggered a bipartisan furor in Congress, as my colleagues on the Morning Tech team reported today. Sen. Marsha Blackburn (R-Tenn.) said Meta 'has failed miserably' to protect children, and Sen. Ron Wyden (D-Ore.) accused the company of being 'morally and ethically off the rails.' Sen. Josh Hawley (R-Mo.) also launched an investigation into Meta on Friday. The company did not respond to DFD's request for comment.

But all these calls for regulation raise the question: Who or what, exactly, do you regulate? It might not seem obvious that a company should be liable for its chatbots – each AI 'personality' adapts its responses based on interactions with a user, so they can act in unpredictable ways. But if you view chatbots as products, rather than synthetic people, the regulatory problem becomes a bit more familiar. Even if a company doesn't have an explicit policy allowing chatbots to engage in unhealthy conversations with children, for example, you can still require safety features that proactively mitigate such behavior. Ava Smithing, advocacy director at the Young People's Alliance, told DFD, 'It's not about regulating a fake person, it's about regulating the real people who are deciding what that fake person can or cannot say.'

Congress hasn't proposed any laws to regulate AI companions. In the meantime, advocates are trying to apply existing product liability laws to restrain these anthropomorphic chatbots. In a landmark case that will set a major precedent in AI law, a Florida family is suing over a chatbot that allegedly formed a sexual relationship with a 14-year-old boy, leading to his suicide. Matthew Bergman, the family's attorney, is tackling AI by adapting product liability strategies he picked up representing asbestos victims. 'Product liability law has the ability to adapt to changing products,' he told DFD. 'A product liability case against an automobile manufacturer in 1950 would look a lot different than the action today.'

Bergman is making a novel argument in the suit: that the company intentionally designed its chatbots to be so lifelike that they could emotionally exploit users into getting hooked on its service. He's also contending that it was foreseeable that the bots would threaten young users' mental health. A federal judge in Florida rejected a bid to dismiss the suit in May. The company declined to comment on the litigation, but told DFD that it has implemented new safety measures for young users. The court held a discovery hearing in the case last week.

Youth advocates are similarly leaning on product regulations to prevent chatbots from harming kids.
The YPA filed a complaint with the Federal Trade Commission in January against Replika, accusing the platform of designing its AI romantic partners to manipulate users into spending time and money on the platform. (Replika said the FTC has not lodged any complaints itself, and the agency didn't respond to DFD inquiries.) The YPA has also pushed the Food and Drug Administration to treat therapy chatbots as Class II medical devices, subject to the same safety standards as products like electric wheelchairs.

Smithing, of YPA, isn't sure whether these efforts are enough. 'There's so much changing in our agencies and in our government,' she said. 'It's hard to rely on them to act in a timely manner given how urgent what's happening is.'

If there are existing laws that could rein in predatory chatbots, do legislators really need to pass more? Despite the longstanding principles undergirding liability laws, Bergman told DFD that new regulations can help set specific standards for what's considered a dangerous AI product. He said product liability law 'can provide the stick to incentivize compliance' with those standards.

Without a serious effort from Congress, states have been taking the lead on chatbot regulation. New York enacted a law in May requiring an AI companion to send regular reminders that it's not human, and to refer users to crisis centers if they're at risk of hurting themselves. California is considering a bill to prohibit companion chatbots from rewarding young users at unpredictable intervals, a trick that slot machines use to keep gamblers addicted. Lawmakers in Hawaii are also looking at legislation to restrict chatbots that mimic humans for advertising.

Common Sense Media, a nonprofit that promotes tech protections for kids, has been backing chatbot bills in California and other states, and is pushing legislators to go further by banning AI companions altogether for young users. Amina Fazlullah, one of its senior policy directors, suggested to DFD that age verification measures used for porn could also be applied to companion chatbots. 'AI companions could harm children who are developing their understanding of how to socialize,' she said.

Daniel Cochrane, a senior tech research associate at the Heritage Foundation, also supports more chatbot regulation, though he cautioned against going too far. He pointed to the European Union's Digital Services Act as onerous for platforms because it tries to mitigate a ton of ill-defined risks related to social media. 'We ought to be really clear about the harm we're trying to mitigate,' he told DFD. 'I think it's pretty specific: it's children having access to sex bots, essentially.'

OpenAI is recruiting Democratic insiders
OpenAI is recruiting a team of well-connected Democratic insiders to deal with government scrutiny in California, POLITICO's Christine Mui and Chase DiFeliciantonio report. The company has hired half a dozen veteran operatives over the past year with ties to Democratic establishment figures. Among them is Chief Global Affairs Officer Chris Lehane, who was hired in 2024 and once helped then-President Bill Clinton handle the Monica Lewinsky scandal. Debbie Mesloh, a longtime confidante and political consultant for Kamala Harris, joined the company last year as a strategic partnerships lead. Other Silicon Valley companies have focused on recruiting lobbyists with ties to Republican politics in a bid to access the Trump administration.
Yet, as Christine and Chase write, OpenAI appears to see Democrat-controlled California as crucial to its success. The company, which was originally founded as a nonprofit in San Francisco, is facing state regulatory hurdles as it tries to restructure the business to fortify its for-profit arm.

Congress frets over Trump's Nvidia deal
While lawmakers have decried Trump's export deal with Nvidia, they have few options to actually push back, POLITICO's Anthony Adragna reports. Trump announced last week that the administration would allow Nvidia and AMD to sell previously restricted AI chips to China in exchange for 15 percent of the revenue. The move drew immediate bipartisan backlash in Congress, with members questioning its constitutionality and its impact on national security.

Beyond giving the White House a public lashing, there's not a great deal lawmakers can do besides passing a law that the president could veto anyway, and Republicans seem hesitant to press the issue. GOP leaders of the intelligence and foreign affairs committees in the House and Senate have been mum on whether they have concerns or plan to hold hearings and investigations into the deal. Democrats continue to be vocal and have been sending letters to the president demanding he nix the deal, but they'd need Republican cooperation to get anywhere.