
Meta wins AI copyright case in blow to authors

Irish Times, 5 hours ago

Meta's use of millions of books to train its artificial intelligence models was judged 'fair' by a US federal court on Wednesday, in a win for tech companies that use copyrighted materials to develop AI.
The case, brought by about a dozen authors, including Ta-Nehisi Coates and Richard Kadrey, challenged how the $1.4 trillion (€1.2 trillion) social media giant used a library of millions of online books, academic articles and comics to train its Llama AI models.
Meta's use of these titles is protected under copyright law's fair use provision, San Francisco district judge Vince Chhabria ruled. The Big Tech firm had argued that the works had been used to develop a transformative technology, which was fair 'irrespective' of how it acquired the works.
This case is among dozens of legal battles working their way through the courts, as creators seek greater financial rights when their works are used to train AI models that may disrupt their livelihoods – while companies profit from the technology.
However, Mr Chhabria warned that his decision reflected the authors' failure to properly make their case.
'This ruling does not stand for the proposition that Meta's use of copyrighted materials to train its language models is lawful,' he said. 'It stands only for the proposition that these plaintiffs made the wrong arguments and failed to develop a record in support of the right one.'
It is the second victory in a week for tech groups that develop AI, after a federal judge on Monday ruled in favour of San Francisco start-up Anthropic in a similar case.
Anthropic had trained its Claude models on legally purchased physical books that were cut up and manually scanned, which the ruling said constituted 'fair use'. However, the judge added that there would need to be a separate trial for claims that it pirated millions of books digitally for training.
The Meta case dealt with LibGen, a so-called online shadow library that hosts much of its content without permission from the rights holders.
Mr Chhabria suggested a 'potentially winning argument' in the Meta case would be market dilution, referring to the damage caused to copyright holders by AI products that could 'flood the market with endless amounts of images, songs, articles, books, and more'.
'People can prompt generative AI models to produce these outputs using a tiny fraction of the time and creativity that would otherwise be required,' Mr Chhabria added. He warned AI could 'dramatically undermine the incentive for human beings to create things the old-fashioned way'.
Meta and legal representatives for the authors did not immediately reply to requests for comment. – Copyright The Financial Times Limited 2025


Related Articles

Tennis chiefs lobby gambling firms to close accounts of punters abusing women players

Irish Times, 3 hours ago

Tennis authorities are lobbying gambling firms to close the betting accounts of punters found to have sent abusive messages to women players, with one American gaming company this week already warning its customers that it will do so.

A report from the Women's Tennis Association [WTA] and International Tennis Federation [ITF] published last week revealed that 458 players were direct targets of abuse last year, with the British No 2, Katie Boulter, telling the BBC she has been sent death threats and explicit pictures by online trolls.

The report, produced using an AI-led detection system developed by the Signify Group, found that 40 per cent of the social media abuse came from frustrated gamblers, a figure that rose to 77 per cent for direct abuse towards players' personal accounts. More than 8,000 posts sent from 4,200 accounts were identified as abusive, with 26 per cent of the offensive messages directed at five women players.

Fifteen of the worst offenders have been reported to the police and other law enforcement agencies, and have also been banned from buying tickets for Grand Slam tournaments and for ATP and WTA tour events, but there is frustration in tennis at betting companies' apparent reluctance to take action.

Talks between the WTA, ITF and gambling companies have intensified, however, since the report was published last week, with some operators indicating a willingness to suspend accounts held by social media abusers. In addition to betting bans, the authorities also want gambling companies to fund industry-wide educational campaigns about online hate.

FanDuel, a US-based gambling company, is understood to have altered its terms and conditions earlier this week, giving it the right to suspend customers who harass athletes, as well as broadening the definition of harassment. In an email sent to its database of users on June 24th, FanDuel drew customers' attention to a new clause in its rules, which states: 'We may, in our sole discretion, suspend or terminate your Account and/or exclude you from our services if we determine that you pose a threat to the safety of participants in a sporting event, or discover that you engaged in the harassment of a sports official, coach or any participant in a sporting event.'

FanDuel is the WTA's official gaming partner in the United States. Under the terms of that deal the WTA has granted FanDuel use of its official scoring data, giving it a potential timing advantage over competitors, while the gaming company also has the rights to use video highlights on its digital platforms.

The British Gambling Commission is also understood to be involved in the discussions on getting more betting companies to ban abusers. One proposal being considered is for the commission's self-exclusion database, a tool designed to ensure operators are not targeting problem gamblers, to be used to list punters who have sent abusive messages to players.

A spokesperson for the WTA and ITF said: 'The report has brought about a constructive conversation with the betting industry. We will continue to push for the industry to do more as part of a collective effort to rid tennis of betting-related abuse. We hope the gambling industry responds constructively to our call for more action on their part.'

Boulter's revelations last week caused shock and dismay throughout tennis. One message sent to the 28-year-old during the French Open read: 'Hope you get cancer'. Another told her: 'Go to hell. I lost money my mother sent me', while a third instructed her to buy 'candles and a coffin for your entire family' and threatened to damage her 'grandmother's grave if she's not dead by tomorrow'.

In an interview with the BBC earlier this week the two-times Wimbledon finalist Ons Jabeur called on the betting companies to ban abusers from their platforms for life. 'I feel like we've been talking about this for a long time, but not a lot changes,' she said. 'The big problem is the betting. The betting companies need to vet these people and look at their social media. If they attack players on social media they should be banned from betting for their whole life.' – Guardian

Widespread concerns among bank staff over AI

RTÉ News, 9 hours ago

There is widespread concern among staff in the financial services sector over the possible effects of artificial intelligence (AI), according to a new survey. The research was conducted by the Financial Services Union (FSU) and the think tank TASC. It shows that job displacement, lack of reskilling opportunities and bias in decision-making are among the top concerns for workers.

The report examined both the opportunities and challenges posed by AI and found that 88% of respondents believe AI will lead to job displacement, while 60% report feeling less secure in their roles than they did five years ago. Over 61% of respondents expressed unease about AI being used in hiring, firing, and promotion decisions. More than half of workers said they are concerned about increased managerial oversight and surveillance through AI systems, fearing a loss of privacy and greater performance monitoring.

Despite these concerns, some workers recognised AI's positive impacts. Around 45% of respondents said they feel AI may lead to less time spent on administrative tasks and 30% feel it may improve data analytics.

"The use of artificial intelligence is expanding at an alarming rate across the financial services sector, and it is incumbent on all key stakeholders to ensure AI is used for the benefit of workers and consumers," said FSU General Secretary John O'Connell. "The FSU has successfully concluded an AI agreement with Bank of Ireland which commits the bank to collectively bargain any changes that may occur due to the expansion of AI," Mr O'Connell added.

Molly Newell, researcher at TASC, said that without clear commitments to equity, inclusion, and transparency, the widespread adoption of AI in financial services risks deepening existing inequalities. "We must ensure this technology serves the common good - strengthening, rather than undermining, social and economic cohesion," Ms Newell said.

The Financial Services Union surveyed 604 employees, 602 of whom were FSU members.

On Monday, the Chief Executive of AIB, Colin Hunt, took part in a panel discussion at a Bloomberg event in Dublin. Asked what impact AI will have on staffing numbers at the bank over the next five years, Mr Hunt said it may lead to a small reduction in net headcount. "I do think that there are certain manual processes that we do now that will be done by AI in the future, and probably net headcount will be broadly stable with a slight downward bias maybe," Mr Hunt said.
