
Latest news with #USCopyrightAct

Llama Group: Jamendo Reaffirms Commitment to Legal Action Over Unauthorized Use of Its Music by Nvidia and Suno

Business Wire
01-07-2025

BRUSSELS--(BUSINESS WIRE)--Regulatory News: Llama Group (Paris: ALLAM) (Brussels: ALLAM): Jamendo, the global music platform and part of the Winamp family, is reaffirming its firm stance in the ongoing dispute with Nvidia over the alleged unauthorized use of its music catalog to train the AI model Fugatto, as well as with Suno regarding the use of the same catalog in training the SunoAI Foundation Model.

No agreement has been reached with Nvidia, despite multiple attempts to resolve the matter amicably and in good faith, including a formal licensing proposal. In contrast, Suno has failed to respond entirely, ignoring Jamendo's repeated efforts to establish communication. Jamendo has no intention of backing down and is determined to pursue the necessary legal procedures to ensure its rights, and those of the more than 70,000 independent artists it represents, are fully respected and enforced. The team is proceeding as planned to initiate legal action.

'Our music catalog is not free for exploitation by commercial entities building AI models without permission or compensation,' said Alexandre Saboundjian, CEO of Jamendo and Winamp. 'Nvidia and Suno's use of our artists' work without authorization is not only unlawful, it is a direct threat to the livelihoods of independent musicians worldwide. We will not stand idly by. As an example, under the US Copyright Act, violations of this nature are subject to statutory damages ranging from $750 to $150,000 per infringed track.'

Jamendo had proposed a fair and standard retroactive license agreement for the 51,000 tracks allegedly used in the training of Nvidia's AI program, based on its established licensing rate for AI training purposes. This offer reflects the same terms accepted by other reputable technology companies. The platform reiterates that it stands ready to defend its rights, and those of its artist community.

Jamendo remains committed to transparency and to defending the legal and moral rights of artists in the evolving digital landscape. Jamendo calls on all stakeholders to respect creators' work and engage in licensing discussions rather than infringe upon the fundamental principles of copyright law.

About Jamendo – Jamendo is all about connecting musicians and music lovers from all over the world. Our goal is to bring together a worldwide community of independent music, creating experience and value around it. Jamendo offers the perfect platform for all independent artists wishing to share their creations as easily as possible, reaching new audiences internationally.

About Winamp – Winamp is redefining the music experience by creating an innovative platform that strengthens the connection between artists and fans. We provide powerful tools that empower creators to manage their music, grow their audience, and maximize their revenue, all while delivering a seamless listening experience through the Winamp Player. Winamp for Creators is our dedicated platform designed to give music artists everything they need to succeed. From monetization tools to music management services, it brings together essential resources to help creators take control of their careers.

About Llama Group – Llama Group is a pioneer and leader in the digital music industry. With extensive expertise across various sectors, the group owns the iconic Winamp platform, the Bridger copyright management company, and the Jamendo music licensing company. Llama Group's ambition is to build the future of the music industry through sustained investment in a range of innovative solutions and in the talent and skills of people who love music. The group stands by its brand values: empowerment, access, simplicity, and fairness. Winamp's vision is a world where a cutting-edge music platform connects artists and their fans like never before. Bridger's mission is to support songwriters and composers by providing a simple and innovative solution for collecting royalties. Jamendo enables independent artists to generate additional income through commercial licenses. Finally, Hotmix offers a bouquet of more than sixty thematic and free digital radio stations.
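For scale, here is a minimal back-of-the-envelope sketch in Python of what the statutory range quoted above would imply if applied across all 51,000 tracks cited in the press release. This is illustrative only; any actual award would be determined by a court and could differ substantially.

# Illustrative calculation using the figures quoted in the press release:
# the US Copyright Act's statutory damages range of $750 to $150,000 per
# infringed work, applied to the 51,000 tracks Jamendo says were used.
tracks_alleged = 51_000   # tracks allegedly used in AI training
min_per_work = 750        # statutory minimum per infringed work (USD)
max_per_work = 150_000    # statutory maximum per infringed work (USD)

low_end = tracks_alleged * min_per_work    # 38,250,000
high_end = tracks_alleged * max_per_work   # 7,650,000,000

print(f"Potential statutory range: ${low_end:,} to ${high_end:,}")
# Potential statutory range: $38,250,000 to $7,650,000,000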

US judge sides with Meta in AI training copyright case

eNCA
26-06-2025

WASHINGTON - A US judge on Wednesday handed Meta a victory over authors who accused the tech giant of violating copyright law by training Llama artificial intelligence on their creations without permission. District Court Judge Vince Chhabria in San Francisco ruled that Meta's use of the works to train its AI model was "transformative" enough to constitute "fair use" under copyright law, in the second such courtroom triumph for AI firms this week.

However, the ruling came with a caveat: the authors could have pitched a winning argument that, by training powerful generative AI with copyrighted works, tech firms are creating a tool that could let a sea of users compete with them in the literary marketplace. "No matter how transformative (generative AI) training may be, it's hard to imagine that it can be fair use to use copyrighted books to develop a tool to make billions or trillions of dollars while enabling the creation of a potentially endless stream of competing works that could significantly harm the market for those books," Chhabria said in his ruling.

Tremendous amounts of data are needed to train large language models powering generative AI. Musicians, book authors, visual artists and news publications have sued various AI companies that used their data without permission or payment. AI companies generally defend their practices by claiming fair use, arguing that training AI on large datasets fundamentally transforms the original content and is necessary for innovation.

"We appreciate today's decision from the court," a Meta spokesperson said in response to an AFP inquiry. "Open-source AI models are powering transformative innovations, productivity and creativity for individuals and companies, and fair use of copyright material is a vital legal framework for building this transformative technology."

In the case before Chhabria, a group of authors sued Meta for downloading pirated copies of their works and using them to train the open-source Llama generative AI, according to court documents. Books involved in the suit include Sarah Silverman's comic memoir "The Bedwetter" and Junot Diaz's Pulitzer Prize–winning novel "The Brief Wondrous Life of Oscar Wao," the documents showed.

"This ruling does not stand for the proposition that Meta's use of copyrighted materials to train its language models is lawful," the judge stated. "It stands only for the proposition that these plaintiffs made the wrong arguments and failed to develop a record in support of the right one."

- Market harming? -

A different federal judge in San Francisco on Monday sided with AI firm Anthropic regarding training its models on copyrighted books without authors' permission. District Court Judge William Alsup ruled that the company's training of its Claude AI models with books bought or pirated was allowed under the "fair use" doctrine in the US Copyright Act. "Use of the books at issue to train Claude and its precursors was exceedingly transformative and was a fair use," Alsup wrote in his decision. "The technology at issue was among the most transformative many of us will see in our lifetimes," Alsup added in his decision, comparing AI training to how humans learn by reading books.

The ruling stems from a class-action lawsuit filed by authors Andrea Bartz, Charles Graeber, and Kirk Wallace Johnson, who accused Anthropic of illegally copying their books to train chatbot Claude, the company's ChatGPT rival. Alsup rejected Anthropic's bid for blanket protection, ruling that the company's practice of downloading millions of pirated books to build a permanent digital library was not justified by fair use protections.

US judge: AI training on books is fair use, partly

The Sun
25-06-2025

SAN FRANCISCO (United States): A US federal judge has sided with Anthropic regarding training its artificial intelligence models on copyrighted books without authors' permission, a decision with the potential to set a major legal precedent in AI deployment.

District Court Judge William Alsup ruled on Monday that the company's training of its Claude AI models with books bought or pirated was allowed under the 'fair use' doctrine in the US Copyright Act. 'Use of the books at issue to train Claude and its precursors was exceedingly transformative and was a fair use,' Alsup wrote in his decision. 'The technology at issue was among the most transformative many of us will see in our lifetimes,' Alsup added in his 32-page decision, comparing AI training to how humans learn by reading books.

Tremendous amounts of data are needed to train large language models powering generative AI. Musicians, book authors, visual artists and news publications have sued various AI companies that used their data without permission or payment. AI companies generally defend their practices by claiming fair use, arguing that training AI on large datasets fundamentally transforms the original content and is necessary for innovation.

'We are pleased that the court recognized that using 'works to train LLMs was transformative,'' an Anthropic spokesperson said in response to an AFP query. The judge's decision is 'consistent with copyright's purpose in enabling creativity and fostering scientific progress,' the spokesperson added.

- Blanket protection rejected -

The ruling stems from a class-action lawsuit filed by authors Andrea Bartz, Charles Graeber, and Kirk Wallace Johnson, who accused Anthropic of illegally copying their books to train Claude, the company's AI chatbot that rivals ChatGPT. However, Alsup rejected Anthropic's bid for blanket protection, ruling that the company's practice of downloading millions of pirated books to build a permanent digital library was not justified by fair use protections.

Along with downloading books from websites offering pirated works, Anthropic bought copyrighted books, scanned the pages and stored them in digital formats, according to court documents. Anthropic's aim was to amass a library of 'all the books in the world' for training AI models on content as deemed fit, the judge said in his ruling. While training AI models on the pirated content posed no legal violation, downloading pirated copies to build a general-purpose library constituted copyright infringement, the judge ruled, regardless of eventual training use.

The case will now proceed to trial on damages related to the pirated library copies, with potential penalties including financial damages. Anthropic said it disagreed with going to trial on this part of the decision and was evaluating its legal options.

'Judge Alsup's decision is a mixed bag,' said Keith Kupferschmid, chief executive of US nonprofit Copyright Alliance. 'In some instances AI companies should be happy with the decision and in other instances copyright owners should be happy.'

Valued at $61.5 billion and heavily backed by Amazon, Anthropic was founded in 2021 by former OpenAI executives. The company, known for its Claude chatbot and AI models, positions itself as focused on AI safety and responsible development.

US judge backs using copyrighted books to train AI

Straits Times
25-06-2025

SAN FRANCISCO - A US federal judge has sided with Anthropic regarding training its artificial intelligence models on copyrighted books without authors' permission, a decision with the potential to set a major legal precedent in AI deployment.

District Court Judge William Alsup ruled on June 23 that the company's training of its Claude AI models with books bought or pirated was allowed under the 'fair use' doctrine in the US Copyright Act. 'Use of the books at issue to train Claude and its precursors was exceedingly transformative and was a fair use,' Mr Alsup wrote in his decision. 'The technology at issue was among the most transformative many of us will see in our lifetimes,' Mr Alsup added in his 32-page decision, comparing AI training to how humans learn by reading books.

Tremendous amounts of data are needed to train large language models powering generative AI. Musicians, book authors, visual artists and news publications have sued various AI companies that used their data without permission or payment. AI companies generally defend their practices by claiming fair use, arguing that training AI on large datasets fundamentally transforms the original content and is necessary for innovation.

'We are pleased that the court recognised that using 'works to train LLMs was transformative,'' an Anthropic spokesperson said in response to an AFP query. The judge's decision is 'consistent with copyright's purpose in enabling creativity and fostering scientific progress,' the spokesperson added.

Blanket protection rejected

The ruling stems from a class-action lawsuit filed by authors Andrea Bartz, Charles Graeber, and Kirk Wallace Johnson, who accused Anthropic of illegally copying their books to train Claude, the company's AI chatbot that rivals ChatGPT. However, Mr Alsup rejected Anthropic's bid for blanket protection, ruling that the company's practice of downloading millions of pirated books to build a permanent digital library was not justified by fair use protections.

Along with downloading books from websites offering pirated works, Anthropic bought copyrighted books, scanned the pages and stored them in digital formats, according to court documents. Anthropic's aim was to amass a library of 'all the books in the world' for training AI models on content as deemed fit, the judge said in his ruling. While training AI models on the pirated content posed no legal violation, downloading pirated copies to build a general-purpose library constituted copyright infringement, the judge ruled, regardless of eventual training use.

The case will now proceed to trial on damages related to the pirated library copies, with potential penalties including financial damages. Anthropic said it disagreed with going to trial on this part of the decision and was evaluating its legal options.

'Judge Alsup's decision is a mixed bag,' said Mr Keith Kupferschmid, chief executive of US non-profit Copyright Alliance. 'In some instances AI companies should be happy with the decision and in other instances copyright owners should be happy.'

Valued at US$61.5 billion (S$78.7 billion) and heavily backed by Amazon, Anthropic was founded in 2021 by former OpenAI executives. The company, known for its Claude chatbot and AI models, positions itself as focused on AI safety and responsible development.

AFP

US Judge Backs Using Copyrighted Books To Train AI

Int'l Business Times
24-06-2025

A US federal judge has sided with Anthropic regarding training its artificial intelligence models on copyrighted books without authors' permission, a decision with the potential to set a major legal precedent in AI deployment.

District Court Judge William Alsup ruled on Monday that the company's training of its Claude AI models with books bought or pirated was allowed under the "fair use" doctrine in the US Copyright Act. "Use of the books at issue to train Claude and its precursors was exceedingly transformative and was a fair use," Alsup wrote in his decision. "The technology at issue was among the most transformative many of us will see in our lifetimes," Alsup added in his 32-page decision, comparing AI training to how humans learn by reading books.

Tremendous amounts of data are needed to train large language models powering generative AI. Musicians, book authors, visual artists and news publications have sued various AI companies that used their data without permission or payment. AI companies generally defend their practices by claiming fair use, arguing that training AI on large datasets fundamentally transforms the original content and is necessary for innovation.

"We are pleased that the court recognized that using 'works to train LLMs was transformative,'" an Anthropic spokesperson said in response to an AFP query. The judge's decision is "consistent with copyright's purpose in enabling creativity and fostering scientific progress," the spokesperson added.

The ruling stems from a class-action lawsuit filed by authors Andrea Bartz, Charles Graeber, and Kirk Wallace Johnson, who accused Anthropic of illegally copying their books to train Claude, the company's AI chatbot that rivals ChatGPT. However, Alsup rejected Anthropic's bid for blanket protection, ruling that the company's practice of downloading millions of pirated books to build a permanent digital library was not justified by fair use protections.

Along with downloading books from websites offering pirated works, Anthropic bought copyrighted books, scanned the pages and stored them in digital format, according to court documents. Anthropic's aim was to amass a library of "all the books in the world" for training AI models on content as deemed fit, the judge said in his ruling. While training AI models on the pirated content posed no legal violation, downloading pirated copies to build a general-purpose library constituted copyright infringement, regardless of eventual training use.

The case will now proceed to trial on damages related to the pirated library copies, with potential penalties including financial damages. Anthropic said it disagreed with going to trial on this part of the decision and was evaluating its legal options.

Valued at $61.5 billion and heavily backed by Amazon, Anthropic was founded in 2021 by former OpenAI executives. The company, known for its Claude chatbot and AI models, positions itself as focused on AI safety and responsible development.
