
Latest news with #StellaBiderman

EleutherAI releases massive AI training dataset of licensed and open domain text

TechCrunch

4 days ago


EleutherAI releases massive AI training dataset of licensed and open domain text

EleutherAI, an AI research organization, has released what it claims is one of the largest collections of licensed and open-domain text for training AI models. The dataset, called The Common Pile v0.1, took around two years to complete in collaboration with AI startups Poolside, Hugging Face, and others, along with several academic institutions. Weighing in at 8 terabytes, The Common Pile v0.1 was used to train two new AI models from EleutherAI, Comma v0.1-1T and Comma v0.1-2T, which EleutherAI claims perform on par with models developed using unlicensed, copyrighted data.

AI companies, including OpenAI, are embroiled in lawsuits over their AI training practices, which rely on scraping the web, including copyrighted material such as books and research journals, to build model training datasets. While some AI companies have licensing arrangements in place with certain content providers, most maintain that the U.S. legal doctrine of fair use shields them from liability in cases where they trained on copyrighted work without permission.

EleutherAI argues that these lawsuits have "drastically decreased" transparency from AI companies, which the organization says has harmed the broader AI research field by making it more difficult to understand how models work and what their flaws might be.

"[Copyright] lawsuits have not meaningfully changed data sourcing practices in [model] training, but they have drastically decreased the transparency companies engage in," Stella Biderman, EleutherAI's executive director, wrote in a blog post on Hugging Face early Friday. "Researchers at some companies we have spoken to have also specifically cited lawsuits as the reason why they've been unable to release the research they're doing in highly data-centric areas."

The Common Pile v0.1, which can be downloaded from Hugging Face's AI dev platform and GitHub, was created in consultation with legal experts, and it draws on sources including 300,000 public domain books digitized by the Library of Congress and the Internet Archive. EleutherAI also used Whisper, OpenAI's open-source speech-to-text model, to transcribe audio content.

EleutherAI claims Comma v0.1-1T and Comma v0.1-2T are evidence that the Common Pile v0.1 was curated carefully enough to enable developers to build models competitive with proprietary alternatives. According to EleutherAI, the models, both of which are 7 billion parameters in size and were trained on only a fraction of the Common Pile v0.1, rival models like Meta's first Llama AI model on benchmarks for coding, image understanding, and math. Parameters, sometimes referred to as weights, are the internal components of an AI model that guide its behavior and answers.

"In general, we think that the common idea that unlicensed text drives performance is unjustified," Biderman wrote in her post. "As the amount of accessible openly licensed and public domain data grows, we can expect the quality of models trained on openly licensed content to improve."
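
Because the dataset is distributed through Hugging Face, it can be explored without pulling all 8 terabytes at once. A minimal sketch of streaming it with the `datasets` library might look like the following; the repository ID and the "text" field name are assumptions for illustration, not confirmed names from the release, so check EleutherAI's Hugging Face page for the actual repositories.

```python
# Minimal sketch: stream a Hugging Face dataset instead of downloading
# all 8 TB up front. The repo ID and the "text" field are assumptions
# for illustration; verify the real names on EleutherAI's Hugging Face page.
from datasets import load_dataset

ds = load_dataset("common-pile/common-pile-v0.1", split="train", streaming=True)

for i, example in enumerate(ds):
    print(example.get("text", "")[:200])  # preview the first few documents
    if i >= 2:
        break
```
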
The Common Pile v0.1 appears to be in part an effort to right EleutherAI's historical wrongs. Years ago, the company released The Pile, an open collection of training text that includes copyrighted material. AI companies have come under fire — and legal pressure — for using The Pile to train models. EleutherAI is committing to releasing open datasets more frequently going forward in collaboration with its research and infrastructure partners.
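
The article also mentions that EleutherAI used Whisper to transcribe audio sources. As a generic illustration of that kind of step, and not EleutherAI's actual pipeline, transcription with the open-source `whisper` package looks roughly like this:

```python
# Generic Whisper transcription sketch, not EleutherAI's actual pipeline.
# Requires: pip install openai-whisper (and ffmpeg installed on the system).
import whisper

model = whisper.load_model("base")     # small general-purpose checkpoint
result = model.transcribe("talk.mp3")  # hypothetical local audio file
print(result["text"])                  # plain-text transcript
```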

It turns out you can train AI models without copyrighted material

Engadget

5 days ago


It turns out you can train AI models without copyrighted material

AI companies claim their tools couldn't exist without training on copyrighted material. It turns out they could; it's just really hard. To prove it, AI researchers trained a new model that's less powerful but much more ethical, because the LLM's dataset uses only public domain and openly licensed material.

The paper (via The Washington Post) was a collaboration between 14 different institutions. The authors represent universities like MIT, Carnegie Mellon and the University of Toronto. Nonprofits like Vector Institute and the Allen Institute for AI also contributed.

The group built an 8 TB ethically sourced dataset. Among the data was a set of 130,000 books in the Library of Congress. After inputting the material, they trained a seven-billion-parameter large language model (LLM) on that data. The result? It performed about as well as Meta's similarly sized Llama 2-7B from 2023. The team didn't publish benchmarks comparing its results to today's top models.

Performance comparable to a two-year-old model wasn't the only downside. The process of putting it all together was also a grind. Much of the data couldn't be read by machines, so humans had to sift through it. "We use automated tools, but all of our stuff was manually annotated at the end of the day and checked by people," co-author Stella Biderman told WaPo. "And that's just really hard." Figuring out the legal details also made the process hard, since the team had to determine which license applied to each website they scanned.

So, what do you do with a less powerful LLM that's much harder to train? If nothing else, it can serve as a counterpoint. In 2024, OpenAI told a British parliamentary committee that such a model essentially couldn't exist, claiming it would be "impossible to train today's leading AI models without using copyrighted materials." Last year, an Anthropic expert witness added, "LLMs would likely not exist if AI firms were required to license the works in their training datasets."

Of course, this study won't change the trajectory of AI companies. After all, more work to create less powerful tools doesn't jibe with their interests. But at least it punctures one of the industry's common arguments. Don't be surprised if you hear about this study again in legal cases and regulation arguments.
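
For readers unfamiliar with the term, a model's parameter count is simply the number of learnable weights a framework can enumerate. A hedged PyTorch sketch, using a toy network rather than anything resembling the actual seven-billion-parameter model, shows what is being counted:

```python
# What a parameter count measures: the number of learnable weights.
# The network below is a toy stand-in, not the model from the paper;
# a 7B LLM simply has roughly 7,000,000,000 of these values.
import torch.nn as nn

model = nn.Sequential(nn.Linear(512, 2048), nn.ReLU(), nn.Linear(2048, 512))
n_params = sum(p.numel() for p in model.parameters())
print(f"{n_params:,} learnable parameters")
```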
