EleutherAI, a well-known nonprofit in the AI research space, has unveiled what it describes as one of the most extensive collections of licensed and public domain text for training artificial intelligence models.
The newly released dataset, named Common Pile v0.1, is the result of nearly two years of collaborative work. The project brought together AI startups such as Hugging Face and Poolside, along with multiple academic institutions. With a total size of 8 terabytes, the dataset was instrumental in developing EleutherAI’s new language models, Comma v0.1-1T and Comma v0.1-2T. The organization claims these models can match the performance of those trained on copyrighted or proprietary content.
The timing of the release is notable, as many leading AI firms currently face lawsuits over their use of copyrighted material in training data. While some companies have negotiated licenses with content creators, most argue their actions fall under "fair use," a legal doctrine whose application to AI training remains hotly contested. Amid these lawsuits, many companies have pulled back on transparency, limiting access to their training data and methodology.
EleutherAI’s executive director, Stella Biderman, emphasized that legal disputes have made open AI research more difficult. In a post on Hugging Face, she noted that some researchers have stopped publishing data-related work altogether due to legal concerns. This, she argues, has negatively impacted the broader AI research community.
In contrast, the Common Pile v0.1 aims to demonstrate that powerful AI models can be built using data that is either licensed or in the public domain. The dataset includes hundreds of thousands of public domain books digitized by sources like the Library of Congress and the Internet Archive. It also incorporates audio transcriptions generated using OpenAI’s Whisper model.
Both of EleutherAI’s new models have 7 billion parameters and were trained on only a portion of the dataset. Despite this, they reportedly rival Meta’s original LLaMA model in areas like coding, mathematical reasoning, and visual understanding.
Biderman argued that the belief that high-performing models cannot be built without unlicensed data is increasingly outdated. As more open data becomes available, she expects even stronger AI models can be trained while respecting copyright law.
EleutherAI has committed to more frequent releases of open, legally vetted datasets with the help of research institutions like the University of Toronto.