  • Stars: 1,375
  • Rank: 33,011 (Top 0.7%)
  • Language: Python
  • License: MIT License
  • Created: over 3 years ago
  • Updated: about 1 year ago


Repository Details

The Pile Replication Code

The official website for the Pile is here.

The Pile is a large, diverse, open source language modelling data set that consists of many smaller datasets combined together. The objective is to obtain text from as many modalities as possible to ensure that models trained using The Pile will have much broader generalization abilities.

This repository is for replicating or making variants of the Pile. IF YOU ARE HERE TO USE THE PILE DATASET, THIS REPO IS PROBABLY NOT WHAT YOU ARE LOOKING FOR. A copy of the Pile can be downloaded here.

| Component | Raw Size | Weight | Epochs | Effective Size | Mean Document Size |
|-----------|----------|--------|--------|----------------|--------------------|
| Pile-CC | 227.12 GiB | 18.11% | 1.0 | 227.12 GiB | 4.33 KiB |
| PubMed Central | 90.27 GiB | 14.40% | 2.0 | 180.55 GiB | 30.55 KiB |
| Books3 | 100.96 GiB | 12.07% | 1.5 | 151.44 GiB | 538.36 KiB |
| OpenWebText2 | 62.77 GiB | 10.01% | 2.0 | 125.54 GiB | 3.85 KiB |
| ArXiv | 56.21 GiB | 8.96% | 2.0 | 112.42 GiB | 46.61 KiB |
| Github | 95.16 GiB | 7.59% | 1.0 | 95.16 GiB | 5.25 KiB |
| FreeLaw | 51.15 GiB | 6.12% | 1.5 | 76.73 GiB | 15.06 KiB |
| StackExchange | 32.20 GiB | 5.13% | 2.0 | 64.39 GiB | 2.16 KiB |
| USPTO Backgrounds | 22.90 GiB | 3.65% | 2.0 | 45.81 GiB | 4.08 KiB |
| PubMed Abstracts | 19.26 GiB | 3.07% | 2.0 | 38.53 GiB | 1.30 KiB |
| Gutenberg (PG-19) | 10.88 GiB | 2.17% | 2.5 | 27.19 GiB | 398.73 KiB |
| OpenSubtitles | 12.98 GiB | 1.55% | 1.5 | 19.47 GiB | 30.48 KiB |
| Wikipedia (en) | 6.38 GiB | 1.53% | 3.0 | 19.13 GiB | 1.11 KiB |
| DM Mathematics | 7.75 GiB | 1.24% | 2.0 | 15.49 GiB | 8.00 KiB |
| Ubuntu IRC | 5.52 GiB | 0.88% | 2.0 | 11.03 GiB | 545.48 KiB |
| BookCorpus2 | 6.30 GiB | 0.75% | 1.5 | 9.45 GiB | 369.87 KiB |
| EuroParl | 4.59 GiB | 0.73% | 2.0 | 9.17 GiB | 68.87 KiB |
| HackerNews | 3.90 GiB | 0.62% | 2.0 | 7.80 GiB | 4.92 KiB |
| YoutubeSubtitles | 3.73 GiB | 0.60% | 2.0 | 7.47 GiB | 22.55 KiB |
| PhilPapers | 2.38 GiB | 0.38% | 2.0 | 4.76 GiB | 73.37 KiB |
| NIH ExPorter | 1.89 GiB | 0.30% | 2.0 | 3.79 GiB | 2.11 KiB |
| Enron Emails | 0.88 GiB | 0.14% | 2.0 | 1.76 GiB | 1.78 KiB |
| Total | | | | 1254.20 GiB | 5.91 KiB |

(Epochs is the number of passes made over each component's data in producing the roughly 1.2 TB Pile.)
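
The Effective Size and Weight columns are derived quantities: effective size is raw size times epochs, and weight is each component's share of the total effective size. A quick check against a few rows copied from the table above:

```python
# A few (raw size in GiB, epochs) pairs taken from the table above.
components = {
    "Pile-CC":        (227.12, 1.0),
    "PubMed Central": (90.27, 2.0),
    "Books3":         (100.96, 1.5),
    "Wikipedia (en)": (6.38, 3.0),
}

TOTAL_EFFECTIVE = 1254.20  # GiB, from the Total row

for name, (raw_gib, epochs) in components.items():
    effective = raw_gib * epochs                    # Effective Size column
    weight = 100 * effective / TOTAL_EFFECTIVE      # Weight column
    print(f"{name}: {effective:.2f} GiB effective, {weight:.2f}% weight")
```

Up to rounding, this reproduces the table: for example, Pile-CC gives 227.12 GiB effective and 18.11% weight.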

Usage

Install:

pip install -e .

To replicate the Pile:

python the_pile/pile.py --interleave_output 30 --using pile_reprod

Use the pass 2 script here to complete shuffling.
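
The `--interleave_output 30` flag performs the first pass of an out-of-memory shuffle: documents are dealt at random into 30 output piles, and the pass 2 script then shuffles each pile individually. A minimal stdlib sketch of this two-pass scheme (the function name and shard count are illustrative, not the repo's actual API):

```python
import random

def two_pass_shuffle(docs, num_shards=4, seed=0):
    """Shuffle a stream too large to shuffle in memory at once.

    Pass 1: deal each document into a random shard (sequential writes,
    on disk in the real pipeline). Pass 2: shuffle each shard, which
    individually fits in memory, and emit the shards in order.
    """
    rng = random.Random(seed)
    # Pass 1: scatter documents into shards.
    shards = [[] for _ in range(num_shards)]
    for doc in docs:
        shards[rng.randrange(num_shards)].append(doc)
    # Pass 2: shuffle each shard independently and concatenate.
    out = []
    for shard in shards:
        rng.shuffle(shard)
        out.extend(shard)
    return out

print(two_pass_shuffle(list(range(10))))
```

The memory requirement drops from the whole corpus to one shard at a time, which is why the replication flow splits pass 1 (here) and pass 2 (the separate script) into two steps.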

Other

To force download all data:

python the_pile/pile.py --force_download

To generate fasttext training data for CC filtering (OWT2 only):

sudo apt install build-essential
python the_pile/pile.py --using owt2 --make_fasttext 
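
fastText's supervised training format expects one example per line, prefixed with a `__label__` tag. A sketch of what such CC-filter training data looks like, assuming a quality classifier with OWT2-like text as the positive class (the label names and helper below are illustrative, not the repo's actual output):

```python
# Sketch of fastText supervised training data for a CC quality filter:
# one document per line, each prefixed with a __label__ tag. Label names
# here ("owt2", "cc") are illustrative assumptions.
def to_fasttext_line(label, text):
    # fastText treats newlines as example separators, so flatten the text.
    return f"__label__{label} " + " ".join(text.split())

examples = [
    ("owt2", "A well-formed paragraph of web text\nkept by the filter."),
    ("cc", "click here buy now free free free"),
]

lines = [to_fasttext_line(label, text) for label, text in examples]
print("\n".join(lines))
```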

Manual Download Components

The following components need to be downloaded manually. Either download them or comment them out in pile.py.

  • Bibliotik: books3.tar.gz needs to be in the current directory. Download temporarily unavailable.

Workflow

To propose a new dataset for the Pile, open an issue. Your issue should include a description of the dataset, its size, the language(s) it is in, a link to the data, and any other relevant information. If a project manager approves your proposal, they will change its label to Datasets and add it to Project: Datasets. Datasets that we elect not to include in the current version of the Pile will receive a Deferred or Declined label. While we welcome multilingual datasets and plan to include non-English datasets in the future, the initial release of the Pile will be English-only, and all submissions of non-English datasets will be deferred.

To claim responsibility for implementing an unclaimed dataset, leave a comment on one of our unassigned issues. Once a dataset has been assigned to you, make the necessary changes to datasets.py and pile.py in a fork and submit a pull request. If necessary, you can also submit a script for processing the data as shown here.

To raise an issue that is not proposing a new dataset, open an issue with the tag Feature Request or Bug as appropriate.

Data ready for final implementation should meet the following criteria:

  • The data must be in lm_dataformat format.
  • The data must be shuffled.
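
lm_dataformat stores documents as JSON-lines records, each carrying the document text plus a metadata object. A stdlib-only sketch of that record shape (the real library, installable as `lm-dataformat`, writes zstd-compressed archives through its `Archive`/`Reader` classes; this simplified illustration skips the compression):

```python
import json

# Simplified sketch of the lm_dataformat record shape: one JSON object
# per line, with the document text and a metadata dict. The real library
# writes these records into compressed archive files.
docs = [
    {"text": "First document in the dataset.", "meta": {"source": "example"}},
    {"text": "Second document.", "meta": {"source": "example"}},
]

jsonl = "\n".join(json.dumps(d) for d in docs)

# Reading back: one document per line.
for line in jsonl.splitlines():
    record = json.loads(line)
    print(record["meta"]["source"], len(record["text"]))
```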

In preparation for the initial release, we are no longer accepting additions to the master branch. If you would like to contribute a dataset, please submit the pull request to the Version2 branch.

More Repositories

1. gpt-neo (Python, 8,150 stars): An implementation of model parallel GPT-2 and GPT-3-style models using the mesh-tensorflow library.
2. gpt-neox (Python, 6,467 stars): An implementation of model parallel autoregressive transformers on GPUs, based on the DeepSpeed library.
3. lm-evaluation-harness (Python, 5,205 stars): A framework for few-shot evaluation of language models.
4. pythia (Jupyter Notebook, 1,935 stars): The hub for EleutherAI's work on interpretability and learning dynamics.
5. math-lm (Python, 975 stars)
6. polyglot (460 stars): Polyglot: Large Language Models of Well-balanced Competence in Multi-languages.
7. DALLE-mtf (Python, 436 stars): OpenAI's DALL-E for large-scale training in mesh-tensorflow.
8. vqgan-clip (Jupyter Notebook, 339 stars)
9. concept-erasure (Python, 186 stars): Erasing concepts from neural representations with provable guarantees.
10. elk (Python, 171 stars): Keeping language models honest by directly eliciting knowledge encoded in their activations.
11. oslo (Python, 170 stars): OSLO: Open Source for Large-scale Optimization.
12. lm_perplexity (Python, 137 stars)
13. knowledge-neurons (Python, 130 stars): A library for finding knowledge neurons in pretrained transformer models.
14. cookbook (Python, 123 stars): Deep learning for dummies: all the practical details and useful utilities that go into working with real models.
15. pyfra (Python, 107 stars): Python research framework.
16. dps (Python, 83 stars): Data processing system for polyglot.
17. openwebtext2 (Python, 81 stars)
18. info (78 stars): (Deprecated) A hub for onboarding and other information.
19. project-menu (65 stars): See the issue board for the current status of active and prospective projects.
20. stackexchange-dataset (Python, 64 stars): Python tools for processing the Stack Exchange data dumps into a text dataset for language models.
21. magiCARP (Python, 58 stars): One-stop shop for all things CARP.
22. tqdm-multiprocess (Python, 41 stars): Using queues, tqdm-multiprocess supports multiple worker processes, each with multiple tqdm progress bars, displaying them cleanly through the main process; it offers similar functionality for Python logging.
23. aria (Python, 36 stars)
24. semantic-memorization (Jupyter Notebook, 34 stars)
25. hae-rae (30 stars)
26. improved-t5 (Python, 26 stars): Experiments for efforts to train a new and improved T5.
27. features-across-time (Python, 25 stars): Understanding how features learned by neural networks evolve throughout training.
28. mp_nerf (Jupyter Notebook, 25 stars): Massively-Parallel Natural Extension of Reference Frame.
29. pile-pubmedcentral (Python, 20 stars): A script for collecting the PubMed Central dataset in a language-modelling-friendly format.
30. best-download (Python, 19 stars): URL downloader supporting checkpointing and continuous checksumming.
31. polyglot-data (Python, 19 stars): Data-related codebase for the polyglot project.
32. elk-generalization (Jupyter Notebook, 19 stars): Investigating the generalization behavior of LM probes trained to predict truth labels: (1) from one annotator to another, and (2) from easy questions to hard.
33. text-generation-testing-ui (JavaScript, 16 stars): Web app for demoing the EAI models.
34. exploring-contrastive-topology (Jupyter Notebook, 16 stars)
35. rnngineering (Jupyter Notebook, 16 stars): Engineering the state of RNN language models (Mamba, RWKV, etc.).
36. mdl (Python, 14 stars): Minimum Description Length probing for neural network representations.
37. pile_dedupe (Python, 14 stars): Pile deduplication code.
38. pilev2 (Python, 13 stars)
39. distilling (Python, 13 stars): Experiments with distilling large language models.
40. lm-eval2 (Python, 11 stars)
41. equivariance (Jupyter Notebook, 10 stars): A framework for implementing equivariant DL.
42. radioactive-lab (Python, 9 stars): Adapting the "Radioactive Data" paper to work for text models.
43. tagged-pile (Python, 9 stars): Part-of-speech tagging for the Pile and RedPajama.
44. pile-literotica (Python, 8 stars): Download, parse, and filter data from Literotica; data-ready for the Pile.
45. hn-scraper (Python, 8 stars)
46. multimodal-fid (Python, 7 stars)
47. trlx (Python, 7 stars): A repo for distributed training of language models with reinforcement learning from human feedback (RLHF).
48. pile-cc-filtering (Python, 6 stars): The code used to filter CC data for the Pile.
49. minetest-baselines (Python, 6 stars): Baseline agents for Minetest tasks.
50. CodeCARP (6 stars): Data collection pipeline for CodeCARP; includes PyCharm plugins.
51. LLM-Markov-Chains (6 stars): Project repo for the LLM Markov Chains project.
52. pile-uspto (Python, 6 stars): A script for collecting the USPTO Backgrounds dataset in a language-modelling-friendly format.
53. thonkenizers (5 stars): yes.
54. minetest-interpretabilty-notebook (Jupyter Notebook, 5 stars): Jupyter notebook for the interpretability section of the Minetester blog post.
55. visual-grounding (Python, 5 stars): Visually ground GPT-Neo 1.3B and 2.7B.
56. Unpaired-Image-Generation (5 stars): Project repo for the Unpaired Image Generation project.
57. pile-enron-emails (Python, 5 stars): A script for collecting the Enron Emails dataset in a language-modelling-friendly format.
58. architecture-experiments (Python, 5 stars): Repository hosting architecture experiments and development using Paxml and Praxis.
59. llemma-sample-explorer (HTML, 5 stars): Sample explorer tool for the Llemma models.
60. pile-explorer (Python, 4 stars): For exploring the data and documenting its limitations.
61. lm-scope (Jupyter Notebook, 4 stars)
62. megatron-3d (Python, 4 stars)
63. tokengrams (Rust, 4 stars): Efficiently computing and storing token n-grams from large corpora.
64. ccs (Python, 4 stars)
65. latent-video-diffusion (Python, 3 stars): Latent video diffusion.
66. eleutherai-instruct-dataset (3 stars): A large instruct dataset for open-source models (WIP).
67. isaac-mchorse (Python, 3 stars): EleutherAI's Discord bot.
68. pile-allpoetry (Python, 3 stars): Scraper to gather poems from allpoetry.com.
69. eai-prompt-gallery (JavaScript, 3 stars): Library of interesting prompt generations.
70. eleutherai.github.io (HTML, 3 stars): The Hugo-generated website for eleuther.ai; the source of this build is the new-website repo.
71. website (HTML, 3 stars): New website for EleutherAI based on the Hugo static site generator.
72. variance-across-time (Python, 3 stars): Studying the variance in neural net predictions across training time.
73. pile-ubuntu-irc (Python, 3 stars): A script for collecting the Ubuntu IRC dataset in a language-modelling-friendly format.
74. aria-amt (Jupyter Notebook, 3 stars): MIDI-conditioned automatic music transcription.
75. reddit-comment-processing (Python, 2 stars)
76. language-adaptation (2 stars)
77. EvilModel (2 stars): A replication of "EvilModel 2.0: Bringing Neural Network Models into Malware Attacks".
78. bucket-cleaner (Python, 2 stars): A small utility to clear out old model checkpoints in Google Cloud buckets whilst keeping TensorBoard event files.
79. groupoid-rl (Jupyter Notebook, 2 stars)
80. irrlicht (C++, 2 stars): Minetest's fork of Irrlicht.
81. lang-filter (Python, 1 star): Filter text files or archives by language.
82. eleuther-blog (HTML, 1 star): Generated content for the EleutherAI blog; the source is the new-website repo.
83. prefix-free-tokenizer (Python, 1 star): A prefix-free tokenizer.
84. alignment-reader (JavaScript, 1 star): Search and filter through alignment literature.
85. grouch (HTML, 1 star)
86. perceptors (1 star): Central location for access to pretrained models for CLIP and variants, with a common API and an out-of-the-box differentiable weighted multi-perceptor.
87. classifier-latent-diffusion (Python, 1 star)
88. common-llm-settings (JavaScript, 1 star): Common LLM Settings app.
89. bayesian-adam (Python, 1 star): Exactly what it says on the tin.
90. pile-cord19 (Python, 1 star): A script for collecting the CORD-19 dataset in a language-modelling-friendly format.
91. conceptual-constraints (Jupyter Notebook, 1 star): Applying LEACE to models during training.
92. truncated-gaussian (Python, 1 star): Method-of-moments estimation and sampling for truncated multivariate Gaussian distributions.