  • Stars: 2,193
  • Rank: 21,037 (Top 0.5%)
  • Language: Jupyter Notebook
  • License: Apache License 2.0
  • Created: almost 3 years ago
  • Updated: 3 months ago


Repository Details

The hub for EleutherAI's work on interpretability and learning dynamics

Pythia: Interpreting Transformers Across Time and Scale

This repository is for EleutherAI's project Pythia, which combines interpretability analysis and scaling laws to understand how knowledge develops and evolves during training in autoregressive transformers. For detailed info on the models, their training, and their behavior, please see our paper Pythia: A Suite for Analyzing Large Language Models Across Training and Scaling.

Models

Params | n_layers | d_model | n_heads | d_head | Batch Size | Learning Rate | Checkpoints | Evaluations
Pythia-70M | 6 | 512 | 8 | 64 | 2M | 1e-3 | Here | Ready
Pythia-70M-Deduped | 6 | 512 | 8 | 64 | 2M | 1e-3 | Here | Ready
Pythia-160M | 12 | 768 | 12 | 64 | 2M | 6e-4 | Here | Ready
Pythia-160M-Deduped | 12 | 768 | 12 | 64 | 2M | 6e-4 | Here | Ready
Pythia-410M | 24 | 1024 | 16 | 64 | 2M | 3e-4 | Here | Ready
Pythia-410M-Deduped | 24 | 1024 | 16 | 64 | 2M | 3e-4 | Here | Ready
Pythia-1B | 16 | 2048 | 8 | 256 | 2M | 3e-4 | Here | Ready
Pythia-1B-Deduped | 16 | 2048 | 8 | 256 | 2M | 3e-4 | Here | Ready
Pythia-1.4B | 24 | 2048 | 16 | 128 | 2M | 2e-4 | Here | Ready
Pythia-1.4B-Deduped | 24 | 2048 | 16 | 128 | 2M | 2e-4 | Here | Ready
Pythia-2.8B | 32 | 2560 | 32 | 80 | 2M | 1.6e-4 | Here | Ready
Pythia-2.8B-Deduped | 32 | 2560 | 32 | 80 | 2M | 1.6e-4 | Here | Ready
Pythia-6.9B | 32 | 4096 | 32 | 128 | 2M | 1.2e-4 | Here | Ready
Pythia-6.9B-Deduped | 32 | 4096 | 32 | 128 | 2M | 1.2e-4 | Here | Ready
Pythia-12B | 36 | 5120 | 40 | 128 | 2M | 1.2e-4 | Here | Ready
Pythia-12B-Deduped | 36 | 5120 | 40 | 128 | 2M | 1.2e-4 | Here | Ready

We train and release a suite of 8 model sizes on 2 different datasets: the Pile, as well as the Pile with deduplication applied.

All 8 model sizes are trained on the exact same data, in the exact same order. Each model saw 299,892,736,000 ~= 299.9B tokens during training, and 143 checkpoints for each model are saved every 2,097,152,000 ~= 2B tokens, evenly spaced throughout training. This corresponds to just under 1 epoch on the Pile for non-"deduped" models, and ~= 1.5 epochs on the deduped Pile (which contains 207B tokens in 1 epoch).
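
As a quick sanity check on these numbers (assuming, as in the Pythia paper, a sequence length of 2048 tokens and 1024 sequences per optimizer step), the arithmetic works out as follows:

# Checkpoint/token arithmetic for the Pythia suite. The sequence length of 2048
# and the 1024 sequences per step are assumptions taken from the Pythia paper,
# not from this README.
seq_len = 2048
seqs_per_step = 1024
tokens_per_step = seq_len * seqs_per_step            # 2,097,152 (the "2M" batch)
total_steps = 143_000
tokens_per_checkpoint = tokens_per_step * 1_000      # checkpoints every 1000 steps

print(f"{tokens_per_checkpoint:,} tokens between checkpoints")  # 2,097,152,000
print(f"{tokens_per_step * total_steps:,} tokens in total")     # 299,892,736,000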

Config files used to train these models with the GPT-NeoX library can be found in the models/ directory of this repository.

We also upload the pre-tokenized data files and a script to reconstruct the dataloader as seen during training for all models. See Reproducing Training section for more details.

Changelog

[April 3, 2023] We have released a new version of all Pythia models, with the following changes to our training procedure:

  • All model sizes are now trained with a uniform batch size of 2M tokens. Previously, the models of size 160M, 410M, and 1.4B parameters were trained with a batch size of 4M tokens.
  • We added checkpoints at initialization (step 0) and at steps {1, 2, 4, 8, 16, 32, 64, 128, 256, 512}, in addition to every 1000 training steps.
  • Flash Attention was used in the new retrained suite. Empirically, this seems to have affected the dynamic range of model outputs in some cases, which we are investigating further.
  • We remedied a minor inconsistency in the original suite: all models of size 2.8B parameters or smaller had a learning rate (LR) schedule which decayed to a minimum LR of 10% of the starting LR, but the 6.9B and 12B models used an LR schedule which decayed to a minimum LR of 0. In the redone training runs, we rectified this inconsistency: all models are now trained with the LR decaying to a minimum of 0.1× their maximum LR (see the sketch after this list).
  • The new EleutherAI/pythia-1b is trained in bf16, because in fp16 the model became corrupted due to loss spikes late in training.
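
As a rough illustration of the schedule described in the last two bullets, here is a minimal sketch of a learning rate curve that warms up and then decays to 0.1× the maximum LR. The cosine shape and the 1% warmup fraction are assumptions and may differ from the exact settings in the YAML files under models/:

import math

def pythia_lr_sketch(step, max_lr, total_steps=143_000, warmup_frac=0.01):
    # Linear warmup, then cosine decay to a floor of 0.1 * max_lr.
    # The warmup fraction and the cosine shape are assumptions; see the YAML
    # files under models/ for the exact schedule used for each model.
    min_lr = 0.1 * max_lr
    warmup_steps = int(warmup_frac * total_steps)
    if step < warmup_steps:
        return max_lr * step / max(1, warmup_steps)
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return min_lr + 0.5 * (max_lr - min_lr) * (1 + math.cos(math.pi * progress))

print(pythia_lr_sketch(143_000, max_lr=1e-3))  # 1e-4, i.e. 0.1x the maximum LR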

The old models ("v0") remain available at https://huggingface.co/models?other=pythia_v0.

[January 20, 2023] We have renamed the Pythia model suite so that total parameter counts include both embedding and unembedding layer parameters, in line with many other model suites and because we believe this convention better reflects the on-device memory usage of these models. We also discovered that, due to a typo, one of our models was smaller than we thought, and replaced it with a model of the intended size. See here for more details.

Quickstart

All Pythia models are hosted on the Hugging Face Hub. They can be loaded and used via the following code (shown for the third pythia-70m-deduped checkpoint, step3000):

from transformers import GPTNeoXForCausalLM, AutoTokenizer

model = GPTNeoXForCausalLM.from_pretrained(
  "EleutherAI/pythia-70m-deduped",
  revision="step3000",
  cache_dir="./pythia-70m-deduped/step3000",
)

tokenizer = AutoTokenizer.from_pretrained(
  "EleutherAI/pythia-70m-deduped",
  revision="step3000",
  cache_dir="./pythia-70m-deduped/step3000",
)

inputs = tokenizer("Hello, I am", return_tensors="pt")
tokens = model.generate(**inputs)
tokenizer.decode(tokens[0])

All models were trained for the equivalent of 143000 steps at a batch size of 2,097,152 tokens. Revision/branch step143000 (e.g. https://huggingface.co/EleutherAI/pythia-70m-deduped/tree/step143000) corresponds exactly to the model checkpoint on the main branch of each model.
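
For analyses across training time, you can iterate over the checkpoint branches directly. The sketch below builds the full list of revisions described above (step0, the log-spaced early steps, and every 1000 steps up to step143000) and loads one of them; each revision is downloaded from the Hub the first time it is requested:

from transformers import GPTNeoXForCausalLM

# step0, the log-spaced early checkpoints, then every 1000 steps up to 143000.
revisions = (
    ["step0"]
    + [f"step{2**i}" for i in range(10)]               # step1 ... step512
    + [f"step{i}" for i in range(1000, 144000, 1000)]  # step1000 ... step143000
)
assert len(revisions) == 154

# Load a single intermediate checkpoint; loop over `revisions` to study behavior
# across training (note that each checkpoint is a full copy of the model).
model = GPTNeoXForCausalLM.from_pretrained(
    "EleutherAI/pythia-70m-deduped", revision="step71000"
)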

We additionally provide all model checkpoints in the format accepted by the GPT-NeoX library. Final-step checkpoints plus optimizer states are downloadable from EleutherAI/neox-ckpt-pythia-xxx-deduped-v1, but we do not serve them for all steps at scale, due to the size of the optimizer states and anticipated lower demand. If you would like to perform analysis using the intermediate checkpoints within the GPT-NeoX codebase, or would like the optimizer states for other steps, please email [email protected] and [email protected] to arrange access.

The pythia-{size}-v0 models on Huggingface of sizes 160m, 410m, and 1.4b were trained with a batch size of 4M tokens and were originally trained for 71500 steps, checkpointed every 500 steps. The checkpoints on Huggingface for these v0 models are renamed for consistency with the 2M-batch models, so step1000 is the first saved checkpoint for pythia-1.4b-v0 (corresponding to step 500 in training), while step1000 is likewise the first saved pythia-6.9b-v0 checkpoint (corresponding to 1000 "actual" steps).
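
In other words, for these three 4M-batch v0 models the checkpoint names on Huggingface are twice the actual optimizer step. A hypothetical helper (not part of this repository) for recovering the real training step from a v0 revision name might look like:

# Hypothetical helper: map a Huggingface revision name of a *-v0 model to the
# actual optimizer step during training. The 160m, 410m, and 1.4b v0 models used
# a 4M-token batch, so their renamed checkpoints are 2x the real step number.
FOUR_M_BATCH_V0 = {"pythia-160m-v0", "pythia-410m-v0", "pythia-1.4b-v0"}

def actual_step(model_name: str, revision: str) -> int:
    hf_step = int(revision.removeprefix("step"))
    return hf_step // 2 if model_name in FOUR_M_BATCH_V0 else hf_step

print(actual_step("pythia-1.4b-v0", "step1000"))  # 500
print(actual_step("pythia-6.9b-v0", "step1000"))  # 1000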

Reproducing Training

(Expanded reproduction instructions provided by @BaruchG.)

  1. We provide the training data for replication of our training runs. The GPT-NeoX library requires the pre-tokenized training data in the form of two memory-mapped numpy arrays: a .bin and an .idx file. We provide these files, hosted on the Hugging Face Hub. To download and use the deduplicated Pile training data, run:
git lfs clone https://huggingface.co/datasets/EleutherAI/pythia_deduped_pile_idxmaps

python utils/unshard_memmap.py --input_file ./pythia_deduped_pile_idxmaps/pile_0.87_deduped_text_document-00000-of-00082.bin --num_shards 83 --output_dir ./pythia_pile_idxmaps/

This will take over a day to run, though it should not require more than 5 GB of RAM. We recommend downloading this rather than retokenizing the Pile from scratch, in order to guarantee preservation of the data order seen by the Pythia models.
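
If you want a rough sanity check on the resulting files, you can peek at the first few tokens and decode them with the 20B tokenizer. This is a sketch only: it assumes the .bin file is a flat array of uint16 token IDs (the dtype is recorded in the accompanying .idx file) and that the unsharded output is named pile_0.87_deduped_text_document.bin; adjust the path to whatever utils/unshard_memmap.py actually produced:

import numpy as np
from tokenizers import Tokenizer

# Assumption: raw uint16 token IDs; uint16 is what the Megatron-style
# .bin/.idx format uses for vocabularies smaller than 65536 tokens.
tokens = np.fromfile(
    "pythia_pile_idxmaps/pile_0.87_deduped_text_document.bin",
    dtype=np.uint16,
    count=64,
)

tok = Tokenizer.from_file("utils/20B_tokenizer.json")
print(tok.decode(tokens.tolist()))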

  2. Make a local copy of the tokenizer from the Pythia repo at https://github.com/EleutherAI/pythia/blob/main/utils/20B_tokenizer.json

  3. Run git clone https://github.com/EleutherAI/gpt-neox.git to clone the GPT-NeoX library. Once inside the repo, run git checkout v1.0 to switch to the v1.0 release that Pythia was trained with.

  4. Choose the YAML of the model that you want to reproduce from https://github.com/EleutherAI/pythia/tree/main/models . Each model size has a YAML for the standard Pile dataset and one for the deduplicated Pile. Make a local copy of your selected model's YAML.

  5. Build the Dockerfile contained in v1.0 by going to the root directory of your cloned GPT-NeoX repository and running docker build -t pythia:latest . (assuming you have Docker installed).

  6. After the image finishes building, run the container using the following command (from the root of the GPT-NeoX repo, with your Pythia YAML accessible from within that folder):

docker run --runtime=nvidia --rm -it -e NVIDIA_VISIBLE_DEVICES=0,1,2,3 --shm-size=1g --ulimit memlock=-1 --mount type=bind,src=$PWD,dst=/gpt-neox -v $(pwd):/workspace/ pythia:latest bash

Use the -v argument to add more connected volumes for the dataset and the YAML file if they are not accessible from within the Docker container.

  7. Change the data paths and tokenizer path in your copy of the YAML as follows:
  "train-data-paths": ["/fsx/pile/pile_20B_tokenizer_text_document"], # point this to the .bin/.idx file prefix generated in step 1 (the path without the extension)
  "valid-data-paths": ["/fsx/pile/pile_20B_tokenizer_text_document"], # point this to the .bin/.idx file prefix generated in step 1 (the path without the extension)
  "test-data-paths": ["/fsx/pile/pile_20B_tokenizer_text_document"], # point this to the .bin/.idx file prefix generated in step 1 (the path without the extension)

  "tokenizer-type": "HFTokenizer",
  "vocab-file": "/fsx/pile/20B_tokenizer.json", # point this to the tokenizer retrieved in step 2

You should additionally modify the total batch size (calculated as Total GPUs * train_micro_batch_size_per_gpu * gradient_accumulation_steps / (pipe-parallel-size * model-parallel-size)) so that it equals 1024, matching the Pythia training batch size; a worked example follows the snippet below. The total GPU count for each Pythia training run can be found in the comments of its YAML file.

   "train_micro_batch_size_per_gpu": XXX, # make this a value that will fit within your GPU memory
   "gradient_accumulation_steps": 1, # make this a value to compensate to make the total batch size 1024.

If you would like your weights to be saved, add that information to the YAML file as well. For example, to save checkpoints in the checkpoints folder, you can add the following at the bottom:

  "launcher": "slurm",
  "deepspeed_slurm": false,

  "save": "checkpoints",
  "load": "checkpoints",
  "checkpoint_validation_with_forward_pass": False,
}

Make sure these are the paths as seen from inside your Docker container, and if you want the weights to persist, make sure they are accessible from outside the container, for example under /workspace/.

  8. Install Flash Attention by running pip install -r requirements/requirements-flashattention.txt from the GPT-NeoX repository root inside the Docker container.

  9. You should now be able to start training your model by running the following (modify the path to point at your YAML file):

python deepy.py train.py /workspace/pythia/models/70M/pythia-70m.yml  2>&1 | tee output.txt

The output will also be saved to output.txt; if you don't want that, just drop the | tee output.txt at the end.

  10. Once training is completed, you can benchmark your weights if desired. The most straightforward way to do this is using EleutherAI's LM Evaluation Harness at https://github.com/EleutherAI/lm-evaluation-harness.
    In order to use it with your saved weights, you must first convert them from GPT-NeoX format to Huggingface format. This can be done from inside the GPT-NeoX repository with the script at tools/convert_to_hf.py.
    If you are using v1.0 of GPT-NeoX, you may have to add from typing import List to the top of the file and change the line at https://github.com/EleutherAI/gpt-neox/blob/71df4d5017f9f4919566a11454fe3a507ffdc632/tools/convert_to_hf.py#L44 from list[torch.Tensor] to List[torch.Tensor]. You can then run the script like this to convert the weights at step 143000:
python tools/convert_to_hf.py --input_dir checkpoints/global_step143000/ --config_file checkpoints/global_step143000/configs/pythia-70m.yml --output_dir ./output/

This should output a file structure similar to the one found at https://huggingface.co/EleutherAI/pythia-70m-deduped/tree/main.
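
As a quick sanity check that the conversion worked, you can load the converted weights the same way as in the Quickstart above (this assumes the tokenizer files were also written to ./output/; see the next step if they differ from the official ones):

from transformers import AutoTokenizer, GPTNeoXForCausalLM

# Load the converted checkpoint from the local output directory produced above.
model = GPTNeoXForCausalLM.from_pretrained("./output/")
tokenizer = AutoTokenizer.from_pretrained("./output/")

inputs = tokenizer("Hello, I am", return_tensors="pt")
tokens = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(tokens[0]))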

  11. If your tokenizer_config.json looks different than the one at https://huggingface.co/EleutherAI/pythia-70m-deduped/blob/main/tokenizer_config.json or your special_tokens_map.json looks different than https://huggingface.co/EleutherAI/pythia-70m-deduped/blob/main/special_tokens_map.json, you may need to replace them with the ones on Huggingface. If you don't do this, some of the tasks in the Harness may not work.

  12. You should then be able to set up your environment for benchmarking. The containers at https://hub.docker.com/r/huggingface/transformers-pytorch-gpu/tags should work for this; the 4.28 and 4.29 transformers versions have been verified to work. After setting up that Docker container, run:

git clone https://github.com/EleutherAI/lm-evaluation-harness
cd lm-evaluation-harness
pip install -e .

as outlined in the Harness repository.

  13. You should then be able to run the benchmark by pointing it at your weights (which should be inside your container) with a command similar to this:
python3 main.py \
  --model hf-causal-experimental \
  --model_args pretrained=../gpt-neox/output/ \
  --tasks lambada_openai,piqa,winogrande,arc_easy,sciq,wikitext \
  --device cuda:3

which should output your results.

Dataset Viewer

We provide a tool to view particular portions of the training dataloader used by all models during training, at utils/batch_viewer.py.

This tool requires the inspect_idxmap branch of GPT-NeoX as a git submodule, so you must check out the repository via

git clone --recurse-submodules https://github.com/EleutherAI/pythia
cd pythia

or, if you have already cloned the repository, run

git submodule update --init --recursive

Next, we must install dependencies:

pip install torch==1.13.0+cu117 -f https://download.pytorch.org/whl/torch/
cd utils/gpt-neox
pip install -r requirements/requirements.txt

Additionally, we need to build the C++ helpers used by the Megatron dataloader. From the repository root, you can do this via:

cd utils/gpt-neox/megatron/data
make
cd -

Now we're all set up to run utils/batch_viewer.py!

To run, first substitute the filepath to your copy of the downloaded and resharded .bin and .idx files for either the Pile or deduplicated Pile in utils/dummy_config.yml.

PYTHONPATH=utils/gpt-neox/ python utils/batch_viewer.py \
  --start_iteration 0 \
  --end_iteration 1000 \
  --mode save \
  --save_path .../.../.../... \
  --conf_dir utils/dummy_config.yml 

Passing --mode save will save a separate file containing each batch as a numpy array.

Passing --mode custom will save a dictionary for each batch to a JSONL file; it can be used to compute arbitrary statistics over each batch seen during training.
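
For example, with --mode save you can load a saved batch back and decode it to inspect exactly what the model saw at a given step. This is a sketch only: the save path and the filenames written by batch_viewer.py are assumptions here, so adjust the glob to whatever the script actually produces:

import glob

import numpy as np
from transformers import AutoTokenizer

# Assumption: --save_path was set to ./batches/ and each batch was written as a
# separate .npy file of token IDs with shape (batch_size, seq_len).
batch_files = sorted(glob.glob("batches/*.npy"))
batch = np.load(batch_files[0])

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/pythia-70m-deduped")
print(batch.shape)
print(tokenizer.decode(batch[0].tolist()))  # first sequence of the first saved batch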

Pythia Paper Replication

We provide further information for those interested in replicating the case studies performed in the Pythia suite paper:

  • Memorization density over training
  • Intervention on pronoun frequencies in pretraining
  • Term frequency effects over training

Further information is accessible in the case-studies/ directory of this repository.

Benchmark Scores

We also provide benchmark 0-shot and 5-shot results on a variety of NLP datasets:

  • Lambada (lambada_openai)
  • Wikitext (wikitext)
  • PiQA (piqa)
  • SciQ (sciq)
  • WSC (wsc)
  • Winogrande (winogrande)
  • ARC-challenge (arc_challenge)
  • ARC-easy (arc_easy)
  • LogiQA (logiqa)
  • BLiMP (blimp_*)
  • MMLU (hendrycksTest*)

Evaluations were performed in GPT-NeoX using the LM Evaluation Harness, and are viewable by model and step at evals/pythia-v1/*/* in this repository.
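
Assuming these are the standard JSON files written by the LM Evaluation Harness (with a top-level "results" mapping from task name to metrics), they can be inspected programmatically; the filename below is illustrative, so check evals/pythia-v1/ for the actual layout:

import json

# Illustrative path: substitute the model and step you care about.
with open("evals/pythia-v1/pythia-70m-deduped/step143000.json") as f:
    evals = json.load(f)

for task, metrics in evals["results"].items():
    print(task, metrics)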

Other Papers

Aside from the Pythia suite, this repository also acts as a hub containing information, code, and reproducibility instructions for other EleutherAI papers on interpretability and learning dynamics.

Citation information for the other papers in this repository is included in their respective folders.

Citation Details

If you use the Pythia models or data in your research, please consider citing our paper via:

@misc{biderman2023pythia,
      title={Pythia: A Suite for Analyzing Large Language Models Across Training and Scaling}, 
      author={Stella Biderman and Hailey Schoelkopf and Quentin Anthony and Herbie Bradley and Kyle O'Brien and Eric Hallahan and Mohammad Aflah Khan and Shivanshu Purohit and USVSN Sai Prashanth and Edward Raff and Aviya Skowron and Lintang Sutawika and Oskar van der Wal},
      year={2023},
      eprint={2304.01373},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}

License

   Copyright 2023 EleutherAI

   Licensed under the Apache License, Version 2.0 (the "License");
   you may not use this file except in compliance with the License.
   You may obtain a copy of the License at

       http://www.apache.org/licenses/LICENSE-2.0

   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   See the License for the specific language governing permissions and
   limitations under the License.
