  • Stars: 456
  • Rank: 95,323 (Top 2%)
  • Language: Python
  • License: Apache License 2.0
  • Created: almost 3 years ago
  • Updated: almost 2 years ago

Repository Details

Reproduce results and replicate the training of T0 (Multitask Prompted Training Enables Zero-Shot Task Generalization)

T-Zero

This repository primarily serves as the codebase and instructions for training, evaluation, and inference of T0.

T0 is the model developed in Multitask Prompted Training Enables Zero-Shot Task Generalization. In this paper, we demonstrate that massive multitask prompted fine-tuning is extremely effective for obtaining zero-shot task generalization: T0 outperforms or matches GPT-3 while being 16x smaller.

While the codebase in this repository mainly reproduces and replicates the training and evaluation of T0, it will be useful for future research on massively multitask fine-tuning.

Setup

  1. Download the repo
  2. Navigate to the root directory of the repo
  3. Run pip install -e . to install the t0 module. Depending on your application, you can install additional flavors (the consolidated commands are sketched after this list):
    1. seqio_tasks: provides the original seqio tasks used for the massively multitask fine-tuning. Run pip install -e .[seqio_tasks] to install the extra requirements.
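
For reference, the full installation amounts to something like the sketch below. The clone URL is an assumption (the repository is typically hosted under the bigscience-workshop organization); the pip commands are the ones given in the list above.

# Assumed clone URL for this repository
git clone https://github.com/bigscience-workshop/t-zero.git
cd t-zero

# Install the t0 module
pip install -e .

# Optional: extra requirements for the original seqio tasks
pip install -e .[seqio_tasks]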

Contents

  • Training: reproducing (or replicating) the massively multitask fine-tuning
  • Evaluation: reproducing the main results reported in the paper
  • Inference: running inference with T0
  • Examples: fine-tuning T0 with additional datasets or prompts.

Released checkpoints

Below are the links to the models reported in our paper. We recommend using the T0++ checkpoint, as it yields the best performance across the widest range of tasks. The T0 and T0+ checkpoints, meanwhile, are intended for zero-shot evaluations on held-out tasks. See Sections 3 and 5 of our paper for more details.

If you don’t have enough resources to run T0, a smaller version with 3 billion parameters (T0 3B) is also available. Note that it is trained with the same mixture of datasets as T0 (not T0++).
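
As a minimal inference sketch, assuming the checkpoints are published on the Hugging Face Hub under IDs such as bigscience/T0_3B and that the transformers library is installed, T0 can be loaded like any other sequence-to-sequence model:

from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Assumed Hub ID for the 3-billion-parameter checkpoint; the larger
# checkpoints (T0, T0+, T0++) would be loaded the same way under their own IDs.
model_name = "bigscience/T0_3B"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# T0 answers natural-language prompts directly, without task-specific heads.
inputs = tokenizer(
    "Is this review positive or negative? Review: this is the best cast iron skillet you will ever buy",
    return_tensors="pt",
)
outputs = model.generate(**inputs)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

Keep in mind that the full-size T0 checkpoints require substantially more memory than T0 3B.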

Lastly, if you want to study the effect of multitask prompted training (a.k.a. instruction tuning) itself, the checkpoints from our ablation studies may be helpful. T0 Single Prompt trains on one prompt per dataset, while T0 Original Task Only trains on an average of 5.7 prompts per dataset (cf. T0 vanilla, which trains on 8.03 prompts per dataset). This series of checkpoints lets you measure, for example, how performance on held-out tasks increases or decreases, or how behavior on a linguistic diagnostic set changes, as you raise the number of prompts per dataset. See Section 6.2 of our paper for more details.

Citation

If you find this resource useful, please cite the paper introducing T0:

@inproceedings{sanh2022multitask,
    title={Multitask Prompted Training Enables Zero-Shot Task Generalization},
    author={Victor Sanh and Albert Webson and Colin Raffel and Stephen Bach and Lintang Sutawika and Zaid Alyafeai and Antoine Chaffin and Arnaud Stiegler and Arun Raja and Manan Dey and M Saiful Bari and Canwen Xu and Urmish Thakker and Shanya Sharma Sharma and Eliza Szczechla and Taewoon Kim and Gunjan Chhablani and Nihal Nayak and Debajyoti Datta and Jonathan Chang and Mike Tian-Jian Jiang and Han Wang and Matteo Manica and Sheng Shen and Zheng Xin Yong and Harshit Pandey and Rachel Bawden and Thomas Wang and Trishala Neeraj and Jos Rozen and Abheesht Sharma and Andrea Santilli and Thibault Fevry and Jason Alan Fries and Ryan Teehan and Teven Le Scao and Stella Biderman and Leo Gao and Thomas Wolf and Alexander M Rush},
    booktitle={International Conference on Learning Representations},
    year={2022},
    url={https://openreview.net/forum?id=9Vrb9D0WI4}
}

More Repositories

1. petals (Python, 9,056 stars): 🌸 Run LLMs at home, BitTorrent-style. Fine-tuning and inference up to 10x faster than offloading
2. promptsource (Python, 2,627 stars): Toolkit for creating, sharing and using natural language prompts.
3. Megatron-DeepSpeed (Python, 1,305 stars): Ongoing research training transformer language models at scale, including: BERT & GPT-2
4. bigscience (Shell, 971 stars): Central place for the engineering/scaling WG: documentation, SLURM scripts and logs, compute environment and data.
5. xmtf (Jupyter Notebook, 510 stars): Crosslingual Generalization through Multitask Finetuning
6. biomedical (Python, 452 stars): Tools for curating biomedical training data for large-scale language modeling
7. data-preparation (Jupyter Notebook, 297 stars): Code used for sourcing and cleaning the BigScience ROOTS corpus
8. lam (79 stars): Libraries, Archives and Museums (LAM)
9. data_tooling (HTML, 75 stars): Tools for managing datasets for governance and training.
10. multilingual-modeling (Python, 69 stars): BLOOM+1: Adapting BLOOM model to support a new unseen language
11. evaluation (Python, 41 stars): Code and Data for Evaluation WG
12. data_sourcing (Python, 31 stars): This directory gathers the tools developed by the Data Sourcing Working Group
13. metadata (Python, 30 stars): Experiments on including metadata such as URLs, timestamps, website descriptions and HTML tags during pretraining.
14. model_card (24 stars)
15. tokenization (Python, 11 stars)
16. carbon-footprint (Jupyter Notebook, 10 stars): A repository for `codecarbon` logs.
17. bloom-dechonk (Python, 10 stars): A repo for running model shrinking experiments
18. historical_texts (Jupyter Notebook, 8 stars): BigScience working group on language models for historical texts
19. catalogue_data (Jupyter Notebook, 8 stars): Scripts to prepare catalogue data
20. pii_processing (Python, 8 stars): PII Processing code to detect and remediate PII in BigScience datasets. Reference implementation for the PII Hackathon
21. training_dynamics (5 stars)
22. bibliography (TeX, 3 stars): A list of BigScience publications
23. scaling-laws-tokenization (2 stars)
24. datasets_stats (Makefile, 2 stars): Generate statistics over datasets used in the context of BS
25. evaluation-robustness-consistency (Python, 2 stars): Tools for evaluating model robustness and consistency
26. interpretability-ideas (1 star)
27. evaluation-results (Python, 1 star): Dump of results for bigscience.