• Stars: 301
• Rank: 138,451 (Top 3%)
• Language: Jupyter Notebook
• License: Apache License 2.0
• Created: over 2 years ago
• Updated: over 1 year ago

Repository Details

Code used for sourcing and cleaning the BigScience ROOTS corpus

Data preparation

This repository contains all the tools and code used to build the ROOTS dataset produced by the BigScience initiative to train the BLOOM models, as well as a reduced version used to train the tokenizer.

General pipeline for the preparation of the ROOTS dataset

[Figure: diagram_preprocessing_roots]

More detail on the process, including the specifics of the cleaning, filtering, and deduplication operations, can be found in Sections 2 "(Crowd)Sourcing a Language Resource Catalogue" and 3 "Processing OSCAR" of our paper on the ROOTS dataset creation.
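
To make the shape of such a pipeline concrete, here is a minimal, hypothetical sketch of a clean, filter, and deduplicate pass over a few in-memory documents. The helper functions, the five-word length threshold, and the exact-hash deduplication are illustrative simplifications and not the actual ROOTS code; the resources listed below and the paper describe the real operations.

# Illustrative sketch only -- not the ROOTS pipeline. Hypothetical helpers
# showing a toy clean -> filter -> deduplicate pass over in-memory documents.
import hashlib
import re

def clean(text: str) -> str:
    # Strip control characters and collapse runs of whitespace.
    text = re.sub(r"[\x00-\x08\x0b-\x1f]", "", text)
    return re.sub(r"\s+", " ", text).strip()

def keep(text: str, min_words: int = 5) -> bool:
    # Toy quality filter: drop very short documents.
    return len(text.split()) >= min_words

def deduplicate(docs: list[str]) -> list[str]:
    # Exact deduplication by content hash (the real pipeline also
    # removes near-duplicates, not just exact copies).
    seen, unique = set(), []
    for doc in docs:
        digest = hashlib.sha256(doc.encode("utf-8")).hexdigest()
        if digest not in seen:
            seen.add(digest)
            unique.append(doc)
    return unique

raw_docs = [
    "Hello   world, this is a sample document.",
    "Hello world, this is a sample document.",
    "too short",
]
docs = deduplicate([d for d in (clean(d) for d in raw_docs) if keep(d)])
print(docs)  # one document survives cleaning, filtering, and deduplication

At ROOTS scale these steps are applied source by source and language by language over terabytes of text, which is what the code in this repository and the resources below implement.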

Key resources

• Code for making the Pseudo-Crawl dataset

• Filtering library used to filter OSCAR

• Code used to run the preprocessing pipeline on the crowdsourced datasets

• Code used to build the tokenizer's training dataset

• Code used to produce the analysis and plots for the paper

Citation

@inproceedings{bigscience-roots:2022,
  title={The BigScience {ROOTS} Corpus: A 1.6{TB} Composite Multilingual Dataset},
  author={Hugo Lauren{\c{c}}on and Lucile Saulnier and Thomas Wang and Christopher Akiki and Albert Villanova del Moral and Teven Le Scao and Leandro Von Werra and Chenghao Mou and Eduardo Gonz{\'a}lez Ponferrada and Huu Nguyen and J{\"o}rg Frohberg and Mario {\v{S}}a{\v{s}}ko and Quentin Lhoest and Angelina McMillan-Major and G{\'e}rard Dupont and Stella Biderman and Anna Rogers and Loubna Ben allal and Francesco De Toni and Giada Pistilli and Olivier Nguyen and Somaieh Nikpoor and Maraim Masoud and Pierre Colombo and Javier de la Rosa and Paulo Villegas and Tristan Thrush and Shayne Longpre and Sebastian Nagel and Leon Weber and Manuel Romero Mu{\~n}oz and Jian Zhu and Daniel Van Strien and Zaid Alyafeai and Khalid Almubarak and Vu Minh Chien and Itziar Gonzalez-Dios and Aitor Soroa and Kyle Lo and Manan Dey and Pedro Ortiz Suarez and Aaron Gokaslan and Shamik Bose and David Ifeoluwa Adelani and Long Phan and Hieu Tran and Ian Yu and Suhas Pai and Jenny Chim and Violette Lepercq and Suzana Ilic and Margaret Mitchell and Sasha Luccioni and Yacine Jernite},
  booktitle={Thirty-sixth Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
  year={2022},
  url={https://openreview.net/forum?id=UoEw6KigkUn}
}

More Repositories

1. petals - 🌸 Run LLMs at home, BitTorrent-style. Fine-tuning and inference up to 10x faster than offloading (Python, 9,056 stars)
2. promptsource - Toolkit for creating, sharing and using natural language prompts (Python, 2,627 stars)
3. Megatron-DeepSpeed - Ongoing research training transformer language models at scale, including BERT & GPT-2 (Python, 1,327 stars)
4. bigscience - Central place for the engineering/scaling WG: documentation, SLURM scripts and logs, compute environment and data (Shell, 977 stars)
5. xmtf - Crosslingual Generalization through Multitask Finetuning (Jupyter Notebook, 510 stars)
6. t-zero - Reproduce results and replicate training of T0 (Multitask Prompted Training Enables Zero-Shot Task Generalization) (Python, 456 stars)
7. biomedical - Tools for curating biomedical training data for large-scale language modeling (Python, 454 stars)
8. lam - Libraries, Archives and Museums (LAM) (79 stars)
9. data_tooling - Tools for managing datasets for governance and training (HTML, 77 stars)
10. multilingual-modeling - BLOOM+1: Adapting the BLOOM model to support a new, unseen language (Python, 69 stars)
11. evaluation - Code and data for the Evaluation WG (Python, 41 stars)
12. data_sourcing - Tools developed by the Data Sourcing Working Group (Python, 31 stars)
13. metadata - Experiments on including metadata such as URLs, timestamps, website descriptions and HTML tags during pretraining (Python, 30 stars)
14. model_card (24 stars)
15. tokenization (Python, 11 stars)
16. carbon-footprint - A repository for `codecarbon` logs (Jupyter Notebook, 10 stars)
17. bloom-dechonk - A repo for running model shrinking experiments (Python, 10 stars)
18. historical_texts - BigScience working group on language models for historical texts (Jupyter Notebook, 8 stars)
19. catalogue_data - Scripts to prepare catalogue data (Jupyter Notebook, 8 stars)
20. pii_processing - PII processing code to detect and remediate PII in BigScience datasets; reference implementation for the PII Hackathon (Python, 8 stars)
21. training_dynamics (5 stars)
22. bibliography - A list of BigScience publications (TeX, 3 stars)
23. scaling-laws-tokenization (2 stars)
24. datasets_stats - Generate statistics over datasets used in the context of BigScience (Makefile, 2 stars)
25. evaluation-robustness-consistency - Tools for evaluating model robustness and consistency (Python, 2 stars)
26. interpretability-ideas (1 star)
27. evaluation-results - Dump of results for BigScience (Python, 1 star)