  • Stars: 24
  • Rank: 980,755 (Top 20%)
  • License: Apache License 2.0
  • Created: over 2 years ago
  • Updated: about 2 years ago



More Repositories

1. petals: 🌸 Run LLMs at home, BitTorrent-style; fine-tuning and inference up to 10x faster than offloading (Python, 9,056 stars). A usage sketch follows this list.
2. promptsource: Toolkit for creating, sharing and using natural language prompts (Python, 2,627 stars). A usage sketch follows this list.
3. Megatron-DeepSpeed: Ongoing research training transformer language models at scale, including BERT & GPT-2 (Python, 1,305 stars)
4. bigscience: Central place for the engineering/scaling WG: documentation, SLURM scripts and logs, compute environment and data (Shell, 971 stars)
5. xmtf: Crosslingual Generalization through Multitask Finetuning (Jupyter Notebook, 510 stars)
6. t-zero: Reproduce results and replicate training of T0 (Multitask Prompted Training Enables Zero-Shot Task Generalization) (Python, 456 stars)
7. biomedical: Tools for curating biomedical training data for large-scale language modeling (Python, 452 stars)
8. data-preparation: Code used for sourcing and cleaning the BigScience ROOTS corpus (Jupyter Notebook, 297 stars)
9. lam: Libraries, Archives and Museums (LAM) (79 stars)
10. data_tooling: Tools for managing datasets for governance and training (HTML, 75 stars)
11. multilingual-modeling: BLOOM+1: adapting the BLOOM model to support a new, unseen language (Python, 69 stars)
12. evaluation: Code and data for the Evaluation WG (Python, 41 stars)
13. data_sourcing: Tools developed by the Data Sourcing Working Group (Python, 31 stars)
14. metadata: Experiments on including metadata such as URLs, timestamps, website descriptions and HTML tags during pretraining (Python, 30 stars)
15. tokenization (Python, 11 stars)
16. carbon-footprint: A repository for `codecarbon` logs (Jupyter Notebook, 10 stars)
17. bloom-dechonk: A repo for running model-shrinking experiments (Python, 10 stars)
18. historical_texts: BigScience working group on language models for historical texts (Jupyter Notebook, 8 stars)
19. catalogue_data: Scripts to prepare catalogue data (Jupyter Notebook, 8 stars)
20. pii_processing: PII processing code to detect and remediate PII in BigScience datasets; reference implementation for the PII Hackathon (Python, 8 stars)
21. training_dynamics (5 stars)
22. bibliography: A list of BigScience publications (TeX, 3 stars)
23. scaling-laws-tokenization (2 stars)
24. datasets_stats: Generate statistics over datasets used in the context of BigScience (Makefile, 2 stars)
25. evaluation-robustness-consistency: Tools for evaluating model robustness and consistency (Python, 2 stars)
26. interpretability-ideas (1 star)
27. evaluation-results: Dump of results for bigscience (Python, 1 star)
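
For orientation, here is a minimal inference sketch for petals (item 1 above). This is a sketch under assumptions, not the project's documented quickstart: it assumes `petals` and `transformers` are installed, a public swarm is reachable, and that the swarm serves the checkpoint named below.

```python
# Minimal sketch: distributed inference over a Petals swarm.
# Assumptions: `pip install petals transformers`, and a public swarm
# currently serving the checkpoint below (model choice is illustrative).
from transformers import AutoTokenizer
from petals import AutoDistributedModelForCausalLM

model_name = "bigscience/bloom"  # assumption: a BLOOM checkpoint served by the swarm

tokenizer = AutoTokenizer.from_pretrained(model_name)
# Transformer blocks run on remote volunteer GPUs; the embeddings and
# LM head are loaded locally, so generate() behaves like a regular model.
model = AutoDistributedModelForCausalLM.from_pretrained(model_name)

input_ids = tokenizer("A quick test prompt:", return_tensors="pt")["input_ids"]
output_ids = model.generate(input_ids, max_new_tokens=16)
print(tokenizer.decode(output_ids[0]))
```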
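
Likewise, a small sketch of how promptsource templates (item 2 above) are typically loaded and applied; the dataset name and example fields here are assumptions for illustration.

```python
# Minimal sketch: rendering a promptsource template for one example.
# Assumptions: `pip install promptsource`; "ag_news" and the example's
# fields ("text", "label") are illustrative choices.
from promptsource.templates import DatasetTemplates

templates = DatasetTemplates("ag_news")
print(templates.all_template_names)  # community-written prompts for this dataset

# Pick the first available template rather than hard-coding a name.
template = templates[templates.all_template_names[0]]
example = {"text": "Stocks rallied on Friday.", "label": 2}
rendered = template.apply(example)  # typically [input_text, target_text]
print(rendered)
```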