  • Stars: 9,056
  • Rank: 3,960 (Top 0.08%)
  • Language: Python
  • License: MIT License
  • Created: over 2 years ago
  • Updated: 13 days ago


Repository Details

🌸 Run LLMs at home, BitTorrent-style. Fine-tuning and inference up to 10x faster than offloading


Generate text with the distributed 176B-parameter BLOOM or BLOOMZ models and fine-tune them for your own tasks:

import torch
import torch.nn.functional as F
from transformers import BloomTokenizerFast
from petals import DistributedBloomForCausalLM

tokenizer = BloomTokenizerFast.from_pretrained("bigscience/bloom-petals")
model = DistributedBloomForCausalLM.from_pretrained("bigscience/bloom-petals", tuning_mode="ptune", pre_seq_len=16)
# Embeddings & prompts are on your device, BLOOM blocks are distributed across the Internet

inputs = tokenizer("A cat sat", return_tensors="pt")["input_ids"]
outputs = model.generate(inputs, max_new_tokens=5)
print(tokenizer.decode(outputs[0]))  # A cat sat on a mat...

# Fine-tuning (updates only prompts or adapters hosted locally)
# `data_loader` stands for your own DataLoader yielding (input_ids, labels) batches
optimizer = torch.optim.AdamW(model.parameters())
for input_ids, labels in data_loader:
    outputs = model.forward(input_ids)
    loss = F.cross_entropy(outputs.logits.flatten(0, 1), labels.flatten())
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

🚀 Try now in Colab

🔏 Your data will be processed by other people in the public swarm. Learn more about privacy here. For sensitive data, you can set up a private swarm among people you trust.
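For a rough sense of what a private swarm looks like in practice, here is a hedged sketch. The flag names below are assumptions based on the hivemind-style CLI (verify against python -m petals.cli.run_server --help), and the multiaddress is a placeholder:

# Start the first server of an isolated swarm instead of joining the public one
# (--new_swarm is an assumption; check --help)
python -m petals.cli.run_server bigscience/bloom-petals --new_swarm

# Point additional servers (and clients) at that peer's address
python -m petals.cli.run_server bigscience/bloom-petals --initial_peers /ip4/10.0.0.1/tcp/31330/p2p/<peer_id>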

Connect your GPU and increase Petals capacity

Run our Docker image (works on Linux, macOS, and Windows with WSL2):

sudo docker run -p 31330:31330 --ipc host --gpus all --volume petals-cache:/cache --rm \
    learningathome/petals:main python -m petals.cli.run_server bigscience/bloom-petals --port 31330

Or run these commands in an Anaconda env (requires Linux and Python 3.7+):

conda install pytorch pytorch-cuda=11.7 -c pytorch -c nvidia
pip install -U petals
python -m petals.cli.run_server bigscience/bloom-petals

📚 See FAQ to learn how to configure the server to use multiple GPUs, address common issues, etc.
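As an illustration of the multi-GPU setup the FAQ describes, one simple pattern is running one server process per GPU. CUDA_VISIBLE_DEVICES is a standard CUDA environment variable and --port appears in the Docker command above; treat the overall recipe as a sketch rather than the canonical configuration:

# One Petals server per GPU, each listening on its own port
CUDA_VISIBLE_DEVICES=0 python -m petals.cli.run_server bigscience/bloom-petals --port 31330
CUDA_VISIBLE_DEVICES=1 python -m petals.cli.run_server bigscience/bloom-petals --port 31331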

You can also host BLOOMZ, a version of BLOOM fine-tuned to follow human instructions in the zero-shot regime: just replace bloom-petals with bloomz-petals, as shown below.
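For example, the Anaconda command from above becomes:

python -m petals.cli.run_server bigscience/bloomz-petals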

🔒 Hosting a server does not allow others to run custom code on your computer. Learn more about security here.

💬 If you have any issues or feedback, let us know on our Discord server!

Check out tutorials, examples, and more

Basic tutorials:

  • Getting started: tutorial
  • Prompt-tune BLOOM to create a personified chatbot: tutorial
  • Prompt-tune BLOOM for text semantic classification: tutorial

Learning more:

  • Frequently asked questions: FAQ
  • In-depth system description: paper

📋 If you build an app running BLOOM with Petals, make sure it follows BLOOM's terms of use.

How does it work?

  • Petals runs large language models like BLOOM-176B collaboratively: you load a small part of the model, then team up with people serving the other parts to run inference or fine-tuning.
  • Single-batch inference runs at ≈ 1 sec per step (token), up to 10x faster than offloading and fast enough for chatbots and other interactive apps. Parallel inference reaches hundreds of tokens/sec.
  • Beyond classic language model APIs: you can employ any fine-tuning and sampling methods, execute custom paths through the model, or see its hidden states. You get the comforts of an API with the flexibility of PyTorch. A sampling-loop sketch follows this list.
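To make the last point concrete, here is a minimal sketch of a hand-rolled temperature-sampling loop. It assumes only the transformers-style interface that DistributedBloomForCausalLM mirrors (a forward pass returning .logits); model and tokenizer come from the quickstart example above:

import torch

input_ids = tokenizer("A cat sat", return_tensors="pt")["input_ids"]
for _ in range(5):
    with torch.no_grad():
        logits = model(input_ids).logits[:, -1, :]  # scores for the next token only
    probs = torch.softmax(logits / 0.8, dim=-1)     # temperature = 0.8
    next_token = torch.multinomial(probs, num_samples=1)
    input_ids = torch.cat([input_ids, next_token], dim=-1)
print(tokenizer.decode(input_ids[0]))

Note that this naive loop re-sends the whole prefix on every step; the built-in generate() is the efficient path for plain decoding, so treat the loop purely as an illustration of custom control over sampling.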

📚 See FAQ    📜 Read paper

Installation

Here's how to install Petals with Anaconda on Linux:

conda install pytorch pytorch-cuda=11.7 -c pytorch -c nvidia
pip install -U petals

If you don't use Anaconda, you can install PyTorch in any other way. If you want to run models with 8-bit weights, please install PyTorch with CUDA 11.x or newer for compatibility with bitsandbytes.
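If in doubt, you can check the installed build before adding Petals; this uses only standard PyTorch attributes:

import torch

# Should print a CUDA 11.x+ version and True for 8-bit support via bitsandbytes
print(torch.__version__, torch.version.cuda, torch.cuda.is_available())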

See the instructions for macOS and Windows, the full requirements, and troubleshooting advice in our FAQ.

Benchmarks

The benchmarks below are for BLOOM-176B:

| Setup | Bandwidth | Round-trip latency | Single-batch inference, seq. len. 128 (steps/s) | Single-batch inference, seq. len. 2048 (steps/s) | Parallel forward, batch size 1 (tokens/s) | Parallel forward, batch size 64 (tokens/s) |
|---|---|---|---|---|---|---|
| Offloading, max. possible speed on 1× A100 ¹ | 256 Gbit/s | n/a | 0.18 | 0.18 | 2.7 | 170.3 |
| | 128 Gbit/s | n/a | 0.09 | 0.09 | 2.4 | 152.8 |
| Petals on 14 heterogeneous servers across Europe and North America ² | Real world | n/a | 0.83 | 0.79 | 32.6 | 179.4 |
| Petals on 3 servers, with one A100 each ³ | 1 Gbit/s | < 5 ms | 1.71 | 1.54 | 70.0 | 253.6 |
| | 100 Mbit/s | < 5 ms | 1.66 | 1.49 | 56.4 | 182.0 |
| | 100 Mbit/s | 100 ms | 1.23 | 1.11 | 19.7 | 112.2 |

¹ An upper bound for offloading performance. We base our offloading numbers on the best possible hardware setup for offloading: CPU RAM offloading via PCIe 4.0 with 16 PCIe lanes per GPU and PCIe switches for pairs of GPUs. We assume zero latency for this upper-bound estimate. In 8-bit, the model uses 1 GB of memory per billion parameters. PCIe 4.0 with 16 lanes has a throughput of 256 Gbit/s, so offloading 176B parameters takes 5.5 seconds. The throughput halves (to 128 Gbit/s) if two GPUs share the same PCIe switch.
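As a quick sanity check of that bound, the arithmetic can be reproduced in a few lines of Python using only the numbers from the footnote above:

model_gb = 176 * 1              # 176B parameters at 1 GB per billion (8-bit weights)
pcie_gb_per_s = 256 / 8         # 256 Gbit/s PCIe 4.0 x16 link = 32 GB/s
step_time = model_gb / pcie_gb_per_s
print(step_time)                # 5.5 s to stream the weights once per inference step
print(round(1 / step_time, 2))  # 0.18 steps/s, matching the offloading row in the table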

² A real-world distributed setting with 14 servers holding 2× RTX 3060, 4× 2080Ti, 2× 3090, 2× A4000, and 4× A5000 GPUs. These are personal servers and servers from university labs, spread across Europe and North America and connected to the Internet at speeds of 100–1000 Mbit/s. Four of the servers operate from behind firewalls.

³ An optimistic setup that requires the least communication. The client nodes have 8 CPU cores and no GPU.

We provide more evaluations and discuss these results in more detail in Section 3.3 of our paper.

🛠️ Contributing

Please see our FAQ on contributing.

📜 Citation

Alexander Borzunov, Dmitry Baranchuk, Tim Dettmers, Max Ryabinin, Younes Belkada, Artem Chumachenko, Pavel Samygin, and Colin Raffel. Petals: Collaborative Inference and Fine-tuning of Large Models. arXiv preprint arXiv:2209.01188, 2022.

@article{borzunov2022petals,
  title = {Petals: Collaborative Inference and Fine-tuning of Large Models},
  author = {Borzunov, Alexander and Baranchuk, Dmitry and Dettmers, Tim and Ryabinin, Max and Belkada, Younes and Chumachenko, Artem and Samygin, Pavel and Raffel, Colin},
  journal = {arXiv preprint arXiv:2209.01188},
  year = {2022},
  url = {https://arxiv.org/abs/2209.01188}
}

This project is a part of the BigScience research workshop.

More Repositories

1. promptsource: Toolkit for creating, sharing and using natural language prompts. (Python, 2,627 stars)
2. Megatron-DeepSpeed: Ongoing research training transformer language models at scale, including BERT & GPT-2. (Python, 1,305 stars)
3. bigscience: Central place for the engineering/scaling WG: documentation, SLURM scripts and logs, compute environment and data. (Shell, 971 stars)
4. xmtf: Crosslingual Generalization through Multitask Finetuning. (Jupyter Notebook, 510 stars)
5. t-zero: Reproduce results and replicate training of T0 (Multitask Prompted Training Enables Zero-Shot Task Generalization). (Python, 456 stars)
6. biomedical: Tools for curating biomedical training data for large-scale language modeling. (Python, 452 stars)
7. data-preparation: Code used for sourcing and cleaning the BigScience ROOTS corpus. (Jupyter Notebook, 297 stars)
8. lam: Libraries, Archives and Museums (LAM). (79 stars)
9. data_tooling: Tools for managing datasets for governance and training. (HTML, 75 stars)
10. multilingual-modeling: BLOOM+1: Adapting the BLOOM model to support a new unseen language. (Python, 69 stars)
11. evaluation: Code and data for the Evaluation WG. (Python, 41 stars)
12. data_sourcing: Tools developed by the Data Sourcing Working Group. (Python, 31 stars)
13. metadata: Experiments on including metadata such as URLs, timestamps, website descriptions and HTML tags during pretraining. (Python, 30 stars)
14. model_card: (24 stars)
15. tokenization: (Python, 11 stars)
16. carbon-footprint: A repository for `codecarbon` logs. (Jupyter Notebook, 10 stars)
17. bloom-dechonk: A repo for running model shrinking experiments. (Python, 10 stars)
18. historical_texts: BigScience working group on language models for historical texts. (Jupyter Notebook, 8 stars)
19. catalogue_data: Scripts to prepare catalogue data. (Jupyter Notebook, 8 stars)
20. pii_processing: PII processing code to detect and remediate PII in BigScience datasets; reference implementation for the PII Hackathon. (Python, 8 stars)
21. training_dynamics: (5 stars)
22. bibliography: A list of BigScience publications. (TeX, 3 stars)
23. scaling-laws-tokenization: (2 stars)
24. datasets_stats: Generate statistics over datasets used in the context of BigScience. (Makefile, 2 stars)
25. evaluation-robustness-consistency: Tools for evaluating model robustness and consistency. (Python, 2 stars)
26. interpretability-ideas: (1 star)
27. evaluation-results: Dump of evaluation results for BigScience. (Python, 1 star)