Deep learning for dummies, by Quentin Anthony, Jacob Hatef, Hailey Schoelkopf, and Stella Biderman
All the practical details and utilities that go into working with real models! If you're just getting started, we recommend jumping ahead to Basics for some introductory resources on transformers.
For training/inference calculations (e.g. FLOPs, memory overhead, and parameter count), useful external calculators include the following (a back-of-the-envelope sketch follows this list):
Cerebras Model Lab. A user-friendly tool for applying Chinchilla scaling laws.
Transformer Training and Inference VRAM Estimator by Alexander Smirnov. A user-friendly tool to estimate VRAM overhead.
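As a quick cross-check on these calculators, two standard approximations are a parameter count of roughly 12 · n_layers · d_model² (plus embeddings) for a decoder-only transformer, and ~6 · N · D training FLOPs for N parameters and D tokens. A minimal sketch; the example configuration below is an illustrative assumption, not a recommendation:

```python
def transformer_params(n_layers: int, d_model: int, vocab_size: int) -> int:
    """Rough decoder-only parameter count: ~4*d^2 attention + ~8*d^2 MLP per layer, plus embeddings."""
    return n_layers * 12 * d_model ** 2 + vocab_size * d_model

def training_flops(n_params: float, n_tokens: float) -> float:
    """Standard estimate: ~6 FLOPs per parameter per training token."""
    return 6 * n_params * n_tokens

# Illustrative ~7B-scale config trained on 300B tokens (assumed values)
n = transformer_params(n_layers=32, d_model=4096, vocab_size=50257)
print(f"params ~ {n / 1e9:.1f}B, training FLOPs ~ {training_flops(n, 300e9):.2e}")
```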
Communication benchmarks
Transformer sizing and GEMM benchmarks
LLM Visualizations. Clear LLM visualizations and animations for basic transformer understanding.
Annotated PyTorch Paper Implementations
Jay Alammar's blog contains many posts written to be accessible to readers from a wide range of backgrounds. We particularly recommend The Illustrated Transformer and The Illustrated GPT-2.
The Annotated Transformer by Sasha Rush, Austin Huang, Suraj Subramanian, Jonathan Sum, Khalid Almubarak, and Stella Biderman. A walkthrough of the seminal paper "Attention Is All You Need" along with in-line implementations in PyTorch.
Transformers Math 101. A blog post from EleutherAI on training/inference memory estimations, parallelism, FLOP calculations, and deep learning datatypes.
Transformer Inference Arithmetic. A breakdown of the memory overhead, FLOPs, and latency of transformer inference. A back-of-the-envelope sketch of this arithmetic appears after the reading list below.
LLM Finetuning Memory Requirements by Alex Birch. A practical guide on the memory overhead of finetuning models.
Everything about Distributed Training and Efficient Finetuning by Sumanth R Hegde. High-level descriptions and links on parallelism and efficient finetuning.
Efficient Training on Multiple GPUs by Hugging Face. Contains a detailed walk-through of model, tensor, and data parallelism along with the ZeRO optimizer; a minimal data-parallel training sketch follows the paper list below.
Papers
- Efficient Large-Scale Language Model Training on GPU Clusters Using Megatron-LM
- Demystifying Parallel and Distributed Deep Learning: An In-Depth Concurrency Analysis
- ZeRO: Memory Optimizations Toward Training Trillion Parameter Models
- PyTorch FSDP: Experiences on Scaling Fully Sharded Data Parallel
- PyTorch Distributed: Experiences on Accelerating Data Parallel Training
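To make the data-parallel side of these resources concrete, here is a minimal toy training loop using PyTorch DistributedDataParallel. It is a sketch of the general pattern only (assuming a CUDA/NCCL setup and a `torchrun` launch), not the training stack used for any particular model, and the tiny model and optimizer are placeholders:

```python
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    # torchrun sets RANK, LOCAL_RANK, and WORLD_SIZE for each process
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    model = torch.nn.Linear(1024, 1024).cuda()   # placeholder for a real transformer
    model = DDP(model, device_ids=[local_rank])
    opt = torch.optim.AdamW(model.parameters(), lr=1e-4)

    for _ in range(10):
        x = torch.randn(8, 1024, device="cuda")  # placeholder for a real dataloader shard
        loss = model(x).pow(2).mean()
        loss.backward()                           # DDP all-reduces gradients across ranks here
        opt.step()
        opt.zero_grad()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

Launched with e.g. `torchrun --nproc_per_node=8 train_ddp.py`. ZeRO and FSDP (covered in the papers above) replace this replicated-state pattern with sharded optimizer states, gradients, and parameters.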
ML-Engineering Repository. Community notes and practical details covering everything about deep learning training, led by Stas Bekman.
Common HParam Settings by Stella Biderman. Records common settings for model training hyperparameters and her current recommendations for training new models.
Directory of LLMs by Stella Biderman. Records details of trained LLMs including license, architecture type, and dataset.
Data Provenance Explorer. A tool for tracing and filtering data provenance across the most popular open-source finetuning data collections.
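A back-of-the-envelope version of the memory arithmetic covered in Transformers Math 101 and Transformer Inference Arithmetic above. The byte counts assume mixed-precision Adam training and an fp16/bf16 KV cache, and the example configuration is an illustrative assumption; the linked posts give the full derivations:

```python
def training_memory_gib(n_params: float, bytes_per_param: float = 18.0) -> float:
    """Static training memory for mixed-precision Adam, excluding activations:
    ~2 B fp16/bf16 weights + 4 B fp32 master weights + ~4 B grads + 8 B Adam moments."""
    return n_params * bytes_per_param / 1024 ** 3

def kv_cache_gib(batch, seq_len, n_layers, n_heads, d_head, bytes_per_elem=2):
    """KV cache: K and V tensors per layer, each of shape [batch, n_heads, seq_len, d_head]."""
    return 2 * n_layers * batch * n_heads * d_head * seq_len * bytes_per_elem / 1024 ** 3

# Illustrative ~7B-parameter, 32-layer model at 4k context
print(f"training: ~{training_memory_gib(7e9):.0f} GiB before activations")
print(f"KV cache: ~{kv_cache_gib(1, 4096, 32, 32, 128):.1f} GiB per sequence in fp16")
```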
Large language models are frequently trained using very complex codebases, because working at scale and supporting a wide variety of configurable options requires heavy optimization. This can make them less useful as pedagogical tools, so some people have developed stripped-down "minimal implementations" that are sufficient for smaller-scale work and more pedagogically useful. A minimal decoding sketch follows the links below.
GPT Inference
GPT Training
Architecture-Specific Examples
- https://github.com/zphang/minimal-gpt-neox-20b
- https://github.com/zphang/minimal-llama
- https://github.com/zphang/minimal-opt
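In the same minimal spirit, greedy decoding is only a few lines once a pretrained causal LM is loaded. This sketch uses Hugging Face Transformers with GPT-2 purely as an illustrative example, not as the approach taken by the repositories above:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

input_ids = tokenizer("EleutherAI is", return_tensors="pt").input_ids
with torch.no_grad():
    for _ in range(20):
        logits = model(input_ids).logits                        # [batch, seq, vocab]
        next_id = logits[:, -1].argmax(dim=-1, keepdim=True)    # greedy pick of next token
        input_ids = torch.cat([input_ids, next_id], dim=-1)

print(tokenizer.decode(input_ids[0]))
```

Note that this loop re-runs the full prefix at every step; adding a KV cache to avoid that recomputation is exactly the kind of optimization the minimal inference repositories above walk through.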
If you find a bug or typo, or would like to propose an improvement, please don't hesitate to open an Issue or contribute a PR.
If you found this repository helpful, please consider citing it using:
@misc{anthony2024cookbook,
    title = {{The EleutherAI Model Training Cookbook}},
    author = {Anthony, Quentin and Hatef, Jacob and Schoelkopf, Hailey and Biderman, Stella},
    howpublished = {GitHub Repo},
    url = {https://github.com/EleutherAI/cookbook},
    year = {2024}
}