  • Stars: 1,832
  • Rank: 24,395 (top 0.5%)
  • Language: MDX
  • License: Apache License 2.0
  • Created: over 2 years ago
  • Updated: about 2 months ago

Repository Details

The Hugging Face course on Transformers

The Hugging Face Course

This repo contains the content that's used to create the Hugging Face course. The course teaches you about applying Transformers to various tasks in natural language processing and beyond. Along the way, you'll learn how to use the Hugging Face ecosystem — 🤗 Transformers, 🤗 Datasets, 🤗 Tokenizers, and 🤗 Accelerate — as well as the Hugging Face Hub. It's completely free and open-source!

🌎 Languages and translations

Language Source Authors
English chapters/en @sgugger, @lewtun, @LysandreJik, @Rocketknight1, @sashavor, @osanseviero, @SaulLu, @lvwerra
Bengali (WIP) chapters/bn @avishek-018, @eNipu
German (WIP) chapters/de @JesperDramsch, @MarcusFra, @fabridamicelli
Spanish (WIP) chapters/es @camartinezbu, @munozariasjm, @fordaz
Persian (WIP) chapters/fa @jowharshamshiri, @schoobani
French chapters/fr @lbourdois, @ChainYo, @melaniedrevet, @abdouaziz
Gujarati (WIP) chapters/gu @pandyaved98
Hebrew (WIP) chapters/he @omer-dor
Hindi (WIP) chapters/hi @pandyaved98
Bahasa Indonesia (WIP) chapters/id @gstdl
Italian (WIP) chapters/it @CaterinaBi, @ClonedOne, @Nolanogenn, @EdAbati, @gdacciaro
Japanese (WIP) chapters/ja @hiromu166, @younesbelkada, @HiromuHota
Korean (WIP) chapters/ko @Doohae, @wonhyeongseo, @dlfrnaos19, @nsbg
Portuguese (WIP) chapters/pt @johnnv1, @victorescosta, @LincolnVS
Russian (WIP) chapters/ru @pdumin, @svv73
Thai (WIP) chapters/th @peeraponw, @a-krirk, @jomariya23156, @ckingkan
Turkish (WIP) chapters/tr @tanersekmen, @mertbozkir, @ftarlaci, @akkasayaz
Vietnamese chapters/vi @honghanhh
Chinese (simplified) chapters/zh-CN @zhlhyx, @petrichor1122, @1375626371
Chinese (traditional) (WIP) chapters/zh-TW @davidpeng86

Translating the course into your language

As part of our mission to democratise machine learning, we'd love to have the course available in many more languages! Please follow the steps below if you'd like to help translate the course into your language 🙏.

🗞️ Open an issue

To get started, navigate to the Issues page of this repo and check if anyone else has opened an issue for your language. If not, open a new issue by selecting the Translation template from the New issue button.

Once an issue is created, post a comment to indicate which chapters you'd like to work on and we'll add your name to the list.

🗣 Join our Discord

Since it can be difficult to discuss translation details quickly over GitHub issues, we have created dedicated channels for each language on our Discord server. If you'd like to join, follow the instructions at this channel 👉: https://discord.gg/JfAtkvEtRb

🍴 Fork the repository

Next, you'll need to fork this repo. You can do this by clicking on the Fork button on the top-right corner of this repo's page.

Once you've forked the repo, you'll want to get the files on your local machine for editing. You can do that by cloning the fork with Git as follows:

git clone https://github.com/YOUR-USERNAME/course
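
If you plan to work on the translation over several sessions, it can help to keep your fork in sync with the main repository. This is an optional extra step, not part of the official instructions; the commands below assume the upstream default branch is called main:

cd course
git remote add upstream https://github.com/huggingface/course
git fetch upstream
git merge upstream/main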

📋 Copy-paste the English files with a new language code

The course files are organised under a main directory:

  • chapters: all the text and code snippets associated with the course.

You'll only need to copy the files in the chapters/en directory, so first navigate to your fork of the repo and run the following:

cd ~/path/to/course
cp -r chapters/en/CHAPTER-NUMBER chapters/LANG-ID/CHAPTER-NUMBER

Here, CHAPTER-NUMBER refers to the chapter you'd like to work on and LANG-ID should be one of the ISO 639-1 or ISO 639-2 language codes -- see here for a handy table.
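
For example, if you were starting a French translation of the first chapter (fr and chapter1 are placeholders here; substitute your own language code and chapter), the commands might look like this. Note that cp needs the chapters/LANG-ID parent directory to exist, so create it first:

cd ~/path/to/course
mkdir -p chapters/fr
cp -r chapters/en/chapter1 chapters/fr/chapter1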

✍️ Start translating

Now comes the fun part - translating the text! The first thing we recommend is translating the part of the _toctree.yml file that corresponds to your chapter. This file is used to render the table of contents on the website and provide the links to the Colab notebooks. The only fields you should change are the title ones -- for example, here are the parts of _toctree.yml that we'd translate for Chapter 0:

- title: 0. Setup # Translate this!
  sections:
  - local: chapter0/1 # Do not change this!
    title: Introduction # Translate this!

🚨 Make sure the _toctree.yml file only contains the sections that have been translated! Otherwise you won't be able to build the content on the website or locally (see below for how to do this).

Once you have translated the _toctree.yml file, you can start translating the MDX files associated with your chapter.

🙋 If the _toctree.yml file doesn't yet exist for your language, you can simply create one by copy-pasting from the English version and deleting the sections that aren't related to your chapter. Just make sure it exists in the chapters/LANG-ID/ directory!
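
As a purely illustrative sketch, a minimal chapters/fr/_toctree.yml covering only Chapter 0 for a hypothetical French translation might look like the following (the translated titles are made up; keep the local fields unchanged):

- title: 0. Configuration # Translated title
  sections:
  - local: chapter0/1 # Do not change this!
    title: Introduction # Translated title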

👷‍♂️ Build the course locally

Once you're happy with your changes, you can preview how they'll look by first installing the doc-builder tool that we use for building all documentation at Hugging Face:

pip install hf-doc-builder
doc-builder preview course ../course/chapters/LANG-ID --not_python_module

🚨 Note: the preview command does not work on Windows.

This will build and render the course on http://localhost:3000/. Although the content looks much nicer on the Hugging Face website, this step will still allow you to check that everything is formatted correctly.
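
For instance, with the repo cloned to ~/path/to/course and a French translation in progress (fr is a placeholder for your language code), the preview command might look like:

doc-builder preview course ~/path/to/course/chapters/fr --not_python_module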

🚀 Submit a pull request

If the translations look good locally, the final step is to prepare the content for a pull request. Here, the first thing to check is that the files are formatted correctly. For that you can run:

pip install -r requirements.txt
make style

Once that's run, commit any changes, open a pull request, and tag @lewtun for a review. Congratulations, you've now completed your first translation 🥳!
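
If you're less familiar with the Git side of this, the sequence might look like the sketch below (the branch name and commit message are just examples; origin refers to your fork):

git checkout -b fr-chapter1
git add chapters/fr
git commit -m "Add French translation of Chapter 1"
git push origin fr-chapter1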

🚨 To build the course on the website, double-check that your language code exists in the languages field of the build_documentation.yml and build_pr_documentation.yml files in the .github folder. If not, just add it in alphabetical order.

📔 Jupyter notebooks

The Jupyter notebooks containing all the code from the course are hosted on the huggingface/notebooks repo. If you wish to generate them locally, first install the required dependencies:

python -m pip install -r requirements.txt

Then run the following script:

python utils/generate_notebooks.py --output_dir nbs

This script extracts all the code snippets from the chapters and stores them as notebooks in the nbs folder (which is ignored by Git by default).
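
If you'd like to browse the generated notebooks, you can open them with Jupyter (assuming the classic notebook server is installed in your environment):

python -m pip install notebook
jupyter notebook nbs/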

✍️ Contributing a new chapter

Note: we are not currently accepting community contributions for new chapters. These instructions are for the Hugging Face authors.

Adding a new chapter to the course is quite simple:

  1. Create a new directory under chapters/en/chapterX, where chapterX is the chapter you'd like to add.
  2. Add numbered MDX files sectionX.mdx for each section. If you need to include images, place them in the huggingface-course/documentation-images repository and use the HTML image syntax with the path https://huggingface.co/datasets/huggingface-course/documentation-images/resolve/main/{langY}/{chapterX}/{your-image.png} (see the sketch after this list).
  3. Update the _toctree.yml file to include your chapter sections -- this information will render the table of contents on the website. If your section involves both the PyTorch and TensorFlow APIs of transformers, make sure you include links to both Colabs in the colab field.
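
As a sketch of the image syntax mentioned in step 2, a section file might embed an image like this (the en/chapter1 path segments and example-image.png file name are made-up placeholders):

<img src="https://huggingface.co/datasets/huggingface-course/documentation-images/resolve/main/en/chapter1/example-image.png" alt="A description of the image"/>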

If you get stuck, check out one of the existing chapters -- this will often show you the expected syntax.

Once you are happy with the content, open a pull request and tag @lewtun for a review. We recommend adding the first chapter draft as a single pull request -- the team will then provide feedback internally to iterate on the content 🤗!

🙌 Acknowledgements

The structure of this repo and README are inspired by the wonderful Advanced NLP with spaCy course.

More Repositories

1. transformers (Python, 125,320 stars): 🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX.
2. pytorch-image-models (Python, 28,073 stars): PyTorch image models, scripts, pretrained weights -- ResNet, ResNeXT, EfficientNet, NFNet, Vision Transformer (ViT), MobileNet-V3/V2, RegNet, DPN, CSPNet, Swin Transformer, MaxViT, CoAtNet, ConvNeXt, and more
3. diffusers (Python, 22,776 stars): 🤗 Diffusers: State-of-the-art diffusion models for image and audio generation in PyTorch and FLAX.
4. datasets (Python, 17,530 stars): 🤗 The largest hub of ready-to-use datasets for ML models with fast, easy-to-use and efficient data manipulation tools
5. peft (Python, 13,148 stars): 🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.
6. candle (Rust, 12,686 stars): Minimalist ML framework for Rust
7. tokenizers (Rust, 8,286 stars): 💥 Fast State-of-the-Art Tokenizers optimized for Research and Production
8. trl (Python, 8,181 stars): Train transformer language models with reinforcement learning.
9. text-generation-inference (Python, 7,240 stars): Large Language Model Text Generation Inference
10. accelerate (Python, 7,008 stars): 🚀 A simple way to launch, train, and use PyTorch models on almost any device and distributed configuration, automatic mixed precision (including fp8), and easy-to-configure FSDP and DeepSpeed support
11. chat-ui (TypeScript, 5,586 stars): Open source codebase powering the HuggingChat app
12. deep-rl-class (MDX, 3,541 stars): This repo contains the syllabus of the Hugging Face Deep Reinforcement Learning Course.
13. alignment-handbook (Python, 3,485 stars): Robust recipes to align language models with human and AI preferences
14. autotrain-advanced (Python, 3,283 stars): 🤗 AutoTrain Advanced
15. diffusion-models-class (Jupyter Notebook, 3,126 stars): Materials for the Hugging Face Diffusion Models Course
16. notebooks (Jupyter Notebook, 3,114 stars): Notebooks using the Hugging Face libraries 🤗
17. distil-whisper (Python, 2,964 stars): Distilled variant of Whisper for speech recognition. 6x faster, 50% smaller, within 1% word error rate.
18. neuralcoref (C, 2,806 stars): ✨Fast Coreference Resolution in spaCy with Neural Networks
19. knockknock (Python, 2,682 stars): 🚪✊Knock Knock: Get notified when your training ends with only two additional lines of code
20. swift-coreml-diffusers (Swift, 2,377 stars): Swift app demonstrating Core ML Stable Diffusion
21. safetensors (Python, 2,347 stars): Simple, safe way to store and distribute tensors
22. optimum (Python, 2,086 stars): 🚀 Accelerate training and inference of 🤗 Transformers and 🤗 Diffusers with easy to use hardware optimization tools
23. awesome-papers (1,996 stars): Papers & presentation materials from Hugging Face's internal science day
24. blog (Jupyter Notebook, 1,962 stars): Public repo for HF blog posts
25. setfit (Jupyter Notebook, 1,912 stars): Efficient few-shot learning with Sentence Transformers
26. text-embeddings-inference (Rust, 1,845 stars): A blazing fast inference solution for text embeddings models
27. evaluate (Python, 1,825 stars): 🤗 Evaluate: A library for easily evaluating machine learning models and datasets.
28. transfer-learning-conv-ai (Python, 1,654 stars): 🦄 State-of-the-Art Conversational AI with Transfer Learning
29. swift-coreml-transformers (Swift, 1,543 stars): Swift Core ML 3 implementations of GPT-2, DistilGPT-2, BERT, and DistilBERT for Question answering. Other Transformers coming soon!
30. pytorch-openai-transformer-lm (Python, 1,464 stars): 🐥A PyTorch implementation of OpenAI's finetuned transformer language model with a script to import the weights pre-trained by OpenAI
31. cookbook (Jupyter Notebook, 1,357 stars): Open-source AI cookbook
32. huggingface_hub (Python, 1,311 stars): All the open source things related to the Hugging Face Hub.
33. Mongoku (TypeScript, 1,289 stars): 🔥The Web-scale GUI for MongoDB
34. huggingface.js (TypeScript, 1,193 stars): Utilities to use the Hugging Face Hub API
35. hmtl (Python, 1,185 stars): 🌊HMTL: Hierarchical Multi-Task Learning - A State-of-the-Art neural network model for several NLP tasks based on PyTorch and AllenNLP
36. gsplat.js (TypeScript, 1,114 stars): JavaScript Gaussian Splatting library.
37. llm-vscode (TypeScript, 1,060 stars): LLM powered development for VSCode
38. datatrove (Python, 1,033 stars): Freeing data processing from scripting madness by providing a set of platform-agnostic customizable pipeline processing blocks.
39. pytorch-pretrained-BigGAN (Python, 986 stars): 🦋A PyTorch implementation of BigGAN with pretrained weights and conversion scripts.
40. torchMoji (Python, 880 stars): 😇A pyTorch implementation of the DeepMoji model: state-of-the-art deep learning model for analyzing sentiment, emotion, sarcasm etc
41. naacl_transfer_learning_tutorial (Python, 718 stars): Repository of code for the tutorial on Transfer Learning in NLP held at NAACL 2019 in Minneapolis, MN, USA
42. awesome-huggingface (698 stars): 🤗 A list of wonderful open-source projects & applications integrated with Hugging Face libraries.
43. optimum-nvidia (Python, 680 stars)
44. nanotron (Python, 661 stars): Minimalistic large language model 3D-parallelism training
45. dataset-viewer (Python, 614 stars): Lightweight web API for visualizing and exploring any dataset - computer vision, speech, text, and tabular - stored on the Hugging Face Hub
46. transformers-bloom-inference (Python, 546 stars): Fast Inference Solutions for BLOOM
47. pytorch_block_sparse (C++, 523 stars): Fast Block Sparse Matrices for Pytorch
48. exporters (Python, 518 stars): Export Hugging Face models to Core ML and TensorFlow Lite
49. llm.nvim (Lua, 507 stars): LLM powered development for Neovim
50. swift-transformers (Swift, 482 stars): Swift Package to implement a transformers-like API in Swift
51. node-question-answering (TypeScript, 459 stars): Fast and production-ready question answering in Node.js
52. large_language_model_training_playbook (Python, 431 stars): An open collection of implementation tips, tricks and resources for training large language models
53. llm-ls (Rust, 416 stars): LSP server leveraging LLMs for code completion (and more?)
54. llm_training_handbook (Python, 385 stars): An open collection of methodologies to help with successful training of large language models.
55. swift-chat (Swift, 375 stars): Mac app to demonstrate swift-transformers
56. tflite-android-transformers (Java, 368 stars): DistilBERT / GPT-2 for on-device inference thanks to TensorFlow Lite with Android demo apps
57. community-events (Jupyter Notebook, 368 stars): Place where folks can contribute to 🤗 community events
58. nn_pruning (Jupyter Notebook, 360 stars): Prune a model while finetuning or training.
59. text-clustering (Python, 335 stars): Easily embed, cluster and semantically label text datasets
60. speechbox (Python, 328 stars)
61. 100-times-faster-nlp (HTML, 325 stars): 🚀100 Times Faster Natural Language Processing in Python - iPython notebook
62. education-toolkit (Jupyter Notebook, 307 stars): Educational materials for universities
63. controlnet_aux (Python, 306 stars)
64. optimum-intel (Jupyter Notebook, 295 stars): 🤗 Optimum Intel: Accelerate inference with Intel optimization tools
65. datablations (Jupyter Notebook, 293 stars): Scaling Data-Constrained Language Models
66. unity-api (C#, 284 stars)
67. open-muse (Python, 284 stars): Open reproduction of MUSE for fast text2image generation.
68. audio-transformers-course (MDX, 247 stars): The Hugging Face Course on Transformers for Audio
69. hub-docs (221 stars): Docs of the Hugging Face Hub
70. lighteval (Python, 208 stars): LightEval is a lightweight LLM evaluation suite that Hugging Face has been using internally with the recently released LLM data processing library datatrove and LLM training library nanotron.
71. quanto (Python, 201 stars): A pytorch Quantization Toolkit
72. simulate (Python, 185 stars): 🎢 Creating and sharing simulation environments for embodied and synthetic data research
73. ratchet (Rust, 184 stars): A cross-platform browser ML framework.
74. optimum-benchmark (Python, 183 stars): A unified multi-backend utility for benchmarking Transformers, Timm, Diffusers and Sentence-Transformers with full support of Optimum's hardware optimizations & quantization schemes.
75. hf_transfer (Rust, 181 stars)
76. olm-datasets (Python, 169 stars): Pipeline for pulling and processing online language model pretraining data from the web
77. instruction-tuned-sd (Python, 167 stars): Code for instruction-tuning Stable Diffusion.
78. optimum-neuron (Jupyter Notebook, 163 stars): Easy, fast and very cheap training and inference on AWS Trainium and Inferentia chips.
79. llm-swarm (Python, 156 stars): Manage scalable open LLM inference endpoints in Slurm clusters
80. OBELICS (Python, 147 stars): Code used for the creation of OBELICS, an open, massive and curated collection of interleaved image-text web documents, containing 141M documents, 115B text tokens and 353M images.
81. workshops (Jupyter Notebook, 146 stars): Materials for workshops on the Hugging Face ecosystem
82. cosmopedia (Python, 138 stars)
83. api-inference-community (Python, 131 stars)
84. diffusion-fast (Python, 127 stars): Faster generation with text-to-image diffusion models.
85. diarizers (Python, 106 stars)
86. optimum-habana (Python, 106 stars): Easy and lightning fast training of 🤗 Transformers on Habana Gaudi processor (HPU)
87. sharp-transformers (C#, 104 stars): A Unity plugin for using Transformers models in Unity.
88. competitions (Python, 101 stars)
89. hf-hub (Rust, 93 stars): Rust client for the huggingface hub aiming for minimal subset of features over `huggingface-hub` python package
90. olm-training (Python, 87 stars): Repo for training MLMs, CLMs, or T5-type models on the OLM pretraining data, but it should work with any hugging face text dataset.
91. fuego (Python, 84 stars): [WIP] A 🔥 interface for running code in the cloud
92. tune (Python, 83 stars)
93. datasets-viewer (Python, 82 stars): Viewer for the 🤗 datasets library.
94. optimum-graphcore (Python, 78 stars): Blazing fast training of 🤗 Transformers on Graphcore IPUs
95. frp (Go, 73 stars): FRP Fork
96. paper-style-guide (71 stars)
97. block_movement_pruning (Python, 70 stars): Block Sparse movement pruning
98. amused (Python, 68 stars)
99. doc-builder (Python, 67 stars): The package used to build the documentation of our Hugging Face repos
100. data-measurements-tool (Python, 67 stars): Developing tools to automatically analyze datasets