  • Stars: 718
  • Rank: 62,852 (Top 2%)
  • Language: Python
  • License: MIT License
  • Created: over 5 years ago
  • Updated: almost 5 years ago


Repository Details

Code repository accompanying the NAACL 2019 tutorial on "Transfer Learning in Natural Language Processing".

The tutorial was given on June 2 at NAACL 2019 in Minneapolis, MN, USA, by Sebastian Ruder, Matthew Peters, Swabha Swayamdipta, and Thomas Wolf.

See the NAACL 2019 tutorials webpage for more information.

The slides for the tutorial can be found here: https://tinyurl.com/NAACLTransfer.

A Google Colab notebook with all the code for the tutorial can be found here: https://tinyurl.com/NAACLTransferColab.

The present repository can also be accessed with the following short URL: https://tinyurl.com/NAACLTransferCode

Abstract

The classic supervised machine learning paradigm is based on learning a single predictive model for a task, in isolation, using a single dataset. This approach requires a large number of training examples and performs best for well-defined, narrow tasks. Transfer learning refers to a set of methods that extend this approach by leveraging data from additional domains or tasks to train a model with better generalization properties.

Over the last two years, the field of Natural Language Processing (NLP) has witnessed the emergence of several transfer learning methods and architectures which significantly improved upon the state of the art on a wide range of NLP tasks.

These improvements together with the wide availability and ease of integration of these methods are reminiscent of the factors that led to the success of pretrained word embeddings and ImageNet pretraining in computer vision, and indicate that these methods will likely become a common tool in the NLP landscape as well as an important research direction.

We will present an overview of modern transfer learning methods in NLP: how models are pre-trained and what information the representations they learn capture, and we will review examples and case studies of how these models can be integrated and adapted in downstream NLP tasks.

Overview

This codebase presents, as simply and compactly as possible, a few of the major transfer learning techniques that have emerged over recent years. The code in this repository does not attempt to be state-of-the-art; however, effort has been made to achieve reasonable performance and, with some modifications, to be competitive with the current state of the art.

Special effort has been made to

  • ensure the present code can be used as easily as possible, in particular by hosting pretrained models and datasets;
  • keep the present codebase as compact and self-contained as possible to make it easy to manipulate and understand.

Currently the codebase comprises:

  • pretraining_model.py: a transformer model with a GPT-2-like architecture as the basic pretrained model;
  • pretraining_train.py: a pretraining script to train this model with a language modeling objective on a selection of large datasets (WikiText-103, SimpleBooks-92) using distributed training if available;
  • finetuning_model.py: several fine-tuning architectures built on the transformer model (e.g., with a classification head on top, or with adapters; see the sketch after this list);
  • finetuning_train.py: a script to fine-tune these architectures on a classification task (IMDb).
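
As an illustration of the adapter approach available in finetuning_model.py, here is a minimal, self-contained sketch (written for this overview, not taken from the repository's code): a small bottleneck network with a residual connection is inserted into the transformer blocks, and only the adapters and the classification head are trained while the pretrained weights stay frozen.

import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Bottleneck adapter: project down, apply a non-linearity, project back
    up, and add the result to the input (residual connection)."""
    def __init__(self, hidden_dim: int, bottleneck_dim: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_dim, bottleneck_dim)
        self.up = nn.Linear(bottleneck_dim, hidden_dim)

    def forward(self, hidden_states):
        return hidden_states + self.up(torch.relu(self.down(hidden_states)))

# During fine-tuning, the pretrained transformer is frozen and only the
# adapters (and the classification head) receive gradients, e.g.:
# for param in pretrained_transformer.parameters():  # hypothetical model object
#     param.requires_grad = False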

Installation

To use this codebase, simply clone the GitHub repository and install the requirements:

git clone https://github.com/huggingface/naacl_transfer_learning_tutorial
cd naacl_transfer_learning_tutorial
pip install -r requirements.txt

Pre-training

To pre-train the transformer, run the pretraining_train.py script like this:

python ./pretraining_train.py

or using distributed training like this (for an 8-GPU server):

python -m torch.distributed.launch --nproc_per_node 8 ./pretraining_train.py
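
torch.distributed.launch spawns one process per GPU and passes each process a --local_rank argument. A training script typically handles this along the following lines (a simplified sketch of the general pattern, not the repository's exact code; the nn.Linear is a stand-in for the actual model):

import argparse
import torch
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel

parser = argparse.ArgumentParser()
parser.add_argument("--local_rank", type=int, default=-1)  # set by the launcher
args = parser.parse_args()

if args.local_rank != -1:
    # One process per GPU: bind this process to its GPU and join the group.
    torch.cuda.set_device(args.local_rank)
    torch.distributed.init_process_group(backend="nccl")

model = nn.Linear(10, 10).to("cuda")  # stand-in for the transformer model
if args.local_rank != -1:
    model = DistributedDataParallel(model, device_ids=[args.local_rank])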

The pre-training script will:

  • download WikiText-103 for pre-training (the default),
  • instantiate a 50M-parameter transformer model and train it for 50 epochs,
  • log the experiments to TensorBoard in a folder under ./runs,
  • save checkpoints in the log folder.

Pre-training to a validation perplexity of ~29 on WikiText-103 takes about 15 hours on 8 V100 GPUs (training can be stopped earlier). If you are interested in the state of the art, note that this validation perplexity is somewhat higher than the roughly 24 reached by a comparable Transformer-XL. The main reason is the use of an open vocabulary (sub-words, via the BERT tokenizer) instead of a closed word-level vocabulary (see this blog post by Sebastian Mielke for an explanation).
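
For reference, perplexity is simply the exponential of the average per-token cross-entropy loss, so it can be computed directly from the validation loss. With an open (sub-word) vocabulary the tokens are sub-words, which is why the number is not directly comparable to a closed, word-level perplexity:

import math

mean_validation_loss = 3.37  # hypothetical value, in nats per token
perplexity = math.exp(mean_validation_loss)
print(f"validation perplexity: {perplexity:.1f}")  # ~29.1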

Various pre-training options are available; you can list them with:

python ./pretraining_train.py --help

Fine-tuning

To fine-tune the pre-trained transformer, run the finetuning_train.py script like this:

python ./finetuning_train.py --model_checkpoint PATH-TO-YOUR-PRETRAINED-MODEL-FOLDER

PATH-TO-YOUR-PRETRAINED-MODEL-FOLDER can be, for instance, ./runs/May17_17-47-12_my_big_server

or using distributed training like this (for an 8-GPU server):

python -m torch.distributed.launch --nproc_per_node 8 ./finetuning_train.py --model_checkpoint PATH-TO-YOUR-PRETRAINED-MODEL-FOLDER

Various fine-tuning options are available; you can list them with:

python ./finetuning_train.py --help
