• Stars: 1,654
• Rank: 27,720 (Top 0.6%)
• Language: Python
• License: MIT License
• Created: about 5 years ago
• Updated: about 1 year ago

Repository Details

πŸ¦„ State-of-the-Art Conversational AI with Transfer Learning

πŸ¦„ Building a State-of-the-Art Conversational AI with Transfer Learning

The present repo contains the code accompanying the blog post πŸ¦„ How to build a State-of-the-Art Conversational AI with Transfer Learning.

This codebase is clean and commented, with training and testing scripts that can be used to train a dialog agent leveraging transfer learning from the OpenAI GPT and GPT-2 Transformer language models.

The codebase can be used to reproduce the results of HuggingFace's entry in the NeurIPS 2018 dialog competition ConvAI2, which was state-of-the-art on the automatic metrics. The 3k+ lines of competition code were distilled into about 250 lines of training code, with distributed and FP16 options, to form the present repository.

The model can be trained in about one hour on an 8×V100 cloud instance (currently about $25), and a pre-trained model is also made available.

Installation

To install and use the training and inference scripts, please clone the repo and install the requirements:

git clone https://github.com/huggingface/transfer-learning-conv-ai
cd transfer-learning-conv-ai
pip install -r requirements.txt
python -m spacy download en

Installation with Docker

To install using Docker, please build the self-contained image:

docker build -t convai .

Note: Make sure your Docker setup allocates enough memory for building the container. Building with the default of 1.75 GB will fail due to the large PyTorch wheel.

You can then enter the image:

ip-192-168-22-157:transfer-learning-conv-ai loretoparisi$ docker run --rm -it convai bash
root@91e241bb823e:/# ls
Dockerfile  README.md  boot                  dev  home         lib    media  models  proc              root  sbin  sys  train.py  utils.py
LICENCE     bin        convai_evaluation.py  etc  interact.py  lib64  mnt    opt     requirements.txt  run   srv   tmp  usr       var

You can then run the interact.py script on the pretrained model:

python3 interact.py --model models/

Pretrained model

We make a pretrained and fine-tuned model available on our S3 here. The easiest way to download and use this model is simply to run the interact.py script; without any argument, it will automatically download and cache our model.

Using the training script

The training script can be used in single GPU or multi GPU settings:

python ./train.py  # Single GPU training
python -m torch.distributed.launch --nproc_per_node=8 ./train.py  # Training on 8 GPUs

The training script accepts several arguments to tweak the training:

| Argument | Type | Default value | Description |
| --- | --- | --- | --- |
| dataset_path | str | "" | Path or url of the dataset. If empty, download from S3. |
| dataset_cache | str | './dataset_cache.bin' | Path or url of the dataset cache |
| model | str | "openai-gpt" | Path, url or short name of the model |
| num_candidates | int | 2 | Number of candidates for training |
| max_history | int | 2 | Number of previous exchanges to keep in history |
| train_batch_size | int | 4 | Batch size for training |
| valid_batch_size | int | 4 | Batch size for validation |
| gradient_accumulation_steps | int | 8 | Accumulate gradients on several steps |
| lr | float | 6.25e-5 | Learning rate |
| lm_coef | float | 1.0 | LM loss coefficient |
| mc_coef | float | 1.0 | Multiple-choice loss coefficient |
| max_norm | float | 1.0 | Clipping gradient norm |
| n_epochs | int | 3 | Number of training epochs |
| personality_permutations | int | 1 | Number of permutations of personality sentences |
| device | str | "cuda" if torch.cuda.is_available() else "cpu" | Device (cuda or cpu) |
| fp16 | str | "" | Set to O0, O1, O2 or O3 for fp16 training (see apex documentation) |
| local_rank | int | -1 | Local rank for distributed training (-1: not distributed) |
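
To make the interaction between lm_coef, mc_coef, gradient_accumulation_steps and max_norm concrete, here is a minimal, self-contained sketch of a training step. It is illustrative only, not the actual train.py loop; the parameter and losses are placeholders standing in for the double-heads model outputs:

import torch

lm_coef, mc_coef = 1.0, 1.0
gradient_accumulation_steps, max_norm = 8, 1.0

# Dummy parameter and losses so the snippet runs on its own;
# in train.py these come from the double-heads model.
param = torch.nn.Parameter(torch.randn(10))
optimizer = torch.optim.Adam([param], lr=6.25e-5)

for step in range(16):  # pretend each iteration is one mini-batch
    lm_loss = (param ** 2).mean()        # placeholder for the LM loss
    mc_loss = (param ** 2).sum() * 0.01  # placeholder for the multiple-choice loss
    loss = (lm_coef * lm_loss + mc_coef * mc_loss) / gradient_accumulation_steps
    loss.backward()
    if (step + 1) % gradient_accumulation_steps == 0:
        torch.nn.utils.clip_grad_norm_([param], max_norm)  # gradient clipping
        optimizer.step()
        optimizer.zero_grad()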

Here is how to reproduce our results on a server with 8 V100 GPUs (adapt the number of nodes and batch sizes to your configuration):

python -m torch.distributed.launch --nproc_per_node=8 ./train.py --gradient_accumulation_steps=4 --lm_coef=2.0 --max_history=2 --n_epochs=1 --num_candidates=4 --personality_permutations=2 --train_batch_size=2 --valid_batch_size=2

This model should give a Hits@1 over 79, perplexity of 20.5 and F1 of 16.5 using the ConvAI2 evaluation script (see below).

These numbers are slightly lower than the numbers we obtained in the ConvAI2 competition. Here is what you can tweak to reach the same results:

  • in the ConvAI2 competition we also used tweaked position embeddings so that the history of the dialog always starts with the same embeddings. This is easy to add with pytorch-transformers and should improve the hits@1 metric (see the sketch after this list).
  • in the ConvAI2 competition we used a beam search decoder. While the results are better in terms of the F1 metric, our feeling is that the human experience is less compelling with beam search than with the nucleus sampling decoder provided in the present repository.
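
Here is a rough sketch of the position-embedding tweak mentioned in the first point: build position_ids by hand (the GPT/GPT-2 models in pytorch-transformers accept a position_ids argument) so that the dialog history always starts at the same position regardless of the persona length. The helper name and the fixed offset are hypothetical, not the exact code used in the competition:

import torch

def history_aligned_position_ids(persona_len, history_len, history_start=64):
    # Persona tokens keep their natural positions; history tokens are pinned
    # to a fixed starting offset so their position embeddings never shift.
    persona_positions = torch.arange(persona_len)
    history_positions = torch.arange(history_start, history_start + history_len)
    return torch.cat([persona_positions, history_positions]).unsqueeze(0)

position_ids = history_aligned_position_ids(persona_len=20, history_len=50)
# model(input_ids, position_ids=position_ids)  # pass alongside the input ids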

Using the interaction script

The training script saves all experiments and checkpoints in a sub-folder of the ./runs folder at the repository root, named with the timestamp of the experiment.

You can then use the interactive script to interact with the model simply by pointing to this folder.

Here is an example command line to run the interactive script:

python ./interact.py --model_checkpoint ./data/Apr17_13-31-38_thunder/  # run the interactive script with a training checkpoint
python ./interact.py  # run the interactive script with the finetuned model on our S3

The fine-tuned model gives FINAL Hits@1: 0.715.

The interactive script accepts a few arguments to tweak the decoding algorithm:

| Argument | Type | Default value | Description |
| --- | --- | --- | --- |
| dataset_path | str | "" | Path or url of the dataset. If empty, download from S3. |
| dataset_cache | str | './dataset_cache.bin' | Path or url of the dataset cache |
| model | str | "openai-gpt" | Path, url or short name of the model |
| max_history | int | 2 | Number of previous utterances to keep in history |
| device | str | "cuda" if torch.cuda.is_available() else "cpu" | Device (cuda or cpu) |
| no_sample | action | store_true | Set to use greedy decoding instead of sampling |
| max_length | int | 20 | Maximum length of the output utterances |
| min_length | int | 1 | Minimum length of the output utterances |
| seed | int | 42 | Seed |
| temperature | int | 0.7 | Sampling softmax temperature |
| top_k | int | 0 | Filter top-k tokens before sampling (<=0: no filtering) |
| top_p | float | 0.9 | Nucleus filtering (top-p) before sampling (<=0.0: no filtering) |
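
For intuition about how temperature, top_k and top_p interact, here is a minimal, self-contained sketch of top-k / nucleus (top-p) filtering before sampling. It is illustrative only; interact.py contains the actual filtering used by this repository:

import torch
import torch.nn.functional as F

def top_filtering(logits, top_k=0, top_p=0.9, filter_value=-float("inf")):
    # logits: 1-D tensor of next-token logits
    if top_k > 0:
        kth_best = torch.topk(logits, top_k).values[-1]
        logits[logits < kth_best] = filter_value  # drop everything outside the top-k
    if top_p > 0.0:
        sorted_logits, sorted_idx = torch.sort(logits, descending=True)
        cumulative = torch.cumsum(F.softmax(sorted_logits, dim=-1), dim=-1)
        to_remove = cumulative > top_p            # tokens past the nucleus
        to_remove[1:] = to_remove[:-1].clone()    # keep the first token crossing the threshold
        to_remove[0] = False
        logits[sorted_idx[to_remove]] = filter_value
    return logits

logits = torch.randn(50257) / 0.7  # divide the raw logits by the sampling temperature
probs = F.softmax(top_filtering(logits), dim=-1)
next_token = torch.multinomial(probs, num_samples=1)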

Running ConvAI2 evaluation scripts

To run the evaluation scripts of the ConvAI2 challenge, you first need to install ParlAI in the repo base folder like this:

git clone https://github.com/facebookresearch/ParlAI.git
cd ParlAI
python setup.py develop

You can then run the evaluation script from the ParlAI base folder:

cd ParlAI
python ../convai_evaluation.py --eval_type hits@1  # to download and evaluate our fine-tuned model on hits@1 metric
python ../convai_evaluation.py --eval_type hits@1  --model_checkpoint ./data/Apr17_13-31-38_thunder/  # to evaluate a training checkpoint on hits@1 metric

The evaluation script accepts a few arguments to select the evaluation metric and tweak the decoding algorithm:

| Argument | Type | Default value | Description |
| --- | --- | --- | --- |
| eval_type | str | "hits@1" | Evaluate the model on hits@1, ppl or f1 metric on the ConvAI2 validation dataset |
| model | str | "openai-gpt" | Path, url or short name of the model |
| max_history | int | 2 | Number of previous utterances to keep in history |
| device | str | "cuda" if torch.cuda.is_available() else "cpu" | Device (cuda or cpu) |
| no_sample | action | store_true | Set to use greedy decoding instead of sampling |
| max_length | int | 20 | Maximum length of the output utterances |
| min_length | int | 1 | Minimum length of the output utterances |
| seed | int | 42 | Seed |
| temperature | int | 0.7 | Sampling softmax temperature |
| top_k | int | 0 | Filter top-k tokens before sampling (<=0: no filtering) |
| top_p | float | 0.9 | Nucleus filtering (top-p) before sampling (<=0.0: no filtering) |

Data Format

See example_entry.py and the comment at the top of that file.
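
For orientation, here is a rough sketch of the expected entry structure (PERSONA-CHAT style), written as a Python literal; example_entry.py remains the authoritative reference and the details below are illustrative:

example_entry = {
    "personality": ["i like to ski .", "i hate mexican food ."],
    "utterances": [
        {
            "history": ["hello how are you today ?"],
            # distractor candidates first, the ground-truth reply last
            "candidates": ["i love the beach .", "great , i just got back from skiing ."],
        },
    ],
}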

Citation

If you use this code in your research, you can cite our NeurIPS CAI workshop paper:

@article{DBLP:journals/corr/abs-1901-08149,
  author    = {Thomas Wolf and
               Victor Sanh and
               Julien Chaumond and
               Clement Delangue},
  title     = {TransferTransfo: {A} Transfer Learning Approach for Neural Network
               Based Conversational Agents},
  journal   = {CoRR},
  volume    = {abs/1901.08149},
  year      = {2019},
  url       = {http://arxiv.org/abs/1901.08149},
  archivePrefix = {arXiv},
  eprint    = {1901.08149},
  timestamp = {Sat, 02 Feb 2019 16:56:00 +0100},
  biburl    = {https://dblp.org/rec/bib/journals/corr/abs-1901-08149},
  bibsource = {dblp computer science bibliography, https://dblp.org}
}

More Repositories

1. transformers (Python, 128,386 stars): πŸ€— Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX.
2. pytorch-image-models (Python, 28,073 stars): PyTorch image models, scripts, pretrained weights -- ResNet, ResNeXT, EfficientNet, NFNet, Vision Transformer (ViT), MobileNet-V3/V2, RegNet, DPN, CSPNet, Swin Transformer, MaxViT, CoAtNet, ConvNeXt, and more
3. diffusers (Python, 23,394 stars): πŸ€— Diffusers: State-of-the-art diffusion models for image and audio generation in PyTorch and FLAX.
4. datasets (Python, 17,530 stars): πŸ€— The largest hub of ready-to-use datasets for ML models with fast, easy-to-use and efficient data manipulation tools
5. peft (Python, 14,585 stars): πŸ€— PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.
6. candle (Rust, 14,110 stars): Minimalist ML framework for Rust
7. tokenizers (Rust, 8,645 stars): πŸ’₯ Fast State-of-the-Art Tokenizers optimized for Research and Production
8. trl (Python, 8,483 stars): Train transformer language models with reinforcement learning.
9. text-generation-inference (Python, 8,197 stars): Large Language Model Text Generation Inference
10. accelerate (Python, 7,306 stars): πŸš€ A simple way to launch, train, and use PyTorch models on almost any device and distributed configuration, automatic mixed precision (including fp8), and easy-to-configure FSDP and DeepSpeed support
11. chat-ui (TypeScript, 6,584 stars): Open source codebase powering the HuggingChat app
12. lerobot (Python, 4,284 stars): πŸ€— LeRobot: End-to-end Learning for Real-World Robotics in Pytorch
13. alignment-handbook (Python, 4,118 stars): Robust recipes to align language models with human and AI preferences
14. deep-rl-class (MDX, 3,680 stars): This repo contains the syllabus of the Hugging Face Deep Reinforcement Learning Course.
15. notebooks (Jupyter Notebook, 3,329 stars): Notebooks using the Hugging Face libraries πŸ€—
16. distil-whisper (Python, 3,286 stars): Distilled variant of Whisper for speech recognition. 6x faster, 50% smaller, within 1% word error rate.
17. autotrain-advanced (Python, 3,283 stars): πŸ€— AutoTrain Advanced
18. diffusion-models-class (Jupyter Notebook, 3,280 stars): Materials for the Hugging Face Diffusion Models Course
19. neuralcoref (C, 2,819 stars): ✨Fast Coreference Resolution in spaCy with Neural Networks
20. parler-tts (Python, 2,735 stars): Inference and training library for high-quality TTS models.
21. knockknock (Python, 2,682 stars): πŸšͺ✊Knock Knock: Get notified when your training ends with only two additional lines of code
22. safetensors (Python, 2,572 stars): Simple, safe way to store and distribute tensors
23. swift-coreml-diffusers (Swift, 2,406 stars): Swift app demonstrating Core ML Stable Diffusion
24. optimum (Python, 2,290 stars): πŸš€ Accelerate training and inference of πŸ€— Transformers and πŸ€— Diffusers with easy to use hardware optimization tools
25. text-embeddings-inference (Rust, 2,201 stars): A blazing fast inference solution for text embeddings models
26. blog (Jupyter Notebook, 2,136 stars): Public repo for HF blog posts
27. setfit (Jupyter Notebook, 2,060 stars): Efficient few-shot learning with Sentence Transformers
28. course (MDX, 2,005 stars): The Hugging Face course on Transformers
29. awesome-papers (1,996 stars): Papers & presentation materials from Hugging Face's internal science day
30. evaluate (Python, 1,825 stars): πŸ€— Evaluate: A library for easily evaluating machine learning models and datasets.
31. datatrove (Python, 1,657 stars): Freeing data processing from scripting madness by providing a set of platform-agnostic customizable pipeline processing blocks.
32. swift-coreml-transformers (Swift, 1,543 stars): Swift Core ML 3 implementations of GPT-2, DistilGPT-2, BERT, and DistilBERT for Question answering. Other Transformers coming soon!
33. pytorch-openai-transformer-lm (Python, 1,464 stars): πŸ₯A PyTorch implementation of OpenAI's finetuned transformer language model with a script to import the weights pre-trained by OpenAI
34. cookbook (Jupyter Notebook, 1,416 stars): Open-source AI cookbook
35. huggingface_hub (Python, 1,311 stars): All the open source things related to the Hugging Face Hub.
36. Mongoku (TypeScript, 1,300 stars): πŸ”₯The Web-scale GUI for MongoDB
37. huggingface.js (TypeScript, 1,277 stars): Utilities to use the Hugging Face Hub API
38. gsplat.js (TypeScript, 1,233 stars): JavaScript Gaussian Splatting library.
39. hmtl (Python, 1,185 stars): 🌊HMTL: Hierarchical Multi-Task Learning - A State-of-the-Art neural network model for several NLP tasks based on PyTorch and AllenNLP
40. llm-vscode (TypeScript, 1,160 stars): LLM powered development for VSCode
41. pytorch-pretrained-BigGAN (Python, 986 stars): πŸ¦‹A PyTorch implementation of BigGAN with pretrained weights and conversion scripts.
42. nanotron (Python, 897 stars): Minimalistic large language model 3D-parallelism training
43. torchMoji (Python, 880 stars): πŸ˜‡A pyTorch implementation of the DeepMoji model: state-of-the-art deep learning model for analyzing sentiment, emotion, sarcasm etc
44. optimum-nvidia (Python, 839 stars)
45. awesome-huggingface (821 stars): πŸ€— A list of wonderful open-source projects & applications integrated with Hugging Face libraries.
46. naacl_transfer_learning_tutorial (Python, 718 stars): Repository of code for the tutorial on Transfer Learning in NLP held at NAACL 2019 in Minneapolis, MN, USA
47. dataset-viewer (Python, 640 stars): Lightweight web API for visualizing and exploring any dataset - computer vision, speech, text, and tabular - stored on the Hugging Face Hub
48. optimum-quanto (Python, 620 stars): A pytorch quantization backend for optimum
49. llm.nvim (Lua, 607 stars): LLM powered development for Neovim
50. exporters (Python, 559 stars): Export Hugging Face models to Core ML and TensorFlow Lite
51. transformers-bloom-inference (Python, 551 stars): Fast Inference Solutions for BLOOM
52. swift-transformers (Swift, 530 stars): Swift Package to implement a transformers-like API in Swift
53. pytorch_block_sparse (C++, 523 stars): Fast Block Sparse Matrices for Pytorch
54. llm-ls (Rust, 513 stars): LSP server leveraging LLMs for code completion (and more?)
55. node-question-answering (TypeScript, 459 stars): Fast and production-ready question answering in Node.js
56. lighteval (Python, 442 stars): LightEval is a lightweight LLM evaluation suite that Hugging Face has been using internally with the recently released LLM data processing library datatrove and LLM training library nanotron.
57. large_language_model_training_playbook (Python, 441 stars): An open collection of implementation tips, tricks and resources for training large language models
58. ratchet (Rust, 424 stars): A cross-platform browser ML framework.
59. llm_training_handbook (Python, 416 stars): An open collection of methodologies to help with successful training of large language models.
60. swift-chat (Swift, 392 stars): Mac app to demonstrate swift-transformers
61. tflite-android-transformers (Java, 368 stars): DistilBERT / GPT-2 for on-device inference thanks to TensorFlow Lite with Android demo apps
62. community-events (Jupyter Notebook, 368 stars): Place where folks can contribute to πŸ€— community events
63. text-clustering (Python, 367 stars): Easily embed, cluster and semantically label text datasets
64. optimum-intel (Jupyter Notebook, 361 stars): πŸ€— Optimum Intel: Accelerate inference with Intel optimization tools
65. nn_pruning (Jupyter Notebook, 360 stars): Prune a model while finetuning or training.
66. speechbox (Python, 339 stars)
67. controlnet_aux (Python, 326 stars)
68. 100-times-faster-nlp (HTML, 325 stars): πŸš€100 Times Faster Natural Language Processing in Python - iPython notebook
69. education-toolkit (Jupyter Notebook, 320 stars): Educational materials for universities
70. unity-api (C#, 302 stars)
71. datablations (Jupyter Notebook, 296 stars): Scaling Data-Constrained Language Models
72. open-muse (Python, 293 stars): Open reproduction of MUSE for fast text2image generation.
73. cosmopedia (Python, 285 stars)
74. audio-transformers-course (MDX, 279 stars): The Hugging Face Course on Transformers for Audio
75. hf_transfer (Rust, 242 stars)
76. hub-docs (221 stars): Docs of the Hugging Face Hub
77. optimum-benchmark (Python, 217 stars): πŸ‹οΈ A unified multi-backend utility for benchmarking Transformers, Timm, PEFT, Diffusers and Sentence-Transformers with full support of Optimum's hardware optimizations & quantization schemes.
78. dataspeech (Python, 207 stars)
79. diarizers (Python, 206 stars)
80. simulate (Python, 185 stars): 🎒 Creating and sharing simulation environments for embodied and synthetic data research
81. instruction-tuned-sd (Python, 181 stars): Code for instruction-tuning Stable Diffusion.
82. optimum-neuron (Jupyter Notebook, 176 stars): Easy, fast and very cheap training and inference on AWS Trainium and Inferentia chips.
83. llm-swarm (Python, 176 stars): Manage scalable open LLM inference endpoints in Slurm clusters
84. OBELICS (Python, 170 stars): Code used for the creation of OBELICS, an open, massive and curated collection of interleaved image-text web documents, containing 141M documents, 115B text tokens and 353M images.
85. olm-datasets (Python, 170 stars): Pipeline for pulling and processing online language model pretraining data from the web
86. data-is-better-together (Jupyter Notebook, 162 stars): Let's build better datasets, together!
87. diffusion-fast (Python, 157 stars): Faster generation with text-to-image diffusion models.
88. workshops (Jupyter Notebook, 146 stars): Materials for workshops on the Hugging Face ecosystem
89. api-inference-community (Python, 145 stars)
90. jat (Python, 136 stars): Distributed online training of a general multi-task Deep RL Agent
91. chug (Python, 136 stars): Minimal sharded dataset loaders, decoders, and utils for multi-modal document, image, and text datasets.
92. sharp-transformers (C#, 129 stars): A Unity plugin for using Transformers models in Unity.
93. optimum-habana (Python, 114 stars): Easy and lightning fast training of πŸ€— Transformers on Habana Gaudi processor (HPU)
94. hf-hub (Rust, 109 stars): Rust client for the huggingface hub aiming for minimal subset of features over `huggingface-hub` python package
95. competitions (Python, 104 stars)
96. frp (Go, 102 stars): FRP Fork
97. coreml-examples (Swift, 98 stars): Swift Core ML Examples
98. olm-training (Python, 92 stars): Repo for training MLMs, CLMs, or T5-type models on the OLM pretraining data, but it should work with any hugging face text dataset.
99. fuego (Python, 85 stars): [WIP] A πŸ”₯ interface for running code in the cloud
100. tune (Python, 83 stars)