

Easy and Efficient Finetuning of QLoRA LLMs. (Supported: LLaMA, LLaMA-2, BLOOM, Baichuan, GLM, Falcon.) Efficient quantized training and deployment of large models.
 


👋🤗🤗👋 Join our WeChat.

Efficient Finetuning of Quantized LLMs --- A low-resource solution for quantized training and deployment of large language models

中文 | English

This is the repo for the Efficient Finetuning of Quantized LLMs project, which aims to build and share instruction-following tuning methods for Chinese baichuan-7b/LLaMA/Pythia/GLM models. Models can be finetuned on a single Nvidia RTX-2080TI, and a multi-round chatbot can be trained on a single Nvidia RTX-3090 with a context length of 2048.

We use bitsandbytes for quantization, and the code is integrated with Hugging Face's PEFT and transformers libraries.

News

  • [23/07/20] Now we support training the LLaMA-2 models in this repo. Try the --model_name_or_path Llama-2-7b-hf argument to use a LLaMA-2 model.
  • [23/07/12] Now we support training the Baichuan-13B model in this repo. Try the --model_name_or_path path_to_baichuan_model and --lora_target W_pack arguments to train the Baichuan-13B model.
  • [23/07/03] Now we support training the Falcon-7B/40B models in this repo. Try the --model_name_or_path tiiuae/falcon-7b and --lora_target query_key_value arguments to use the Falcon models.
  • [23/06/25] We release the supervised finetuned baichuan-7B model (GaussianTech/baichuan-7b-sft) and the corresponding training script.
  • [23/06/24] We release the supervised finetuned llama-7B model (GaussianTech/llama-7b-sft) and the corresponding training script.
  • [23/06/15] Now we support training the baichuan-7B model in this repo. Try --model_name_or_path baichuan-inc/baichuan-7B to use the baichuan-7B model.
  • [23/06/03] Now we support quantized training and inference (aka QLoRA). Try scripts/qlora_finetune/finetune_llama_guanaco7b.sh and set the --bits 4/8 argument to work with quantized models.
  • [23/05/25] Now we support LoRA training and inference. Try scripts/lora_finetune/lora-finetune_alpaca.sh to finetune the LLaMA model with LoRA on the Alpaca dataset.
  • [23/05/20] Now we support full-parameter tuning and partial-parameter tuning. Try scripts/full_finetune/full-finetune_alpaca.sh to fully finetune the LLaMA model on the Alpaca dataset.

Supported Models

Supported Training Approaches

  • (Continually) pre-training
    • Full-parameter tuning
    • Partial-parameter tuning
    • LoRA
    • QLoRA
  • Supervised fine-tuning
    • Full-parameter tuning
    • Partial-parameter tuning
    • LoRA
    • QLoRA

Supported Datasets

As of now, we support the following datasets, most of which are available in the Hugging Face datasets library.

Please refer to data/README.md to learn how to use these datasets. If you want to explore more datasets, please refer to awesome-instruction-datasets. By default, we use the Stanford Alpaca dataset for training and evaluation.
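As a quick check that a dataset is usable before pointing the training scripts at it, you can load it with the datasets library. This is only a minimal sketch; the hub id tatsu-lab/alpaca is an assumption for the Stanford Alpaca mirror, so substitute whatever dataset you actually use.

# Minimal sketch: load an instruction dataset from the Hugging Face Hub and inspect one record.
from datasets import load_dataset

dataset = load_dataset('tatsu-lab/alpaca', split='train')  # hub id is an assumption
print(dataset[0])  # expect fields like 'instruction', 'input', 'output'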

Some datasets require confirmation before use, so we recommend logging in to your Hugging Face account with the following commands.

pip install --upgrade huggingface_hub
huggingface-cli login
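If you prefer to authenticate from Python instead of the CLI, the huggingface_hub library also exposes a login helper; the token below is a placeholder that you create at https://huggingface.co/settings/tokens.

# Programmatic alternative to `huggingface-cli login`.
from huggingface_hub import login

login(token='hf_xxx')  # placeholder token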

Data Preprocessing

We provide a number of data preprocessing tools in the data folder. These tools are intended to be a starting point for further research and development.

Model Zoo

We provide a number of models in the Hugging Face model hub. These models are trained with QLoRA and can be used for inference and finetuning. We provide the following models:

Base Model   Adapter       Instruct Datasets       Train Script         Log         Model on Huggingface
llama-7b     FullFinetune  -                       -                    -           -
llama-7b     QLoRA         openassistant-guanaco   finetune_lamma7b     wandb log   GaussianTech/llama-7b-sft
llama-7b     QLoRA         OL-CC                   finetune_lamma7b
baichuan7b   QLoRA         openassistant-guanaco   finetune_baichuan7b  wandb log   GaussianTech/baichuan-7b-sft
baichuan7b   QLoRA         OL-CC                   finetune_baichuan7b  wandb log   -
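A hedged sketch of how one of the released checkpoints could be loaded for inference. It assumes the hub id from the table ships full model weights and that the custom Baichuan modeling code requires trust_remote_code.

# Sketch: load a released SFT checkpoint from the Hugging Face Hub.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = 'GaussianTech/baichuan-7b-sft'  # from the table above
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map='auto',
    trust_remote_code=True,  # Baichuan uses custom modeling code
)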

Installation

Requirements

  • CUDA >= 11.0

  • Python 3.8+ and PyTorch 1.13.1+

  • 🤗Transformers, Datasets, Accelerate, PEFT and bitsandbytes

  • jieba, rouge_chinese and nltk (used for evaluation)

  • gradio (used in gradio_webserver.py)

Install required packages

To load models in 4 bits with transformers and bitsandbytes, you have to install accelerate and transformers from source and make sure you have the latest version of the bitsandbytes library (0.39.0 or newer). You can achieve the above with the following commands:

pip install -q -U bitsandbytes
pip install -q -U git+https://github.com/huggingface/transformers.git
pip install -q -U git+https://github.com/huggingface/peft.git
pip install -q -U git+https://github.com/huggingface/accelerate.git
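A quick way to confirm the stack is in place after installation. The version attributes are standard; only bitsandbytes has a hard minimum stated above.

# Sanity check: print the versions of the 4-bit training stack.
import accelerate, bitsandbytes, peft, transformers

print('bitsandbytes', bitsandbytes.__version__)  # needs >= 0.39.0 for 4-bit support
print('transformers', transformers.__version__)
print('peft', peft.__version__)
print('accelerate', accelerate.__version__)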

Clone the code

Clone this repository and navigate to the Efficient-Tuning-LLMs folder

git clone https://github.com/jianzhnie/Efficient-Tuning-LLMs.git
cd Efficient-Tuning-LLMs

Getting Started

main function    Usage                                                                     Scripts
train.py         Full finetune of LLMs on SFT datasets                                     full_finetune
train_lora.py    Finetune LLMs using LoRA (Low-Rank Adaptation of Large Language Models)   lora_finetune
train_qlora.py   Finetune LLMs using QLoRA (Efficient Finetuning of Quantized LLMs)        qlora_finetune

QLoRA int4 Finetune

The train_qlora.py code is a starting point for finetuning and inference on various datasets. Basic command for finetuning a baseline model on the Alpaca dataset:

python train_qlora.py --model_name_or_path <path_or_name>

For models larger than 13B, we recommend adjusting the learning rate:

python train_qlora.py --learning_rate 0.0001 --model_name_or_path <path_or_name>

We can also tweak our hyperparameters:

python train_qlora.py \
    --model_name_or_path ~/checkpoints/baichuan7b \
    --dataset_cfg ./data/alpaca_zh_pcyn.yaml \
    --output_dir ./work_dir/oasst1-baichuan-7b \
    --num_train_epochs 4 \
    --per_device_train_batch_size 4 \
    --per_device_eval_batch_size 4 \
    --gradient_accumulation_steps 8 \
    --evaluation_strategy steps \
    --eval_steps 50 \
    --save_strategy steps \
    --save_total_limit 5 \
    --save_steps 100 \
    --logging_strategy steps \
    --logging_steps 1 \
    --learning_rate 0.0002 \
    --warmup_ratio 0.03 \
    --weight_decay 0.0 \
    --lr_scheduler_type constant \
    --adam_beta2 0.999 \
    --max_grad_norm 0.3 \
    --max_new_tokens 32 \
    --source_max_len 512 \
    --target_max_len 512 \
    --lora_r 64 \
    --lora_alpha 16 \
    --lora_dropout 0.1 \
    --double_quant \
    --quant_type nf4 \
    --fp16 \
    --bits 4 \
    --gradient_checkpointing \
    --trust_remote_code \
    --do_train \
    --do_eval \
    --sample_generate \
    --data_seed 42 \
    --seed 0

To find more scripts for finetuning and inference, please refer to the scripts folder.

Quantization

Quantization parameters are controlled via the BitsAndBytesConfig (see the Hugging Face documentation) as follows:

  • Loading in 4 bits is activated through load_in_4bit
  • The datatype used for the linear layer computations is set with bnb_4bit_compute_dtype
  • Nested quantization is activated through bnb_4bit_use_double_quant
  • The datatype used for quantization is specified with bnb_4bit_quant_type. Note that there are two supported quantization datatypes: fp4 (four-bit float) and nf4 (normal four-bit float). The latter is theoretically optimal for normally distributed weights, so we recommend using nf4.

    # Load the base model in 4-bit NF4 with nested quantization (the options described above).
    import torch
    from transformers import AutoModelForCausalLM, BitsAndBytesConfig

    # Cap per-GPU memory so device_map='auto' can shard the model across available GPUs.
    max_memory = {i: '46000MB' for i in range(torch.cuda.device_count())}

    model = AutoModelForCausalLM.from_pretrained(
        '/name/or/path/to/your/model',
        load_in_4bit=True,
        device_map='auto',
        max_memory=max_memory,
        torch_dtype=torch.bfloat16,
        quantization_config=BitsAndBytesConfig(
            load_in_4bit=True,
            bnb_4bit_compute_dtype=torch.bfloat16,
            bnb_4bit_use_double_quant=True,
            bnb_4bit_quant_type='nf4',
        ),
    )
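Since the project is integrated with PEFT, the 4-bit model loaded above is then typically wrapped with a LoRA adapter before training. The sketch below is an assumption about that step rather than the exact code of train_qlora.py; target_modules is model specific (for example W_pack for Baichuan, as noted in the News section), and the LoRA hyperparameters mirror the --lora_r/--lora_alpha/--lora_dropout arguments shown earlier.

# Hedged sketch: attach a LoRA adapter to the 4-bit model for QLoRA training.
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model = prepare_model_for_kbit_training(model)  # cast norm layers, enable input grads for checkpointing
lora_config = LoraConfig(
    r=64,
    lora_alpha=16,
    lora_dropout=0.1,
    target_modules=['W_pack'],  # model specific; use the modules your base model exposes
    task_type='CAUSAL_LM',
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()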

Tutorials and Demonstrations

We provide two Google Colab notebooks to demonstrate the use of 4bit models in inference and fine-tuning. These notebooks are intended to be a starting point for further research and development.

  • Basic usage Google Colab notebook - This notebook shows how to use 4bit models in inference with all their variants, and how to run GPT-neo-X (a 20B parameter model) on a free Google Colab instance 🤯
  • Fine tuning Google Colab notebook - This notebook shows how to fine-tune a 4bit model on a downstream task using the Hugging Face ecosystem. We show that it is possible to fine tune GPT-neo-X 20B on a Google Colab instance!

Other examples are found under the examples/ folder.

  • Finetune LLama-7B (ex1)
  • Finetune GPT-neo-X 20B (ex2)

Using Local Datasets

You can specify the path to your dataset using the --dataset argument. If the --dataset_format argument is not set, it defaults to the Alpaca format. Here are a few examples (a sketch of the Alpaca record layout follows):

  • Training with an Alpaca-format dataset:
python train_qlora.py --dataset="path/to/your/dataset"
  • Training with a self-instruct-format dataset:
python train_qlora.py --dataset="path/to/your/dataset" --dataset_format="self-instruct"
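An Alpaca-format dataset is a JSON file whose records carry instruction, input, and output fields. The snippet below is only a sketch of such a file; the file name is arbitrary, and it assumes --dataset accepts a local JSON path as in the examples above.

# Sketch: write a tiny Alpaca-format dataset that --dataset can point to.
import json

records = [
    {
        'instruction': 'Translate the sentence to French.',
        'input': 'The weather is nice today.',
        'output': "Il fait beau aujourd'hui.",
    },
]
with open('my_alpaca_dataset.json', 'w', encoding='utf-8') as f:
    json.dump(records, f, ensure_ascii=False, indent=2)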

Multi GPU

Multi GPU training and inference work out of the box with Hugging Face's Accelerate. Note that the per_device_train_batch_size and per_device_eval_batch_size arguments are global batch sizes, unlike what their names suggest.

When loading a model for training or inference on multiple GPUs you should pass something like the following to AutoModelForCausalLM.from_pretrained():

import torch

device_map = "auto"
max_memory = {i: '46000MB' for i in range(torch.cuda.device_count())}
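Continuing from the snippet above, a hedged sketch of the loading call itself; the model path is a placeholder.

# Pass the device map and per-GPU memory budget defined above to from_pretrained.
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    '/name/or/path/to/your/model',  # placeholder
    device_map=device_map,
    max_memory=max_memory,
)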

Inference

Interactive chat in the terminal

Run the script below to chat with your ChatBot interactively on the command line: type an instruction and press Enter to generate a reply, type clear to clear the conversation history, and type stop to exit the program.

# --model_name_or_path points to the base model; --checkpoint_dir points to the trained adapter weights.
python cli_demo.py \
    --model_name_or_path ~/checkpoints/baichuan7b \
    --checkpoint_dir ./work_dir/checkpoint-700 \
    --trust_remote_code \
    --double_quant \
    --quant_type nf4 \
    --fp16 \
    --bits 4

Web-based interaction with Gradio

The gradio_webserver.py script reads the foundation model from the Hugging Face model hub and the LoRA weights from path/to/your/model_dir, then launches a Gradio interface for inference on a given input. Treat it as example code and modify it as needed.

Example usage:

python gradio_webserver.py \
    --model_name_or_path decapoda-research/llama-7b-hf \
    --lora_model_name_or_path path/to/your/model_dir
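For orientation, here is a hedged, self-contained sketch of what such a web demo does. It is not the repo's gradio_webserver.py; the base-model id and adapter directory are the placeholders from the example above.

# Sketch: load a base model plus LoRA adapter and expose a minimal Gradio text interface.
import gradio as gr
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = 'decapoda-research/llama-7b-hf'  # base model on the hub
adapter_dir = 'path/to/your/model_dir'     # trained LoRA weights

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16, device_map='auto')
model = PeftModel.from_pretrained(model, adapter_dir)

def generate(prompt: str) -> str:
    inputs = tokenizer(prompt, return_tensors='pt').to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=128)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)

gr.Interface(fn=generate, inputs='text', outputs='text').launch()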

Sample Outputs

We provide generations for the models described in the paper for both OA and Vicuna queries in the eval/generations folder. These are intended to foster further research on model evaluation and analysis.

Can you distinguish ChatGPT from Guanaco? Give it a try! You can access the model response Colab here comparing ChatGPT and Guanaco 65B on Vicuna prompts.

Known Issues and Limitations

Here is a list of known issues and bugs. If your issue is not reported here, please open a new issue and describe the problem.

  1. 4-bit inference is slow. Currently, our 4-bit inference implementation is not yet integrated with the 4-bit matrix multiplication.
  2. Resuming a LoRA training run with the Trainer currently results in an error.
  3. Currently, using bnb_4bit_compute_dtype='fp16' can lead to instabilities. For 7B LLaMA, only 80% of finetuning runs complete without error. We have solutions, but they are not yet integrated into bitsandbytes.
  4. Make sure that tokenizer.bos_token_id = 1 to avoid generation issues (a quick check follows this list).
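A minimal check for the last point, assuming a LLaMA-style tokenizer; the path is a placeholder.

# Verify the tokenizer's BOS token id after loading.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('/name/or/path/to/your/model')
assert tokenizer.bos_token_id == 1, f'unexpected bos_token_id: {tokenizer.bos_token_id}'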

License

Efficient Finetuning of Quantized LLMs is released under the Apache 2.0 license.

Acknowledgements

We thank the Hugging Face team, in particular Younes Belkada, for their support in integrating QLoRA with the PEFT and transformers libraries.

We also appreciate the work of many open-source contributors.

Citation

Please cite this repository if you use its data or code.

@misc{Chinese-Guanaco,
  author = {jianzhnie},
  title = {Chinese-Guanaco: Efficient Finetuning of Quantized LLMs for Chinese},
  year = {2023},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/jianzhnie/Efficient-Tuning-LLMs}},
}
