
LLaMA Efficient Tuning

Easy-to-use LLM fine-tuning framework (LLaMA, BLOOM, Mistral, Baichuan, Qwen, ChatGLM)


👋 Join our WeChat.

Changelog

[23/06/22] The demo API is now aligned with OpenAI's format, so you can plug the fine-tuned model into arbitrary ChatGPT-based applications.

[23/06/15] Now we support training the baichuan-7B model in this repo. Try the --model_name_or_path baichuan-inc/baichuan-7B and --lora_target W_pack arguments to use the baichuan-7B model.

[23/06/03] Now we support quantized training and inference (aka QLoRA). Try the --quantization_bit 4/8 argument to work with quantized models. (experimental feature)

[23/05/31] Now we support training the BLOOM & BLOOMZ models in this repo. Try the --model_name_or_path bigscience/bloomz-7b1-mt and --lora_target query_key_value arguments to use the BLOOMZ model.

Supported Models

This repo supports fine-tuning the LLaMA, BLOOM & BLOOMZ, and baichuan-7B models (see the changelog above and the License section below).

Supported Training Approaches

The supported approaches are (continually) pre-training, supervised fine-tuning, reward model training, and PPO training (RLHF), each of which can be combined with LoRA via --finetuning_type lora; see Getting Started below.

Provided Datasets

Please refer to data/README.md for details.

Some datasets require confirmation before use, so we recommend logging in to your Hugging Face account with the following commands.

pip install --upgrade huggingface_hub
huggingface-cli login
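If you prefer to do this from Python (for example, inside a notebook), the same login can be performed with the huggingface_hub package; this is simply an equivalent alternative to the CLI commands above.

from huggingface_hub import login

# Prompts for (or accepts) a Hugging Face access token and stores it locally,
# equivalent to running `huggingface-cli login` on the command line.
login()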

Requirements

  • Python 3.8+ and PyTorch 1.13.1+
  • 🤗Transformers, Datasets, Accelerate, PEFT and TRL
  • jieba, rouge_chinese and nltk (used for evaluation)
  • gradio and mdtex2html (used in web_demo.py)
  • uvicorn and fastapi (used in api_demo.py)

And powerful GPUs!

Getting Started

Data Preparation (optional)

Please refer to data/example_dataset for details about the format of the dataset files. You can either use a single .json file or a dataset loading script with multiple files to create a custom dataset.

Note: please update data/dataset_info.json to use your custom dataset. For the format of this file, please refer to data/README.md.
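As an illustration, the sketch below writes a minimal Alpaca-style .json dataset and registers it in data/dataset_info.json. The field names (instruction/input/output) and the dataset_info keys are assumptions modeled on data/example_dataset; consult data/README.md for the authoritative format.

import json

# A minimal Alpaca-style dataset with a single record (field names are assumptions;
# see data/README.md and data/example_dataset for the actual schema).
records = [
    {
        "instruction": "Summarize the following sentence.",
        "input": "LLaMA Efficient Tuning is an easy-to-use LLM fine-tuning framework.",
        "output": "An easy-to-use framework for fine-tuning LLMs.",
    }
]
with open("data/my_dataset.json", "w", encoding="utf-8") as f:
    json.dump(records, f, ensure_ascii=False, indent=2)

# Register the dataset so it can be selected with `--dataset my_dataset`
# (the exact keys are illustrative; check the existing entries in data/dataset_info.json).
with open("data/dataset_info.json", "r+", encoding="utf-8") as f:
    info = json.load(f)
    info["my_dataset"] = {"file_name": "my_dataset.json"}
    f.seek(0)
    json.dump(info, f, ensure_ascii=False, indent=2)
    f.truncate()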

Dependency Installation (optional)

git clone https://github.com/hiyouga/LLaMA-Efficient-Tuning.git
conda create -n llama_etuning python=3.10
conda activate llama_etuning
cd LLaMA-Efficient-Tuning
pip install -r requirements.txt

LLaMA Weights Preparation (optional)

  1. Download the weights of the LLaMA models.
  2. Convert them to HF format using the following command.
python -m transformers.models.llama.convert_llama_weights_to_hf \
    --input_dir path_to_llama_weights --model_size 7B --output_dir path_to_llama_model

(Continually) Pre-Training

CUDA_VISIBLE_DEVICES=0 python src/train_pt.py \
    --model_name_or_path path_to_your_model \
    --do_train \
    --dataset wiki_demo \
    --finetuning_type lora \
    --output_dir path_to_pt_checkpoint \
    --overwrite_cache \
    --per_device_train_batch_size 4 \
    --gradient_accumulation_steps 4 \
    --lr_scheduler_type cosine \
    --logging_steps 10 \
    --save_steps 1000 \
    --learning_rate 5e-5 \
    --num_train_epochs 3.0 \
    --plot_loss \
    --fp16

Supervised Fine-Tuning

CUDA_VISIBLE_DEVICES=0 python src/train_sft.py \
    --model_name_or_path path_to_your_model \
    --do_train \
    --dataset alpaca_gpt4_en \
    --finetuning_type lora \
    --output_dir path_to_sft_checkpoint \
    --overwrite_cache \
    --per_device_train_batch_size 4 \
    --gradient_accumulation_steps 4 \
    --lr_scheduler_type cosine \
    --logging_steps 10 \
    --save_steps 1000 \
    --learning_rate 5e-5 \
    --num_train_epochs 3.0 \
    --plot_loss \
    --fp16

Reward Model Training

CUDA_VISIBLE_DEVICES=0 python src/train_rm.py \
    --model_name_or_path path_to_your_model \
    --do_train \
    --dataset comparison_gpt4_en \
    --finetuning_type lora \
    --output_dir path_to_rm_checkpoint \
    --per_device_train_batch_size 4 \
    --gradient_accumulation_steps 4 \
    --lr_scheduler_type cosine \
    --logging_steps 10 \
    --save_steps 1000 \
    --learning_rate 1e-5 \
    --num_train_epochs 1.0 \
    --plot_loss \
    --fp16

PPO Training (RLHF)

CUDA_VISIBLE_DEVICES=0 python src/train_ppo.py \
    --model_name_or_path path_to_your_model \
    --do_train \
    --dataset alpaca_gpt4_en \
    --finetuning_type lora \
    --checkpoint_dir path_to_sft_checkpoint \
    --reward_model path_to_rm_checkpoint \
    --output_dir path_to_ppo_checkpoint \
    --per_device_train_batch_size 2 \
    --gradient_accumulation_steps 4 \
    --lr_scheduler_type cosine \
    --logging_steps 10 \
    --save_steps 1000 \
    --learning_rate 1e-5 \
    --num_train_epochs 1.0 \
    --resume_lora_training False \
    --plot_loss

Distributed Training

accelerate config # configure the environment
accelerate launch src/train_XX.py # arguments (same as above)

Evaluation (BLEU and ROUGE_CHINESE)

CUDA_VISIBLE_DEVICES=0 python src/train_sft.py \
    --model_name_or_path path_to_your_model \
    --do_eval \
    --dataset alpaca_gpt4_en \
    --checkpoint_dir path_to_checkpoint \
    --output_dir path_to_eval_result \
    --per_device_eval_batch_size 8 \
    --max_samples 50 \
    --predict_with_generate

We recommend using --per_device_eval_batch_size=1 and --max_target_length 128 for 4/8-bit evaluation.

API / CLI / Web Demo

Replace xxx_demo.py below with the desired demo script (api_demo.py, cli_demo.py, or web_demo.py).

python src/xxx_demo.py \
    --model_name_or_path path_to_your_model \
    --checkpoint_dir path_to_checkpoint
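Because the demo API follows OpenAI's format (see the changelog above), a model served by api_demo.py can be queried like an OpenAI chat endpoint. The sketch below assumes the server is reachable at http://localhost:8000 and exposes /v1/chat/completions; the host, port, and model name are placeholders, so adjust them to your deployment.

import requests

# Hypothetical local endpoint exposed by api_demo.py (OpenAI-compatible format).
url = "http://localhost:8000/v1/chat/completions"
payload = {
    "model": "llama-efficient-tuning",  # placeholder model name
    "messages": [{"role": "user", "content": "Hello, who are you?"}],
}
response = requests.post(url, json=payload, timeout=60)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])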

Export model

python src/export_model.py \
    --model_name_or_path path_to_your_model \
    --checkpoint_dir path_to_checkpoint \
    --output_dir path_to_export
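The exported directory is expected to contain a merged, standalone checkpoint, so it should load with the standard Transformers API. The snippet below is a minimal sketch under that assumption; the path and generation settings are placeholders.

from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the exported (merged) model like any other Hugging Face checkpoint.
tokenizer = AutoTokenizer.from_pretrained("path_to_export")
model = AutoModelForCausalLM.from_pretrained("path_to_export")

inputs = tokenizer("Hello, who are you?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))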

License

This repository is licensed under the Apache-2.0 License.

Please follow the Model Card to use the LLaMA models.

Please follow the RAIL License to use the BLOOM & BLOOMZ models.

Please follow the baichuan-7B License to use the baichuan-7B model.

Citation

If this work is helpful, please cite as:

@Misc{llama-efficient-tuning,
  title = {LLaMA Efficient Tuning},
  author = {hiyouga},
  howpublished = {\url{https://github.com/hiyouga/LLaMA-Efficient-Tuning}},
  year = {2023}
}

Acknowledgement

This repo is a sibling of ChatGLM-Efficient-Tuning. They share a similar code structure for efficient tuning of large language models.

More Repositories

1. ChatGLM-Efficient-Tuning: Fine-tuning ChatGLM-6B with PEFT | Efficient ChatGLM fine-tuning based on PEFT (Python, 3,349 stars)
2. Dual-Contrastive-Learning: Code for our paper "Dual Contrastive Learning: Text Classification via Label-Aware Data Augmentation" (Python, 111 stars)
3. PBAN-PyTorch: A Position-aware Bidirectional Attention Network for Aspect-level Sentiment Analysis, PyTorch implementation (Python, 35 stars)
4. AMP-Regularizer: Code for our paper "Regularizing Neural Networks via Adversarial Model Perturbation", CVPR 2021 (Python, 31 stars)
5. FastEdit: ⚡🩹 Editing large language models within 10 seconds (Python, 31 stars)
6. RepWalk: Code and dataset for our paper "Replicate, Walk, and Stop on Syntax: an Effective Neural Network Model for Aspect-Level Sentiment Classification", AAAI 2020 (Python, 25 stars)
7. AMP-Poster-Slides-LaTeX: LaTeX poster and slides for AMP (CVPR 2021) (TeX, 17 stars)
8. ChatNVL-Towards-Visual-Novel-ChatBot (Python, 16 stars)
9. HuaweiCup2021-MCM-ProblemE: National First Prize solution for Problem E of the 18th China Postgraduate Mathematical Contest in Modeling (Huawei Cup), 2021 (Python, 16 stars)
10. bilibili-parse: HTML5 playback, live streaming, download & API for bilibili videos (pending fixes) (PHP, 15 stars)
11. Image-Segmentation-PyTorch: U-Net for image segmentation, PyTorch implementation (Python, 13 stars)
12. cryptography-experiment: BUAA CST Spring 2019 Cryptography Experiment (Python, 9 stars)
13. buaa-counselor-order: WeChat mini-program for booking counselor appointments (JavaScript, 7 stars)
14. BiLSTM-CRF-PyTorch-demo: A simple baseline model for Named Entity Recognition (Python, 7 stars)
15. SAGAN-PyTorch: A PyTorch implementation of Self-Attention Generative Adversarial Networks (Python, 5 stars)
16. hiyouga-blog-project: Work in progress... (TypeScript, 5 stars)
17. Visual-Novel-Music: Visual novel music library (discontinued) (PHP, 5 stars)
18. LLaMA-QQ-Chatbot: A QQ chatbot using the OpenAI API (JavaScript, 4 stars)
19. Musicbox-for-web: A simple music player used in the forum (PHP, 3 stars)
20. Toxic_Detection: BUAA SCSE Autumn 2021 Machine Learning Group Homework (Python, 3 stars)
21. Java-Network-Capturer: BUAA CST Autumn 2018 Java Programming Course Design (Java, 3 stars)
22. database-experiment: BUAA CST Autumn 2019 Database Experiment (JavaScript, 2 stars)
23. Cuisine_Prediction: BUAA SCSE Autumn 2021 Machine Learning Personal Homework (Python, 2 stars)
24. digiC-experiment: BUAA CST Autumn 2018 Digital Circuit Experiment (Verilog, 1 star)
25. Survey-readme-template: How to write a pretty readme for your survey (1 star)
26. PY-Learning: Learning and practice code (Python, 1 star)
27. information-theory-experiment: BUAA CST Spring 2019 Information Theory Experiment (Python, 1 star)
28. hiyouga (1 star)
29. yukidou-wechat: WeChat official account interface for the Yukidou localization group (PHP, 1 star)
30. getchu-proxy: Getchu game information scraper used in the forum (PHP, 1 star)
31. Papercode-readme-template: How to write a pretty readme for your paper's code (1 star)