PetGPT

Train your own PetGPT at home!

Thanks to LLaMA and Alpaca-LoRA, you can train your own PetGPT at home. This repository offers a step-by-step guide in this README.

LLaMA (Large Language Model Meta AI)

This repository assumes that you have already found a way to download the checkpoints and tokenizer for LLaMA (e.g., by filling out this Google form). Create a subdirectory named LLaMA within this repository with a structure similar to the following tree.

LLaMA
β”œβ”€β”€ 13B
β”‚   β”œβ”€β”€ checklist.chk
β”‚   β”œβ”€β”€ consolidated.00.pth
β”‚   β”œβ”€β”€ consolidated.01.pth
β”‚   └── params.json
β”œβ”€β”€ 30B
β”‚   β”œβ”€β”€ checklist.chk
β”‚   β”œβ”€β”€ consolidated.00.pth
β”‚   β”œβ”€β”€ consolidated.01.pth
β”‚   β”œβ”€β”€ consolidated.02.pth
β”‚   β”œβ”€β”€ consolidated.03.pth
β”‚   └── params.json
β”œβ”€β”€ 65B
β”‚   β”œβ”€β”€ checklist.chk
β”‚   β”œβ”€β”€ consolidated.00.pth
β”‚   β”œβ”€β”€ consolidated.01.pth
β”‚   β”œβ”€β”€ consolidated.02.pth
β”‚   β”œβ”€β”€ consolidated.03.pth
β”‚   β”œβ”€β”€ consolidated.04.pth
β”‚   β”œβ”€β”€ consolidated.05.pth
β”‚   β”œβ”€β”€ consolidated.06.pth
β”‚   β”œβ”€β”€ consolidated.07.pth
β”‚   └── params.json
β”œβ”€β”€ 7B
β”‚   β”œβ”€β”€ checklist.chk
β”‚   β”œβ”€β”€ consolidated.00.pth
β”‚   └── params.json
β”œβ”€β”€ llama.sh
β”œβ”€β”€ tokenizer_checklist.chk
└── tokenizer.model

4 directories, 26 files
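Before going further, it may help to sanity-check this layout. Here is a minimal sketch in Python, assuming you run it from the repository root (the paths are illustrative):

from pathlib import Path

# Sanity-check the layout: each model size should contain params.json
# and at least one consolidated checkpoint shard.
root = Path("LLaMA")
for size in ["7B", "13B", "30B", "65B"]:
    d = root / size
    shards = sorted(d.glob("consolidated.*.pth"))
    assert (d / "params.json").exists(), f"missing params.json in {d}"
    assert shards, f"no checkpoint shards in {d}"
    print(size, len(shards), "shard(s)")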

Clone the LLaMA repository to make sure that everything works as expected.

git clone https://github.com/facebookresearch/llama.git

The LLaMA repository is already included here for reproducibility purposes in the folder named llama. You can now run the following commands. Note that --nproc_per_node must match the model-parallel size of the checkpoint: 1 for 7B, 2 for 13B, 4 for 30B, and 8 for 65B.

cd llama

torchrun --nproc_per_node 1 example.py --ckpt_dir ../LLaMA/7B --tokenizer_path ../LLaMA/tokenizer.model

Converting LLaMA to Hugging Face

Create an empty directory within this repository called LLaMA_HF. The following two scripts will then help you convert the LLaMA checkpoints and tokenizer to the Hugging Face format.

convert_llama_tokenizer_to_hf.ipynb
convert_llama_weights_to_hf.ipynb

These two scripts are simplified versions of convert_llama_weights_to_hf.py, written for pedagogical purposes.

This should result in a subdirectory (named LLaMA_HF) within this repository having a structure similar to the following tree.

LLaMA_HF
β”œβ”€β”€ config.json
β”œβ”€β”€ generation_config.json
β”œβ”€β”€ pytorch_model-00001-of-00002.bin
β”œβ”€β”€ pytorch_model-00002-of-00002.bin
β”œβ”€β”€ pytorch_model.bin.index.json
β”œβ”€β”€ special_tokens_map.json
β”œβ”€β”€ tokenizer_config.json
└── tokenizer.model
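To verify the conversion, you can try loading the converted checkpoint with the transformers library. A minimal sketch, assuming a transformers version with native LLaMA support (4.28 or later) and that you run it from the repository root:

from transformers import LlamaForCausalLM, LlamaTokenizer

# Load the converted tokenizer and weights from LLaMA_HF.
tokenizer = LlamaTokenizer.from_pretrained("LLaMA_HF")
model = LlamaForCausalLM.from_pretrained("LLaMA_HF")

print(model.config.hidden_size, model.config.num_hidden_layers)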

Exploratory Data Analysis

Clone the Alpaca-LoRA repository.

git clone https://github.com/tloen/alpaca-lora.git

The Alpaca-LoRA repository is already included here for reproducibility purposes in the folder named alpaca-lora. Within this folder, there is a file called alpaca_data_cleaned.json. This file contains a cleaned and curated version of the dataset used to train the original Alpaca. The following script will help you explore this dataset and build some intuition.

exploratory_data_analysis.ipynb
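For a quick look outside the notebook: alpaca_data_cleaned.json is a plain JSON list of records with instruction, input, and output fields. A minimal sketch, assuming you run it from the repository root:

import json

# Load the cleaned Alpaca dataset and print a few basic statistics.
with open("alpaca-lora/alpaca_data_cleaned.json") as f:
    data = json.load(f)

print(len(data), "examples")
print(sorted(data[0].keys()))  # expect ['input', 'instruction', 'output']
no_input = sum(1 for ex in data if not ex["input"])
print(no_input, "examples have an empty input field")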

Fine Tuning

You can now use the following commands to fine-tune the LLaMA model on the alpaca_data_cleaned.json dataset.

cd alpaca-lora

mkdir output

WORLD_SIZE=4 CUDA_VISIBLE_DEVICES=0,1,2,3 torchrun --nproc_per_node=4 --master_port=1234 finetune.py --base_model ../LLaMA_HF --data_path alpaca_data_cleaned.json --output_dir output

The command above assumes a machine with 4 GPUs; adjust WORLD_SIZE, CUDA_VISIBLE_DEVICES, and --nproc_per_node to match your own setup.

Once fine-tuning finishes, the model artifacts are written to a folder called output.

alpaca-lora/output/
β”œβ”€β”€ adapter_config.json
β”œβ”€β”€ adapter_model.bin
β”œβ”€β”€ checkpoint-1000
β”‚   β”œβ”€β”€ optimizer.pt
β”‚   β”œβ”€β”€ pytorch_model.bin
β”‚   β”œβ”€β”€ rng_state_0.pth
β”‚   β”œβ”€β”€ rng_state_1.pth
β”‚   β”œβ”€β”€ rng_state_2.pth
β”‚   β”œβ”€β”€ rng_state_3.pth
β”‚   β”œβ”€β”€ scaler.pt
β”‚   β”œβ”€β”€ scheduler.pt
β”‚   β”œβ”€β”€ trainer_state.json
β”‚   └── training_args.bin
β”œβ”€β”€ checkpoint-600
β”‚   β”œβ”€β”€ optimizer.pt
β”‚   β”œβ”€β”€ pytorch_model.bin
β”‚   β”œβ”€β”€ rng_state_0.pth
β”‚   β”œβ”€β”€ rng_state_1.pth
β”‚   β”œβ”€β”€ rng_state_2.pth
β”‚   β”œβ”€β”€ rng_state_3.pth
β”‚   β”œβ”€β”€ scaler.pt
β”‚   β”œβ”€β”€ scheduler.pt
β”‚   β”œβ”€β”€ trainer_state.json
β”‚   └── training_args.bin
└── checkpoint-800
    β”œβ”€β”€ optimizer.pt
    β”œβ”€β”€ pytorch_model.bin
    β”œβ”€β”€ rng_state_0.pth
    β”œβ”€β”€ rng_state_1.pth
    β”œβ”€β”€ rng_state_2.pth
    β”œβ”€β”€ rng_state_3.pth
    β”œβ”€β”€ scaler.pt
    β”œβ”€β”€ scheduler.pt
    β”œβ”€β”€ trainer_state.json
    └── training_args.bin

3 directories, 32 files
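The top-level adapter_config.json and adapter_model.bin are the LoRA artifacts needed for inference, while the checkpoint-* folders hold intermediate trainer state. One quick way to inspect the adapter configuration from the repository root (the key names below are standard for LoRA adapters, but treat them as an assumption for your peft version):

import json

# adapter_config.json records the LoRA hyperparameters used by finetune.py.
with open("alpaca-lora/output/adapter_config.json") as f:
    cfg = json.load(f)

print(cfg["r"], cfg["lora_alpha"])  # LoRA rank and scaling factor
print(cfg["target_modules"])        # modules the adapter was applied to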

Generating Text

Here is how you can interact with the fine-tuned model.

python generate.py --load_8bit --base_model ../LLaMA_HF --lora_weights output --share_gradio False

Gradio will serve the interface at http://0.0.0.0:7860.
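If you would rather query the fine-tuned model programmatically instead of through the Gradio UI, here is a minimal sketch, assuming transformers, peft, and accelerate are installed and paths match those above; the prompt mirrors the Alpaca template and the generation settings are illustrative:

import torch
from peft import PeftModel
from transformers import LlamaForCausalLM, LlamaTokenizer

# Load the base model in half precision and attach the LoRA adapter.
tokenizer = LlamaTokenizer.from_pretrained("LLaMA_HF")
model = LlamaForCausalLM.from_pretrained(
    "LLaMA_HF", torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(model, "alpaca-lora/output")
model.eval()

# Alpaca-style prompt for an instruction without an input field.
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nName three popular pets.\n\n"
    "### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))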
