  • Stars: 592
  • Rank: 73,168 (Top 2%)
  • Language: Python
  • License: MIT License
  • Created: 9 months ago
  • Updated: 2 months ago


Repository Details

[ICLR2024 spotlight] OmniQuant is a simple and powerful quantization technique for LLMs.

OmniQuant: Omnidirectionally Calibrated Quantization for Large Language Models

OmniQuant is a simple and powerful quantization technique for LLMs. The current release supports:

  • OmniQuant algorithm for accurate weight-only quantization (W4A16/W3A16/W2A16) and weight-activation quantization (W6A6, W4A4); the WxAy(gN) notation is illustrated in the sketch below.
  • Pre-trained OmniQuant model zoo for LLMs (LLaMA-1&2, LLaMA-2-Chat, OPT, Falcon), which can be loaded to generate quantized weights.
  • An out-of-the-box case that leverages MLC-LLM to run LLaMA-2-Chat (7B/13B) with W3A16g128 quantization on GPUs and mobile phones.
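
For readers new to the notation: WxAy denotes x-bit weights and y-bit activations, and a trailing gN denotes group-wise weight quantization with group size N along the input dimension. Below is a minimal, hypothetical PyTorch sketch of plain asymmetric group-wise weight fake-quantization; it only illustrates the notation and deliberately omits OmniQuant's learnable components (LWC and LET), so it is not the repository's quantizer.

```python
import torch

def fake_quant_weight_groupwise(w: torch.Tensor, n_bits: int = 3, group_size: int = 128) -> torch.Tensor:
    """Simulated (quantize-then-dequantize) asymmetric group-wise weight quantization.

    Illustration of the WxA16gN notation only; OmniQuant additionally learns
    clipping factors (LWC) and equivalent transformations (LET).
    """
    out_features, in_features = w.shape
    assert in_features % group_size == 0
    wg = w.reshape(out_features, in_features // group_size, group_size)

    w_max = wg.amax(dim=-1, keepdim=True)
    w_min = wg.amin(dim=-1, keepdim=True)
    scale = (w_max - w_min).clamp(min=1e-5) / (2 ** n_bits - 1)
    zero_point = (-w_min / scale).round()

    q = (wg / scale + zero_point).round().clamp(0, 2 ** n_bits - 1)
    w_dq = (q - zero_point) * scale  # dequantize back to float for simulation
    return w_dq.reshape(out_features, in_features)

# Example: "W3A16g128" keeps activations in fp16 and quantizes weights to
# 3 bits with a group size of 128 along the input dimension.
w = torch.randn(4096, 4096)
w_q = fake_quant_weight_groupwise(w, n_bits=3, group_size=128)
```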

News

  • [2023/09] 🔥 We have expanded support for Falcon. OmniQuant efficiently compresses Falcon-180B from 335G to 65G with minimal performance loss. Furthermore, this compression allows Falcon-180B inference on a single A100 80GB GPU. For details, refer to runing_falcon180b_on_single_a100_80g.

Contents

Install

conda create -n omniquant python=3.10 -y
conda activate omniquant
git clone https://github.com/OpenGVLab/OmniQuant.git
cd OmniQuant
pip install --upgrade pip 
pip install -e .

We also leverage the kernel from AutoGPTQ to achieve real quantization, so you should also install the bug-fixed AutoGPTQ as follows:

git clone https://github.com/ChenMnZ/AutoGPTQ-bugfix
cd AutoGPTQ-bugfix
pip install -v .

OmniQuant Model Zoo

We provide a pre-trained OmniQuant model zoo for multiple model families, including LLaMA-1&2, LLaMA-2-Chat, and OPT.

You can download the pre-trained OmniQuant parameters you need from Hugging Face.
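
If you prefer a Python workflow over git-lfs, a minimal sketch using the huggingface_hub client is shown below. The repo_id is a placeholder rather than an actual repository name; substitute the checkpoint repository you find on the OmniQuant Hugging Face page.

```python
from huggingface_hub import snapshot_download

# Placeholder repo id: replace it with the actual OmniQuant checkpoint
# repository listed on the Hugging Face page referenced above.
ckpt_dir = snapshot_download(repo_id="<omniquant-checkpoint-repo>")
print(ckpt_dir)  # pass this directory to --resume when evaluating (see Usage)
```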

The detailed support list:

W2A16 / W2A16g128 / W2A16g64 / W3A16:

| Models | Sizes |
| --- | --- |
| LLaMA | 7B/13B/30B/65B |
| LLaMA-2 | 7B/13B/70B |
| OPT | 125m/1.3B/2.7B/6.7B/13B/30B/66B |

W3A16g128 / W4A16 / W4A16g128 / W6A6 / W4A4:

| Models | Sizes |
| --- | --- |
| LLaMA | 7B/13B/30B/65B |
| LLaMA-2 | 7B/13B/70B |
| OPT | 125m/1.3B/2.7B/6.7B/13B/30B/66B |
| LLaMA-2-Chat | 7B/13B |

Usage

We provide full scripts to run OmniQuant in ./scripts/. We use LLaMA-7B as an example here:

  1. Obtain the channel-wise scales and shifts required for initialization:
conda install git git-lfs
git lfs install
git clone https://huggingface.co/ChenMnZ/act_shifts
git clone https://huggingface.co/ChenMnZ/act_scales

Optionally, you can also generate the channel-wise scales and shifts yourself with the provided script (a conceptual sketch of what it computes follows the command):

python generate_act_scale_shift.py --model /PATH/TO/LLaMA/llama-7b
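
As a rough mental model of what generate_act_scale_shift.py computes (a hypothetical sketch, not the script's actual code): per-input-channel activation statistics are collected for each Linear layer with forward hooks over a few calibration batches. The "scales" capture channel magnitudes and the "shifts" capture channel offsets, which later initialize the Learnable Equivalent Transformation.

```python
import torch
import torch.nn as nn

@torch.no_grad()
def collect_act_stats(model: nn.Module, calib_batches, device: str = "cuda"):
    """Collect per-input-channel activation statistics for every Linear layer.

    Conceptual sketch only; the repository's exact statistics may differ.
    """
    stats = {}

    def make_hook(name):
        def hook(_module, inputs, _output):
            x = inputs[0].detach().flatten(0, -2)  # (tokens, channels)
            amax = x.abs().amax(dim=0).float()     # per-channel magnitude ("scale")
            mean = x.mean(dim=0).float()           # per-channel offset ("shift")
            if name not in stats:
                stats[name] = {"scale": amax, "shift": mean}
            else:
                stats[name]["scale"] = torch.maximum(stats[name]["scale"], amax)
                stats[name]["shift"] = 0.5 * (stats[name]["shift"] + mean)  # crude running mean
        return hook

    handles = [m.register_forward_hook(make_hook(n))
               for n, m in model.named_modules() if isinstance(m, nn.Linear)]
    for batch in calib_batches:  # e.g. a small number of calibration samples
        model(batch.to(device))
    for h in handles:
        h.remove()
    return stats
```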
  2. Weight-only quantization
# W3A16
CUDA_VISIBLE_DEVICES=0 python main.py \
--model /PATH/TO/LLaMA/llama-7b  \
--epochs 20 --output_dir ./log/llama-7b-w3a16 \
--eval_ppl --wbits 3 --abits 16 --lwc

# W3A16g128
CUDA_VISIBLE_DEVICES=0 python main.py \
--model /PATH/TO/LLaMA/llama-7b  \
--epochs 20 --output_dir ./log/llama-7b-w3a16g128 \
--eval_ppl --wbits 3 --abits 16 --group_size 128 --lwc
  3. Weight-activation quantization (a short activation-quantization sketch follows this walkthrough)
# W4A4
CUDA_VISIBLE_DEVICES=0 python main.py \
--model /PATH/TO/LLaMA/llama-7b  \
--epochs 20 --output_dir ./log/llama-7b-w4a4 \
--eval_ppl --wbits 4 --abits 4 --lwc --let \
--tasks piqa,arc_easy,arc_challenge,boolq,hellaswag,winogrande
  4. Reproduce the evaluation results of our paper

    1) Download the pre-trained OmniQuant parameters you want from Hugging Face.

    2) Set epochs to 0 and run inference with --resume, taking LLaMA-7B with W3A16g128 quantization as an example:

CUDA_VISIBLE_DEVICES=0 python main.py \
--model /PATH/TO/LLaMA/llama-7b  \
--epochs 0 --output_dir ./log/test \
--eval_ppl --wbits 3 --abits 16 --group_size 128 --lwc \
--resume /PATH/TO/Pretrained/Parameters 
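
To make step 3 above concrete, here is a minimal, hypothetical sketch of what the activation side of W4A4 means: activations are fake-quantized on the fly, in this sketch with one asymmetric scale and zero-point per token. This only illustrates the setting; the repository's actual quantizer, granularity, and symmetry choices may differ.

```python
import torch

def fake_quant_activation_per_token(x: torch.Tensor, n_bits: int = 4) -> torch.Tensor:
    """Simulated per-token asymmetric activation quantization (the "A4" in W4A4)."""
    levels = 2 ** n_bits - 1
    x_max = x.amax(dim=-1, keepdim=True)
    x_min = x.amin(dim=-1, keepdim=True)
    scale = (x_max - x_min).clamp(min=1e-5) / levels
    zero_point = (-x_min / scale).round()
    q = (x / scale + zero_point).round().clamp(0, levels)
    return (q - zero_point) * scale  # dequantize back to float for simulation

# e.g. hidden states of shape (batch, seq_len, hidden_dim)
x = torch.randn(1, 8, 4096)
x_q = fake_quant_activation_per_token(x, n_bits=4)
```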

More detailed and optional arguments:

  • --model: the local model path or Hugging Face model name.
  • --wbits: weight quantization bits.
  • --abits: activation quantization bits.
  • --group_size: group size for weight quantization. If not set, per-channel weight quantization is used by default.
  • --lwc: activate Learnable Weight Clipping (LWC); see the sketch after this list.
  • --let: activate the Learnable Equivalent Transformation (LET).
  • --lwc_lr: learning rate of the LWC parameters, 1e-2 by default.
  • --let_lr: learning rate of the LET parameters, 5e-3 by default.
  • --epochs: number of training epochs. Set it to 0 to evaluate pre-trained OmniQuant checkpoints.
  • --nsamples: number of calibration samples, 128 by default.
  • --eval_ppl: evaluate the perplexity of quantized models.
  • --tasks: evaluate zero-shot tasks.
  • --resume: load pre-trained OmniQuant parameters.
  • --multigpu: run inference of larger models on multiple GPUs.
  • --real_quant: perform real quantization, which reduces memory usage.
  • --save_dir: directory in which to save the quantized model for further exploration.
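
For intuition on the --lwc flag: per the paper, Learnable Weight Clipping learns two factors (passed through a sigmoid) that shrink the maximum and minimum of the weights before uniform asymmetric quantization, and only these factors are optimized block by block. The sketch below is a simplified, hypothetical rendering of that idea in PyTorch (a single scalar pair instead of per-group parameters), not the repository's implementation. --let additionally learns channel-wise scales and shifts that move quantization difficulty from activations into weights.

```python
import torch
import torch.nn as nn

class LWCQuantizer(nn.Module):
    """Hedged sketch of Learnable Weight Clipping (the --lwc option).

    gamma, beta in (0, 1) shrink the weight max/min before uniform asymmetric
    quantization; only these factors receive gradients, via the
    straight-through estimator for round(). In the repository the factors are
    finer-grained (e.g. per group); a single scalar pair is used here for brevity.
    """
    def __init__(self, n_bits: int = 3):
        super().__init__()
        self.n_levels = 2 ** n_bits - 1
        self.gamma_logit = nn.Parameter(torch.zeros(1))  # sigmoid -> 0.5; the repo may initialize differently
        self.beta_logit = nn.Parameter(torch.zeros(1))

    def forward(self, w: torch.Tensor) -> torch.Tensor:
        gamma = torch.sigmoid(self.gamma_logit)
        beta = torch.sigmoid(self.beta_logit)
        w_max = gamma * w.max()
        w_min = beta * w.min()
        step = (w_max - w_min).clamp(min=1e-5) / self.n_levels
        zero_point = (-w_min / step).round()
        q = w / step + zero_point
        q = (q.round() - q).detach() + q          # straight-through estimator
        q = q.clamp(0, self.n_levels)
        return (q - zero_point) * step            # dequantized weights for the float forward pass
```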

Running Quantized Models with MLC-LLM

MLC-LLM offers a universal deployment solution suitable for various language models across a wide range of hardware backends, encompassing iPhones, Android phones, and GPUs from NVIDIA, AMD, and Intel.

We compile OmniQuant's quantized models through MLC-LLM and offer an out-of-the-box case here. You can observe smaller GPU memory usage and an inference speedup. Detailed instructions can be found in runing_quantized_models_with_mlc_llm.ipynb.

In particular, we also deploy the aforementioned quantized models onto mobile phones through MLC-LLM. You can download the Android app by simply clicking the button below:

This app includes three models: LLaMa-2-7B-Chat-Omniquant-W3A16g128asym, LLaMa-2-13B-Chat-Omniquant-W3A16g128asym, and LLaMa-2-13B-Chat-Omniquant-W2A16g128asym. They require at least 4.5G, 7.5G, and 6.0G of free RAM, respectively. Note that 2-bit quantization performs worse than 3-bit quantization, as shown in our paper; the 2-bit model is included only as an extreme exploration of deploying LLMs on mobile phones. Currently, the app is in its demo phase and may respond slowly, so please wait patiently for responses to be generated. We have tested the app on a Redmi Note 12 Turbo (Snapdragon 7+ Gen 2 and 16G RAM); some examples are provided below:

  • LLaMa-2-7B-Chat-Omniquant-W3A16g128asym
  • LLaMa-2-13B-Chat-Omniquant-W3A16g128asym
  • LLaMa-2-13B-Chat-Omniquant-W2A16g128asym

We have also tested this app on an iPhone 14 Pro (A16 Bionic and 6G RAM); some examples are provided below:

  • LLaMa-2-7B-Chat-Omniquant-W3A16g128asym

Results

  • OmniQuant achieves SoTA performance in weight-only quantization.
  • OmniQuant achieves SoTA performance in weight-activation quantization.
  • OmniQuant generalizes well, also obtaining excellent performance on instruction-tuned models under GPT-4 evaluation.
  • MLC-LLM delivers real speedup and memory savings for W4A16/W3A16/W2A16 quantization.

Related Projects

SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models

AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration

GPTQ: Accurate Post-training Compression for Generative Pretrained Transformers

RPTQ: Reorder-Based Post-Training Quantization for Large Language Models

MLC LLM

AutoGPTQ

Citation

If you use our OmniQuant approach in your research, please cite our paper:

@article{OmniQuant,
  title={OmniQuant: Omnidirectionally Calibrated Quantization for Large Language Models},
  author={Shao, Wenqi and Chen, Mengzhao and Zhang, Zhaoyang and Xu, Peng and Zhao, Lirui and Li, Zhiqian and Zhang, Kaipeng and Gao, Peng and Qiao, Yu and Luo, Ping},
  journal={arXiv preprint arXiv:2308.13137},
  year={2023}
}

More Repositories

| # | Repository | Description | Language | Stars |
| --- | --- | --- | --- | --- |
| 1 | LLaMA-Adapter | [ICLR 2024] Fine-tuning LLaMA to follow Instructions within 1 Hour and 1.2M Parameters | Python | 5,526 |
| 2 | DragGAN | Unofficial Implementation of DragGAN - "Drag Your GAN: Interactive Point-based Manipulation on the Generative Image Manifold" (full-featured implementation with online demo and local deployment; code and models fully open-sourced; supports Windows, macOS, Linux) | Python | 4,967 |
| 3 | InternGPT | InternGPT (iGPT) is an open source demo platform where you can easily showcase your AI models. Now it supports DragGAN, ChatGPT, ImageBind, multimodal chat like GPT-4, SAM, interactive image editing, etc. Try it at igpt.opengvlab.com (online demo supporting DragGAN, ChatGPT, ImageBind, and SAM) | Python | 3,123 |
| 4 | Ask-Anything | [CVPR2024 Highlight][VideoChatGPT] ChatGPT with video understanding! And many more supported LMs such as miniGPT4, StableLM, and MOSS. | Python | 2,695 |
| 5 | InternImage | [CVPR 2023 Highlight] InternImage: Exploring Large-Scale Vision Foundation Models with Deformable Convolutions | Python | 2,315 |
| 6 | InternVideo | Video Foundation Models & Data for Multimodal Understanding | Python | 954 |
| 7 | InternVL | [CVPR 2024 Oral] InternVL Family: A Pioneering Open-Source Alternative to GPT-4V | Python | 936 |
| 8 | VisionLLM | VisionLLM: Large Language Model is also an Open-Ended Decoder for Vision-Centric Tasks | | 554 |
| 9 | VideoMamba | VideoMamba: State Space Model for Efficient Video Understanding | Python | 506 |
| 10 | GITM | Ghost in the Minecraft: Generally Capable Agents for Open-World Environments via Large Language Models with Text-based Knowledge and Memory | | 445 |
| 11 | VideoMAEv2 | [CVPR 2023] VideoMAE V2: Scaling Video Masked Autoencoders with Dual Masking | Python | 425 |
| 12 | all-seeing | [ICLR 2024] Official implementation of the paper "The All-Seeing Project: Towards Panoptic Visual Recognition and Understanding of the Open World" | Python | 388 |
| 13 | Multi-Modality-Arena | Chatbot Arena meets multi-modality! Multi-Modality Arena allows you to benchmark vision-language models side-by-side while providing images as inputs. Supports MiniGPT-4, LLaMA-Adapter V2, LLaVA, BLIP-2, and many more! | Python | 386 |
| 14 | CaFo | [CVPR 2023] Prompt, Generate, then Cache: Cascade of Foundation Models makes Strong Few-shot Learners | Python | 323 |
| 15 | PonderV2 | PonderV2: Pave the Way for 3D Foundation Model with A Universal Pre-training Paradigm | Python | 302 |
| 16 | DCNv4 | [CVPR 2024] Deformable Convolution v4 | Python | 269 |
| 17 | LAMM | [NeurIPS 2023 Datasets and Benchmarks Track] LAMM: Multi-Modal Large Language Models and Applications as AI Agents | Python | 267 |
| 18 | UniFormerV2 | [ICCV2023] UniFormerV2: Spatiotemporal Learning by Arming Image ViTs with Video UniFormer | Python | 260 |
| 19 | Instruct2Act | Instruct2Act: Mapping Multi-modality Instructions to Robotic Actions with Large Language Model | Python | 223 |
| 20 | unmasked_teacher | [ICCV2023 Oral] Unmasked Teacher: Towards Training-Efficient Video Foundation Models | Python | 220 |
| 21 | Vision-RWKV | Vision-RWKV: Efficient and Scalable Visual Perception with RWKV-Like Architectures | Python | 216 |
| 22 | HumanBench | Official implementation of HumanBench (CVPR2023) | Python | 207 |
| 23 | gv-benchmark | General Vision Benchmark, GV-B, a project from OpenGVLab | Python | 187 |
| 24 | InternVideo2 | | | 152 |
| 25 | ControlLLM | ControlLLM: Augment Language Models with Tools by Searching on Graphs | Python | 148 |
| 26 | UniHCP | Official PyTorch implementation of UniHCP | Python | 137 |
| 27 | efficient-video-recognition | | Python | 114 |
| 28 | SAM-Med2D | Official implementation of SAM-Med2D | Jupyter Notebook | 114 |
| 29 | ego4d-eccv2022-solutions | Champion solutions for the Ego4D Challenge of ECCV 2022 | Jupyter Notebook | 77 |
| 30 | Awesome-DragGAN | Awesome-DragGAN: A curated list of papers, tutorials, and repositories related to DragGAN | | 75 |
| 31 | DiffRate | [ICCV 23] An approach to enhance the efficiency of Vision Transformer (ViT) by concurrently employing token pruning and token merging techniques, while incorporating a differentiable compression rate. | Jupyter Notebook | 72 |
| 32 | STM-Evaluation | | Python | 69 |
| 33 | M3I-Pretraining | | | 69 |
| 34 | ChartAst | ChartAssistant is a chart-based vision-language model for universal chart comprehension and reasoning. | Python | 60 |
| 35 | DDPS | Official Implementation of "Denoising Diffusion Semantic Segmentation with Mask Prior Modeling" | Python | 53 |
| 36 | MUTR | [AAAI 2024] Referred by Multi-Modality: A Unified Temporal Transformer for Video Object Segmentation | Python | 52 |
| 37 | Awesome-LLM4Tool | A curated list of papers, repositories, tutorials, and anything related to large language models for tools | | 52 |
| 38 | LORIS | Long-Term Rhythmic Video Soundtracker, ICML2023 | Python | 47 |
| 39 | Siamese-Image-Modeling | [CVPR 2023] Implementation of Siamese Image Modeling for Self-Supervised Vision Representation Learning | Python | 32 |
| 40 | Multitask-Model-Selector | Implementation of "Foundation Model is Efficient Multimodal Multitask Model Selector" | Python | 27 |
| 41 | InternVL-MMDetSeg | Train InternViT-6B in MMSegmentation and MMDetection with DeepSpeed | Jupyter Notebook | 22 |
| 42 | Official-ConvMAE-Det | | Python | 13 |
| 43 | opengvlab.github.io | | | 12 |
| 44 | MovieMind | | | 9 |
| 45 | perception_test_iccv2023 | Champion solutions repository for the Perception Test challenges in the ICCV2023 workshop. | Python | 9 |
| 46 | EmbodiedGPT | | | 5 |
| 47 | DriveMLM | | | 3 |
| 48 | .github | | | 2 |