
Quanto

A pytorch quantization backend for optimum.

DISCLAIMER: This package is still beta. Expect breaking changes in API and serialization.

🤗 Quanto is a python quantization toolkit that provides several features that are either not supported or limited by the base pytorch quantization tools:

  • all features are available in eager mode (works with non-traceable models),
  • quantized models can be placed on any device (including CUDA and MPS),
  • automatically inserts quantization and dequantization stubs,
  • automatically inserts quantized functional operations,
  • automatically inserts quantized modules (see below the list of supported modules),
  • provides a seamless workflow from a float model to a dynamic and then a static quantized model,
  • serialization compatible with pytorch weight_only and 🤗 safetensors,
  • uses integer matrix multiplications (mm) on CUDA devices,
  • supports int2, int4, int8 and float8 weights,
  • supports int8 and float8 activations.
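
As a rough illustration of the eager-mode and device-placement points above, the sketch below quantizes a toy model and moves it to CUDA. It assumes the quantize and freeze helpers introduced in the workflow section later in this README are importable from the quanto package:

import torch
import quanto
from quanto import quantize, freeze

# Any regular, non-traceable pytorch model works, since everything runs in eager mode.
model = torch.nn.Sequential(
    torch.nn.Linear(64, 64),
    torch.nn.ReLU(),
    torch.nn.Linear(64, 8),
)

# Quantize weights and activations to int8, then freeze the integer weights.
quantize(model, weights=quanto.qint8, activations=quanto.qint8)
freeze(model)

# The quantized model can then be placed on any device, including CUDA and MPS.
if torch.cuda.is_available():
    model.to("cuda")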

Features yet to be implemented:

  • dynamic activations smoothing,
  • kernels for all mixed matrix multiplications on all devices,
  • compatibility with torch compiler (aka dynamo).

Quantized modules

Thanks to a seamless propagation mechanism through quantized tensors, only a few modules are actually required to act as insertion points for quantized tensors.

The following modules can be quantized:

  • Linear (QLinear). Weights are always quantized, and biases are not quantized. Inputs and outputs can be quantized.
  • Conv2d (QConv2D). Weights are always quantized, and biases are not quantized. Inputs and outputs can be quantized.
  • LayerNorm. Weights and biases are not quantized. Outputs can be quantized.
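
As an illustration, the replacement can be observed on a toy model. This is only a sketch using the quantize helper from the workflow section below; it simply inspects the class names of the replaced modules:

import torch
import quanto
from quanto import quantize

model = torch.nn.Sequential(torch.nn.Linear(16, 16), torch.nn.LayerNorm(16))
quantize(model, weights=quanto.qint8)

# After quantization, the Linear child is expected to be a QLinear module.
print(type(model[0]).__name__, type(model[1]).__name__)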

Limitations and design choices

Tensors

At the heart of quanto is a Tensor subclass that corresponds to:

  • the projection of a source Tensor into the optimal range for a given destination type,
  • the mapping of projected values to the destination type.

For floating-point destination types, the mapping is done by the native pytorch cast (i.e. Tensor.to()).

For integer destination types, the mapping is a simple rounding operation (i.e. torch.round()).

The goal of the projection is to increase the accuracy of the conversion by minimizing the number of:

  • saturated values (i.e. values mapped to the destination type min/max),
  • zeroed values (i.e. values below the smallest number representable by the destination type).

The projection is symmetric, i.e. it does not use a zero-point. This makes quantized Tensors compatible with many operations.
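
To make the projection concrete, here is a standalone sketch of a symmetric int8 projection written with plain torch operations; it is illustrative only and not quanto's actual implementation:

import torch

def symmetric_quantize_int8(x: torch.Tensor):
    # A single positive scale and no zero-point: the projection is symmetric around zero.
    scale = x.abs().max() / 127
    q = torch.clamp(torch.round(x / scale), -127, 127).to(torch.int8)
    return q, scale

x = torch.randn(4, 4)
q, scale = symmetric_quantize_int8(x)
x_hat = q.to(torch.float32) * scale  # dequantized approximation of x
print((x - x_hat).abs().max())       # worst-case quantization error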

One of the benefits of using a lower-bitwidth representation is that you can take advantage of accelerated operations for the destination type, which are typically faster than their higher-precision equivalents.

The current implementation however falls back to float32 for many operations because of a lack of dedicated kernels (only int8 matrix multiplication is available).

Note: integer operations cannot fall back to float16 because this format is very bad at representing integers and would likely lead to overflows in intermediate calculations.
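
The limitation is easy to verify: float16 has only an 11-bit significand and a maximum value of 65504, so exact integers are quickly lost and intermediate products overflow:

import torch

print(torch.tensor(2049, dtype=torch.float16))       # prints 2048., 2049 is not representable
print(torch.tensor(300, dtype=torch.float16) * 300)  # prints inf, 90000 exceeds the float16 maximum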

Quanto does not support the conversion of a Tensor using mixed destination types.

Modules

Quanto provides a generic mechanism to replace torch modules with quanto modules that are able to process quanto tensors.

Quanto modules dynamically convert their weights until a model is frozen, which slows down inference a bit but is required if the model needs to be tuned.

Biases are not converted because, to preserve the accuracy of a typical addmm operation, they would have to be converted with a scale equal to the product of the input and weight scales. This leads to a ridiculously small scale, and conversely requires a very high bitwidth to avoid clipping. Typically, with int8 inputs and weights, biases would need to be quantized with at least 12 bits, i.e. in int16. Since most biases are today float16, this is simply not worth it.
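
The back-of-the-envelope calculation below illustrates the point; the scales are made up for the example and are not taken from any real model:

# Plausible per-tensor scales for int8 inputs and weights (illustrative values only).
input_scale = 0.02     # activations roughly in [-2.5, 2.5]
weight_scale = 0.004   # weights roughly in [-0.5, 0.5]

# In an integer addmm, the bias shares the accumulator scale:
bias_scale = input_scale * weight_scale   # 8e-05

# A typical bias value of 0.1 would then map to:
print(round(0.1 / bias_scale))            # 1250, far outside the int8 range [-128, 127]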

Activations are dynamically quantized using static scales (defaulting to the range [-1, 1]). The model needs to be calibrated to evaluate the best activation scales (using a momentum-based moving average).
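
Conceptually, calibration keeps a momentum-based moving average of the ranges observed on representative samples. The sketch below illustrates that idea and is not quanto's internal code:

import torch

def update_scale(scale: float, x: torch.Tensor, momentum: float = 0.9) -> float:
    # Blend the previous scale with the absolute range observed on the current batch.
    return momentum * scale + (1.0 - momentum) * x.abs().max().item()

scale = 1.0  # matches the default [-1, 1] activation range
for _ in range(10):
    batch = 3.0 * torch.randn(32, 64)  # stand-in for representative samples
    scale = update_scale(scale, batch)
print(scale)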

Performance

In a nutshell:

  • accuracy: models quantized with int8/float8 weights and float8 activations are very close to the 16-bit models,
  • latency: all models are currently at least 2x slower than the 16-bit models due to the lack of optimized kernels,
  • device memory: usage is approximately divided by the ratio of float bitwidth to integer bitwidth.
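
For instance, for a model with 7B parameters (weights only, ignoring scales and non-weight buffers):

params = 7_000_000_000
print(params * 16 / 8 / 1e9)   # ~14 GB of weights in 16-bit
print(params * 8 / 8 / 1e9)    # ~7 GB in int8 (divided by 16/8 = 2)
print(params * 4 / 8 / 1e9)    # ~3.5 GB in int4 (divided by 16/4 = 4)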

The figure below is just an example. Please refer to the bench folder for detailed results per model use-case.

[Figure: NousResearch/Llama-2-7b-hf WikiText perplexity]

Installation

Quanto is available as a pip package.

pip install quanto

Quantization workflow

Quanto does not make a clear distinction between dynamic and static quantization: models are always dynamically quantized, but their weights can later be "frozen" to integer values.

A typical quantization workflow would consist of the following steps:

1. Quantize

The first step converts a standard float model into a dynamically quantized model.

quantize(model, weights=quanto.qint8, activations=quanto.qint8)

At this stage, only the inference of the model is modified to dynamically quantize the weights.

2. Calibrate (optional if activations are not quantized)

Quanto supports a calibration mode that records the activation ranges while representative samples are passed through the quantized model.

with calibration(momentum=0.9):
    model(samples)

This automatically activates the quantization of the activations in the quantized modules.

3. Tune, aka Quantization-Aware-Training (optional)

If the performance of the model degrades too much, one can tune it for a few epochs to recover the float model performance.

import torch

model.train()
for batch_idx, (data, target) in enumerate(train_loader):
    data, target = data.to(device), target.to(device)
    optimizer.zero_grad()
    # Dequantize the quantized output before evaluating the loss.
    output = model(data).dequantize()
    loss = torch.nn.functional.nll_loss(output, target)
    loss.backward()
    optimizer.step()

4. Freeze integer weights

When freezing a model, its float weights are replaced by quantized integer weights.

freeze(model)

Please refer to the examples for instantiations of that workflow.
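
For convenience, the steps can be combined as in the sketch below. It assumes that quantize, calibration, freeze and qint8 are all importable from the quanto package, as the snippets above suggest, and uses a toy model and random samples as stand-ins for a real setup:

import torch
import quanto
from quanto import quantize, calibration, freeze

# Toy stand-ins for a real model and calibration data.
model = torch.nn.Sequential(torch.nn.Linear(16, 16), torch.nn.ReLU(), torch.nn.Linear(16, 4))
samples = [torch.randn(8, 16) for _ in range(10)]

# 1. Quantize: inference now dynamically quantizes the weights.
quantize(model, weights=quanto.qint8, activations=quanto.qint8)

# 2. Calibrate activation scales on representative samples.
with calibration(momentum=0.9):
    for batch in samples:
        model(batch)

# 3. (Optional) tune for a few epochs as in the training loop above.

# 4. Freeze: float weights are replaced by integer weights.
freeze(model)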

Per-axis versus per-tensor

Activations are always quantized per-tensor because most linear algebra operations in a model graph are not compatible with per-axis inputs: you simply cannot add numbers that are not expressed with the same scale (you cannot add apples and oranges).

Weights involved in matrix multiplications are, on the contrary, always quantized along their first axis, because all output features are evaluated independently from one another.
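
A short sketch of what per-first-axis weight scales look like (illustrative, using plain torch operations):

import torch

weight = torch.randn(8, 16)                             # (out_features, in_features)
scales = weight.abs().amax(dim=1, keepdim=True) / 127   # one scale per output feature
q = torch.clamp(torch.round(weight / scales), -127, 127).to(torch.int8)
print(scales.shape)                                     # torch.Size([8, 1])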

In any case, the outputs of a quantized matrix multiplication will always be dequantized, even if activations are quantized, because:

  • the resulting integer values are expressed with a much higher bitwidth (typically int32) than the activation bitwidth (typically int8),
  • they might be combined with a float bias.

Quantizing activations per-tensor to int8 can lead to serious quantization errors if the corresponding tensors contain large outlier values. Typically, this will lead to quantized tensors with most values set to zero (except the outliers).
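
The effect is easy to reproduce with the symmetric projection sketched earlier: a single outlier inflates the per-tensor scale so much that ordinary values collapse to zero:

import torch

x = 0.1 * torch.randn(1000)     # well-behaved activations
x[0] = 100.0                    # a single large outlier
scale = x.abs().max() / 127     # the per-tensor scale is dominated by the outlier
q = torch.clamp(torch.round(x / scale), -127, 127)
print((q == 0).float().mean())  # close to 1.0: almost everything quantizes to zero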

A possible workaround is to 'smooth' the activations statically, as illustrated by SmoothQuant. You can find a script to smooth some model architectures under external/smoothquant.

A better option is to represent activations using float8.
