
Quanto

DISCLAIMER: This package is still in beta. Expect breaking changes in API and serialization.

🤗 Quanto is a python quantization toolkit that provides several features that are either not supported or limited by the base pytorch quantization tools:

  • all features are available in eager mode (works with non-traceable models),
  • quantized models can be placed on any device (including CUDA and MPS),
  • automatically inserts quantization and dequantization stubs,
  • automatically inserts quantized functional operations,
  • automatically inserts quantized modules (see below the list of supported modules),
  • provides a seamless workflow from a float model, to a dynamically quantized model, to a statically quantized model,
  • serialization compatible with pytorch weights_only and 🤗 safetensors,
  • uses integer matrix multiplications (mm) on CUDA devices,
  • supports int2, int4, int8 and float8 weights,
  • supports int8 and float8 activations.

Features yet to be implemented:

  • dynamic activations smoothing,
  • kernels for all mixed matrix multiplications on all devices,
  • compatibility with torch compiler (aka dynamo).

Quantized modules

Thanks to a seamless propagation mechanism through quantized tensors, only a few modules, acting as quantized tensor insertion points, are actually required.

The following modules can be quantized:

  • Linear (QLinear). Weights are always quantized, and biases are not quantized. Inputs and outputs can be quantized.
  • Conv2d (QConv2D). Weights are always quantized, and biases are not quantized. Inputs and outputs can be quantized.
  • LayerNorm. Weights and biases are not quantized. Outputs can be quantized.
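
A minimal sketch of the replacement mechanism at work (the model and the printed class names are illustrative; exact names depend on the quanto version):

import torch
import quanto
from quanto import quantize

model = torch.nn.Sequential(torch.nn.Linear(64, 64), torch.nn.LayerNorm(64))
quantize(model, weights=quanto.qint8)

# Insertion-point modules are swapped for their quanto equivalents,
# e.g. Linear -> QLinear.
for name, module in model.named_modules():
    print(name, type(module).__name__)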

Limitations and design choices

Tensors

At the heart of quanto is a Tensor subclass that corresponds to:

  • the projection of a source Tensor into the optimal range for a given destination type,
  • the mapping of projected values to the destination type.

For floating-point destination types, the mapping is done by the native pytorch cast (i.e. Tensor.to()).

For integer destination types, the mapping is a simple rounding operation (i.e. torch.round()).

The goal of the projection is to increase the accuracy of the conversion by minimizing the number of:

  • saturated values (i.e. mapped to the destination type min/max),
  • zeroed values (because they are below the smallest number that can be represented by the destination type).

The projection is symmetric, i.e. it does not use a zero-point. This makes quantized Tensors compatible with many operations.
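
For intuition, here is a minimal sketch of a symmetric int8 projection and mapping (illustrative only, not quanto's internal code):

import torch

def quantize_symmetric_int8(t: torch.Tensor):
    # Projection: pick the scale that maps the largest magnitude onto the
    # int8 range [-127, 127], so that no value saturates.
    scale = t.abs().max() / 127
    # Mapping: round the projected values to the destination integer type.
    q = torch.round(t / scale).to(torch.int8)
    return q, scale

q, scale = quantize_symmetric_int8(torch.randn(4))
dequantized = q.to(torch.float32) * scale  # no zero-point involved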

One of the benefits of using a lower-bitwidth representation is that you can take advantage of accelerated operations for the destination type, which are typically faster than their higher-precision equivalents.

The current implementation, however, falls back to float32 for many operations due to the lack of dedicated kernels (only int8 matrix multiplication is available).

Note: integer operations cannot be performed in float16 as a fallback because this format is very bad at representing integers and will likely lead to overflows in intermediate calculations.
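
A quick demonstration of why float16 is unsuitable for integer arithmetic:

import torch

# float16 only represents integers exactly up to 2048, and overflows to
# inf above 65504, so integer intermediate results are quickly corrupted:
a = torch.tensor(300, dtype=torch.float16)
print(a * a)                                    # inf: 90000 > 65504
print(torch.tensor(2049, dtype=torch.float16))  # rounds to 2048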

Quanto does not support the conversion of a Tensor using mixed destination types.

Modules

Quanto provides a generic mechanism to replace torch modules with quanto modules that are able to process quanto tensors.

Quanto modules dynamically convert their weights until a model is frozen, which slows down inference a bit but is required if the model needs to be tuned.

Biases are not converted because, to preserve the accuracy of a typical addmm operation, they would need to be converted with a scale equal to the product of the input and weight scales. This leads to a ridiculously small scale, and conversely requires a very high bitwidth to avoid clipping: typically, with int8 inputs and weights, biases would need to be quantized with at least 12 bits, i.e. in int16. Since most biases are today float16, quantizing them is simply not worth it.
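
The scale argument in numbers, as a back-of-the-envelope sketch (the scale values are illustrative assumptions):

# The bias scale must equal input_scale * weight_scale for the addmm
# accumulation to be consistent.
input_scale = 0.02     # a plausible int8 activation scale (assumption)
weight_scale = 0.004   # a plausible int8 weight scale (assumption)
bias_scale = input_scale * weight_scale   # 8e-05
# A bias value around 1.0 would quantize to round(1.0 / 8e-05) = 12500
# integer steps, far beyond the ±2047 range of 12 signed bits, hence
# the need for int16 or wider.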

Activations are dynamically quantized using static scales (defaulting to the range [-1, 1]). The model needs to be calibrated to find the best activation scales (using a momentum-based moving average).
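
The momentum update is a standard exponential moving average; a sketch of the assumed form (not quanto's exact internals):

def update_scale(running_scale, observed_scale, momentum=0.9):
    # Each calibration batch nudges the running scale toward the scale
    # observed on that batch.
    return momentum * running_scale + (1.0 - momentum) * observed_scale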

Performance

In a nutshell:

  • accuracy: models quantized with int8/float8 weights and float8 activations are very close to the 16-bit models,
  • latency: all models are at least 2x slower than the 16-bit models due to the lack of optimized kernels (for now),
  • device memory: approximately divided by the ratio of float bits to integer bits (see the sketch below).
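
Applying that memory rule of thumb to a 7B-parameter model (the parameter count is illustrative):

params = 7e9
fp16_gb = params * 16 / 8 / 1e9   # 14.0 GB
int8_gb = params * 8 / 8 / 1e9    #  7.0 GB -> 16 / 8 = 2x smaller
int4_gb = params * 4 / 8 / 1e9    #  3.5 GB -> 16 / 4 = 4x smaller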

The figure below is just one example. Please refer to the bench folder for detailed results per model use-case.

NousResearch/Llama-2-7b-hf

[Figure: NousResearch/Llama-2-7b-hf WikiText perplexity]

Installation

Quanto is available as a pip package.

pip install quanto

Quantization workflow

Quanto does not make a clear distinction between dynamic and static quantization: models are always dynamically quantized, but their weights can later be "frozen" to integer values.

A typical quantization workflow would consist of the following steps:

1. Quantize

The first step converts a standard float model into a dynamically quantized model.

import quanto
from quanto import quantize

quantize(model, weights=quanto.qint8, activations=quanto.qint8)

At this stage, only the inference of the model is modified to dynamically quantize the weights.

2. Calibrate (optional if activations are not quantized)

Quanto supports a calibration mode that allows recording the activation ranges while passing representative samples through the quantized model.

from quanto import calibration

with calibration(momentum=0.9):
    model(samples)

This automatically activates the quantization of the activations in the quantized modules.

3. Tune, aka Quantization-Aware-Training (optional)

If the performance of the model degrades too much, one can tune it for a few epochs to recover the float model performance.

import torch

model.train()
for batch_idx, (data, target) in enumerate(train_loader):
    data, target = data.to(device), target.to(device)
    optimizer.zero_grad()
    # Model outputs are quantized tensors: dequantize before the loss.
    output = model(data).dequantize()
    loss = torch.nn.functional.nll_loss(output, target)
    loss.backward()
    optimizer.step()

4. Freeze integer weights

When freezing a model, its float weights are replaced by quantized integer weights.

from quanto import freeze

freeze(model)

Please refer to the examples for instantiations of that workflow.
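
Putting the four steps together, a condensed sketch of the full workflow (model, samples, and the training loop are assumed to be defined as above):

import quanto
from quanto import quantize, calibration, freeze

# 1. Quantize: convert the float model to a dynamically quantized one.
quantize(model, weights=quanto.qint8, activations=quanto.qint8)

# 2. Calibrate: record activation ranges on representative samples.
with calibration(momentum=0.9):
    model(samples)

# 3. Tune (optional): a few epochs of quantization-aware training,
#    as in the loop shown in step 3.

# 4. Freeze: replace the float weights by quantized integer weights.
freeze(model)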

Per-axis versus per-tensor

Activations are always quantized per-tensor because most linear algebra operations in a model graph are not compatible with per-axis inputs: you simply cannot add numbers that are not expressed in the same base (you cannot add apples and oranges).

Weights involved in matrix multiplications are, on the contrary, always quantized along their first axis, because all output features are evaluated independently from one another.
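
In other words, per-axis quantization along the first axis amounts to one scale per output feature. A minimal sketch (illustrative, not quanto's internal code):

import torch

w = torch.randn(8, 16)                            # (out_features, in_features)
# One scale per output row, since each row is computed independently.
scales = w.abs().amax(dim=1, keepdim=True) / 127  # shape (8, 1)
w_q = torch.round(w / scales).to(torch.int8)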

The outputs of a quantized matrix multiplication will anyway always be dequantized (as sketched after the list below), even if activations are quantized, because:

  • the resulting integer values are expressed with a much higher bitwidth (typically int32) than the activation bitwidth (typically int8),
  • they might be combined with a float bias.
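
A sketch of why dequantization happens at that point (per-tensor scales are used here for brevity; the values are assumptions):

import torch

a_q = torch.randint(-128, 128, (2, 16), dtype=torch.int8)  # int8 activations
w_q = torch.randint(-128, 128, (8, 16), dtype=torch.int8)  # int8 weights
a_scale, w_scale = 0.02, 0.004                             # assumed scales

# Accumulate in int32: int8 accumulation would overflow immediately.
acc = a_q.to(torch.int32) @ w_q.to(torch.int32).t()
# Rescale to float so that a float bias can then be added.
y = acc.to(torch.float32) * (a_scale * w_scale)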

Quantizing activations per-tensor to int8 can lead to serious quantization errors if the corresponding tensors contain large outlier values. Typically, this will lead to quantized tensors with most values set to zero (except the outliers).
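
A small demonstration of the outlier effect (values chosen for illustration):

import torch

t = torch.full((1000,), 0.01)
t[0] = 100.0                    # a single large outlier
scale = t.abs().max() / 127     # ~0.787, dominated by the outlier
q = torch.round(t / scale)      # 0.01 / 0.787 ~ 0.013 -> rounds to 0
print((q == 0).float().mean())  # ~99.9% of the values collapse to zero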

A possible solution to work around that issue is to 'smooth' the activations statically as illustrated by SmoothQuant. You can find a script to smooth some model architectures under external/smoothquant.

A better option is to represent activations using float8.
