# Optimum Intel
Intel Neural Compressor is an open-source library enabling the usage of the most popular compression techniques such as quantization, pruning and knowledge distillation. It supports automatic accuracy-driven tuning strategies so that users can easily generate quantized models. Users can apply static, dynamic and quantization-aware training approaches while specifying an expected accuracy criterion. It also supports different weight pruning techniques, enabling the creation of pruned models given a predefined sparsity target.
OpenVINO is an open-source toolkit that enables high-performance inference for Intel CPUs, GPUs and special DL inference accelerators (see the full list of supported devices). It is supplied with a set of tools to optimize your models with compression techniques such as quantization, pruning and knowledge distillation. Optimum Intel provides a simple interface to optimize your Transformers and Diffusers models, convert them to the OpenVINO Intermediate Representation (IR) format and run inference using OpenVINO Runtime.
## Installation
To install the latest release of Optimum Intel with the corresponding required dependencies, you can use `pip` as follows:
| Accelerator | Installation |
|:---|:---|
| Intel Neural Compressor | `python -m pip install "optimum[neural-compressor]"` |
| OpenVINO | `python -m pip install "optimum[openvino,nncf]"` |
We recommend creating a virtual environment and upgrading pip with `python -m pip install --upgrade pip`.
Optimum Intel is a fast-moving project, and you may want to install from source with the following command:
```bash
python -m pip install git+https://github.com/huggingface/optimum-intel.git
```
or to install from source including dependencies:
python -m pip install "optimum-intel[extras]"@git+https://github.com/huggingface/optimum-intel.git
where extras
can be one or more of neural-compressor
, openvino
, nncf
.
## Quick tour
### Neural Compressor
Dynamic quantization can be used through the Optimum command-line interface:
```bash
optimum-cli inc quantize --model distilbert-base-cased-distilled-squad --output ./quantized_distilbert
```
Note that quantization is currently only supported for CPUs (only CPU backends are available), so we will not be utilizing GPUs / CUDA in this example.
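The same dynamic quantization can also be applied programmatically with the `INCQuantizer` class together with Intel Neural Compressor's `PostTrainingQuantConfig`. Below is a minimal sketch; the `quantized_model` output directory is just an example name:

```python
from transformers import AutoModelForQuestionAnswering
from neural_compressor.config import PostTrainingQuantConfig
from optimum.intel import INCQuantizer

model_name = "distilbert-base-cased-distilled-squad"
model = AutoModelForQuestionAnswering.from_pretrained(model_name)

# Configure dynamic quantization (no calibration data is needed)
quantization_config = PostTrainingQuantConfig(approach="dynamic")
quantizer = INCQuantizer.from_pretrained(model)
# Apply quantization and save the resulting model
quantizer.quantize(quantization_config=quantization_config, save_directory="quantized_model")
```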
To load a quantized model hosted locally or on the Hugging Face hub, you can do as follows:
```python
from optimum.intel import INCModelForSequenceClassification

# Load the PyTorch model hosted on the hub
loaded_model_from_hub = INCModelForSequenceClassification.from_pretrained(
    "Intel/distilbert-base-uncased-finetuned-sst-2-english-int8-dynamic"
)
```
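Since INC models behave like their `transformers` counterparts, you can then run inference as usual, for instance through a pipeline. Here is a quick sketch, assuming the checkpoint's tokenizer loads as usual:

```python
from transformers import AutoTokenizer, pipeline

tokenizer = AutoTokenizer.from_pretrained("Intel/distilbert-base-uncased-finetuned-sst-2-english-int8-dynamic")
pipe = pipeline("text-classification", model=loaded_model_from_hub, tokenizer=tokenizer)
print(pipe("I really liked this movie."))
```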
You can load many more quantized models hosted on the hub under the Intel organization here.
For more details on the supported compression techniques, please refer to the documentation.
### OpenVINO
Below are examples of how to use OpenVINO and its NNCF framework to accelerate inference.
#### Inference:
To load a model and run inference with OpenVINO Runtime, you can just replace your `AutoModelForXxx` class with the corresponding `OVModelForXxx` class. If you want to load a PyTorch checkpoint, set `export=True` to convert your model to the OpenVINO IR.
```diff
- from transformers import AutoModelForSequenceClassification
+ from optimum.intel import OVModelForSequenceClassification
  from transformers import AutoTokenizer, pipeline

  model_id = "distilbert-base-uncased-finetuned-sst-2-english"
- model = AutoModelForSequenceClassification.from_pretrained(model_id)
+ model = OVModelForSequenceClassification.from_pretrained(model_id, export=True)
  tokenizer = AutoTokenizer.from_pretrained(model_id)
  cls_pipe = pipeline("text-classification", model=model, tokenizer=tokenizer)
  text = "He's a dreadful magician."
  outputs = cls_pipe(text)
```
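The `export=True` conversion happens each time `from_pretrained` is called on a PyTorch checkpoint. To avoid re-exporting, you can save the resulting OpenVINO model and reload it directly. A minimal sketch follows; the `ov_model` directory name is just an example:

```python
# Save the converted OpenVINO model and its tokenizer for later reuse
model.save_pretrained("ov_model")
tokenizer.save_pretrained("ov_model")

# Reload the OpenVINO IR directly, no export step required
model = OVModelForSequenceClassification.from_pretrained("ov_model")
```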
#### Post-training static quantization:
Post-training static quantization introduces an additional calibration step where data is fed through the network in order to compute the activation quantization parameters. Here is an example of how to apply static quantization on a fine-tuned DistilBERT.
```python
from functools import partial
from optimum.intel import OVQuantizer, OVModelForSequenceClassification
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "distilbert-base-uncased-finetuned-sst-2-english"
model = AutoModelForSequenceClassification.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

def preprocess_fn(examples, tokenizer):
    return tokenizer(
        examples["sentence"], padding=True, truncation=True, max_length=128
    )

quantizer = OVQuantizer.from_pretrained(model)
calibration_dataset = quantizer.get_calibration_dataset(
    "glue",
    dataset_config_name="sst2",
    preprocess_function=partial(preprocess_fn, tokenizer=tokenizer),
    num_samples=100,
    dataset_split="train",
    preprocess_batch=True,
)
# The directory where the quantized model will be saved
save_dir = "nncf_results"
# Apply static quantization and save the resulting model in the OpenVINO IR format
quantizer.quantize(calibration_dataset=calibration_dataset, save_directory=save_dir)
# Load the quantized model
optimized_model = OVModelForSequenceClassification.from_pretrained(save_dir)
```
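The quantized model can then be used for inference exactly like the non-quantized one, for example through a `transformers` pipeline. A short sketch, reusing the tokenizer loaded above:

```python
from transformers import pipeline

cls_pipe = pipeline("text-classification", model=optimized_model, tokenizer=tokenizer)
print(cls_pipe("He's a dreadful magician."))
```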
#### Quantization-aware training:
Quantization-aware training (QAT) simulates the effects of quantization during training in order to alleviate its impact on the model's accuracy. Here is an example of how to fine-tune a DistilBERT model on the sst-2 task while applying quantization-aware training (QAT).
```diff
  import evaluate
  import numpy as np
  from datasets import load_dataset
  from transformers import AutoModelForSequenceClassification, AutoTokenizer, TrainingArguments, default_data_collator
- from transformers import Trainer
+ from optimum.intel import OVConfig, OVModelForSequenceClassification, OVTrainer

  model_id = "distilbert-base-uncased-finetuned-sst-2-english"
  model = AutoModelForSequenceClassification.from_pretrained(model_id)
  tokenizer = AutoTokenizer.from_pretrained(model_id)
  dataset = load_dataset("glue", "sst2")
  dataset = dataset.map(
      lambda examples: tokenizer(examples["sentence"], padding=True, truncation=True, max_length=128), batched=True
  )
  metric = evaluate.load("glue", "sst2")
  compute_metrics = lambda p: metric.compute(
      predictions=np.argmax(p.predictions, axis=1), references=p.label_ids
  )

  # The directory where the quantized model will be saved
  save_dir = "nncf_results"

  # Load the default quantization configuration detailing the quantization we wish to apply
+ ov_config = OVConfig()

- trainer = Trainer(
+ trainer = OVTrainer(
      model=model,
      args=TrainingArguments(save_dir, num_train_epochs=1.0, do_train=True, do_eval=True),
      train_dataset=dataset["train"].select(range(300)),
      eval_dataset=dataset["validation"],
      compute_metrics=compute_metrics,
      tokenizer=tokenizer,
      data_collator=default_data_collator,
+     ov_config=ov_config,
+     task="text-classification",
  )

  # Train the model while applying quantization
  train_result = trainer.train()
  metrics = trainer.evaluate()
  # Save the resulting model and its corresponding configuration in the given directory
  trainer.save_model()

+ # Load the resulting quantized model
+ optimized_model = OVModelForSequenceClassification.from_pretrained(save_dir)
```
You can find more examples in the documentation.
## Running the examples
Check out the `examples` directory to see how Optimum Intel can be used to optimize models and accelerate inference.
Do not forget to install requirements for every example:
```bash
cd <example-folder>
pip install -r requirements.txt
```