
Quantizing a model during fine-tuning with Intel Neural Compressor (INC) for text classification tasks

This notebook shows how to apply quantization aware training, using the Intel Neural Compressor (INC) library, to any task of the GLUE benchmark. This is made possible thanks to 🤗 Optimum Intel, an extension of 🤗 Transformers that provides a set of performance optimization tools to accelerate end-to-end pipelines on a variety of Intel processors with maximum efficiency.

If you're opening this notebook on Colab, you will probably need to install 🤗 Transformers, 🤗 Datasets and 🤗 Optimum. Uncomment the following cell and run it.

#! pip install datasets transformers optimum[neural-compressor]

Make sure your version of 🤗 Optimum is at least 1.6.0 since the functionality was introduced in that version:

from optimum.intel.version import __version__

print(__version__)
1.7.0.dev0

The GLUE Benchmark is a group of nine classification tasks on sentences or pairs of sentences which are:

  • CoLA (Corpus of Linguistic Acceptability) Determine if a sentence is grammatically correct or not.

  • MNLI (Multi-Genre Natural Language Inference) Determine if a sentence entails, contradicts or is unrelated to a given hypothesis. This dataset has two versions, one with the validation and test set coming from the same distribution, another called mismatched where the validation and test use out-of-domain data.

  • MRPC (Microsoft Research Paraphrase Corpus) Determine if two sentences are paraphrases from one another or not.

  • QNLI (Question-answering Natural Language Inference) Determine if the answer to a question is in the second sentence or not. This dataset is built from the SQuAD dataset.

  • QQP (Quora Question Pairs) Determine if two questions are semantically equivalent or not.

  • RTE (Recognizing Textual Entailment) Determine if a sentence entails a given hypothesis or not.

  • SST-2 (Stanford Sentiment Treebank) Determine if the sentence has a positive or negative sentiment.

  • STS-B (Semantic Textual Similarity Benchmark) Determine the similarity of two sentences with a score from 1 to 5.

  • WNLI (Winograd Natural Language Inference) Determine if a sentence with an anonymous pronoun and a sentence with this pronoun replaced are entailed or not. This dataset is built from the Winograd Schema Challenge dataset.

We will see how to apply quantization aware training to a DistilBERT model fine-tuned on the SST-2 task:

GLUE_TASKS = ["cola", "mnli", "mnli-mm", "mrpc", "qnli", "qqp", "rte", "sst2", "stsb", "wnli"]

task = "sst2"
model_checkpoint = "distilbert-base-uncased-finetuned-sst-2-english"
batch_size = 16
max_train_samples = 200
max_eval_samples = 200

Loading the dataset

We will use the 🤗 Datasets and 🤗 Evaluate libraries to download the data and get the metric we need to use for evaluation. This can be easily done with the functions load_dataset and load.

Apart from mnli-mm being a special code, we can directly pass our task name to those functions. load_dataset will cache the dataset to avoid downloading it again the next time you run this cell.

import evaluate
from datasets import load_dataset

actual_task = "mnli" if task == "mnli-mm" else task
dataset = load_dataset("glue", actual_task)
metric = evaluate.load("glue", actual_task)
Found cached dataset glue (/home/ella/.cache/huggingface/datasets/glue/sst2/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad)

Note that load has loaded the proper metric associated with your task (accuracy, in the case of SST-2), so the metric object only computes the one(s) needed for your task.
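To get a feel for how the metric object behaves, we can try it on random predictions and labels (a quick sketch, not part of the original cells; for SST-2 it simply returns accuracy):

import numpy as np

# Illustrative only: compute the task metric on random binary predictions and labels
fake_preds = np.random.randint(0, 2, size=(64,))
fake_labels = np.random.randint(0, 2, size=(64,))
print(metric.compute(predictions=fake_preds, references=fake_labels))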

We also quickly upload some telemetry - this tells us which examples and software versions are getting used so we know where to prioritize our maintenance efforts. We don't collect (or care about) any personally identifiable information, but if you'd prefer not to be counted, feel free to skip this step or delete this cell entirely.

from transformers.utils import send_example_telemetry

send_example_telemetry("text_classification_quantization_inc_notebook", framework="none")

Preprocessing the data

Before we can feed those texts to our model, we need to preprocess them. This is done by a 🤗 Transformers Tokenizer, which will (as the name indicates) tokenize the inputs (including converting the tokens to their corresponding IDs in the pretrained vocabulary), put them in a format the model expects, and generate the other inputs that the model requires.

To do all of this, we instantiate our tokenizer with the AutoTokenizer.from_pretrained method, which will ensure that:

  • we get a tokenizer that corresponds to the model architecture we want to use

  • we download the vocabulary used when pretraining this specific checkpoint

That vocabulary will be cached, so it's not downloaded again the next time we run the cell.

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)

To preprocess our dataset, we will thus need the names of the columns containing the sentence(s). The following dictionary keeps track of the correspondence between task and column names:

task_to_keys = {
    "cola": ("sentence", None),
    "mnli": ("premise", "hypothesis"),
    "mnli-mm": ("premise", "hypothesis"),
    "mrpc": ("sentence1", "sentence2"),
    "qnli": ("question", "sentence"),
    "qqp": ("question1", "question2"),
    "rte": ("sentence1", "sentence2"),
    "sst2": ("sentence", None),
    "stsb": ("sentence1", "sentence2"),
    "wnli": ("sentence1", "sentence2"),
}

We can double check it does work on our current dataset:

sentence1_key, sentence2_key = task_to_keys[task]
if sentence2_key is None:
    print(f"Sentence: {dataset['train'][0][sentence1_key]}")
else:
    print(f"Sentence 1: {dataset['train'][0][sentence1_key]}")
    print(f"Sentence 2: {dataset['train'][0][sentence2_key]}")
Sentence: hide new secretions from the parental units

We can then write the function that will preprocess our samples. We just feed them to the tokenizer with the argument truncation=True. This will ensure that any input longer than what the selected model can handle is truncated to the maximum length accepted by the model.

max_seq_length = min(128, tokenizer.model_max_length)
padding = "max_length"

def preprocess_function(examples):
    args = (
        (examples[sentence1_key],) if sentence2_key is None else (examples[sentence1_key], examples[sentence2_key])
    )
    return tokenizer(*args, padding=padding, max_length=max_seq_length, truncation=True)

To apply this function to all the sentences (or pairs of sentences) in our dataset, we just use the map method of the dataset object we created earlier. This will apply the function to all the elements of all the splits in dataset, so our training, validation and testing data will be preprocessed in a single command.

encoded_dataset = dataset.map(preprocess_function, batched=True)
Loading cached processed dataset at /home/ella/.cache/huggingface/datasets/glue/sst2/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad/cache-2f40245b2230ff65.arrow
Loading cached processed dataset at /home/ella/.cache/huggingface/datasets/glue/sst2/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad/cache-df1324c32baf5c62.arrow
Loading cached processed dataset at /home/ella/.cache/huggingface/datasets/glue/sst2/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad/cache-83b8433647ed2f99.arrow

Even better, the results are automatically cached by the 🤗 Datasets library to avoid spending time on this step the next time you run your notebook. The 🤗 Datasets library is normally smart enough to detect when the function you pass to map has changed (and thus that the cached data should not be used). For instance, it will properly detect if you change the task in the first cell and rerun the notebook. 🤗 Datasets warns you when it uses cached files; you can pass load_from_cache_file=False in the call to map to ignore the cached files and force the preprocessing to be applied again.
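For instance, to bypass the cache and force the preprocessing to run again, you could call map like this (shown only as an illustration):

encoded_dataset = dataset.map(preprocess_function, batched=True, load_from_cache_file=False)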

Note that we passed batched=True to encode the texts by batches together. This is to leverage the full benefit of the fast tokenizer we loaded earlier, which will use multi-threading to treat the texts in a batch concurrently.
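As a quick check (an illustrative addition), you can confirm that the tokenizer we loaded is indeed one of the fast, Rust-backed tokenizers:

# True for fast tokenizers, which support multi-threaded batch encoding
print(tokenizer.is_fast)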

Applying quantization on the model

Quantization aware training simulates the effects of quantization during training in order to alleviate its effects on the model's performance.
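To make the idea concrete, here is a minimal sketch (purely illustrative, not how INC is implemented) of the "fake quantization" that quantization aware training inserts into the forward pass: values are rounded to an int8 grid and mapped back to float, so the model is trained under the same quantization error it will see at inference time.

import torch

def fake_quantize(x, scale):
    # Simulate int8 quantization: round to the int8 grid, then dequantize back to float
    q = torch.clamp(torch.round(x / scale), -128, 127)
    return q * scale

x = torch.randn(4)
print(x)
print(fake_quantize(x, scale=0.1))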

Now that our data is ready, we can download the pretrained model and fine-tune it. Since all our tasks are about sentence classification, we use the AutoModelForSequenceClassification class. Like with the tokenizer, the from_pretrained method will download and cache the model for us. The only thing we would normally have to specify is the number of labels for our problem (which is always 2, except for STS-B, which is a regression problem, and MNLI, where we have 3 labels); since our checkpoint was already fine-tuned on SST-2, this is not needed here (an illustrative example follows the next cell):

from transformers import AutoModelForSequenceClassification, TrainingArguments, default_data_collator

model = AutoModelForSequenceClassification.from_pretrained(model_checkpoint)
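For a checkpoint that has not yet been fine-tuned on the task, you would pass the number of labels explicitly; a hypothetical example for MNLI:

# Illustrative only: MNLI has 3 labels (entailment, neutral, contradiction)
model_mnli = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased", num_labels=3)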

The INCTrainer class provides an API to train your model while combining different compression techniques such as knowledge distillation, pruning and quantization. The INCTrainer is very similar to the 🤗 Transformers Trainer, which it can replace with minimal changes to your code.

To instantiate an INCTrainer, we will need to define three more things. First, we need to create the quantization configuration describing the quantization process we wish to apply. Quantization will be applied on the embeddings, on the linear layers as well as on their corresponding input activations.

from neural_compressor import QuantizationAwareTrainingConfig

quantization_config = QuantizationAwareTrainingConfig()

TrainingArguments is a class that contains all the attributes to customize the training. It requires one folder name, which will be used to save the checkpoints of the model, and all other arguments are optional:

metric_name = "pearson" if task == "stsb" else "matthews_correlation" if task == "cola" else "accuracy"
save_directory = f"{model_checkpoint.split('/')[-1]}-finetuned-{task}"

args = TrainingArguments(
    output_dir=save_directory,
    do_train=True,
    do_eval=False,
    evaluation_strategy="epoch",
    save_strategy="epoch",
    learning_rate=2e-5,
    per_device_train_batch_size=batch_size,
    per_device_eval_batch_size=batch_size,
    num_train_epochs=1,
    weight_decay=0.01,
    load_best_model_at_end=True,
    metric_for_best_model=metric_name,
)

The last thing to define for our INCTrainer is how to compute the metrics from the predictions. We need to define a function for this, which will just use the metric we loaded earlier; the only preprocessing we have to do is to take the argmax of our predicted logits (or just squeeze the last axis in the case of STS-B):

import numpy as np

def compute_metrics(eval_pred):
    predictions, labels = eval_pred
    if task != "stsb":
        predictions = np.argmax(predictions, axis=1)
    else:
        predictions = predictions[:, 0]
    return metric.compute(predictions=predictions, references=labels)

Then we just need to pass all of this along with our datasets to the INCTrainer:

import copy

from optimum.intel.neural_compressor import INCTrainer

validation_key = "validation_mismatched" if task == "mnli-mm" else "validation_matched" if task == "mnli" else "validation"

trainer = INCTrainer(
    model=model,
    quantization_config=quantization_config,
    task="sequence-classification",  # optional: only needed to export the model to the ONNX format
    args=args,
    train_dataset=encoded_dataset["train"].select(range(max_train_samples)),
    eval_dataset=encoded_dataset[validation_key].select(range(max_eval_samples)),
    compute_metrics=compute_metrics,
    tokenizer=tokenizer,
    data_collator=default_data_collator,
)

fp_model = copy.deepcopy(model)
2023-01-13 13:05:39 [WARNING] Force convert framework model to neural_compressor model.

We can now finetune our model by just calling the train method:

trainer.train()
/home/ella/miniconda3/envs/inc/lib/python3.9/site-packages/transformers/optimization.py:306: FutureWarning: This implementation of AdamW is deprecated and will be removed in a future version. Use the PyTorch implementation torch.optim.AdamW instead, or set `no_deprecation_warning=True` to disable this warning
  warnings.warn(
2023-01-13 13:05:40 [INFO] Fx trace of the entire model failed. We will conduct auto quantization
***** Running Evaluation *****
  Num examples = 200
  Batch size = 16
Configuration saved in distilbert-base-uncased-finetuned-sst-2-english-finetuned-sst2/checkpoint-13/config.json
tokenizer config file saved in distilbert-base-uncased-finetuned-sst-2-english-finetuned-sst2/checkpoint-13/tokenizer_config.json
Special tokens file saved in distilbert-base-uncased-finetuned-sst-2-english-finetuned-sst2/checkpoint-13/special_tokens_map.json
Loading best model from distilbert-base-uncased-finetuned-sst-2-english-finetuned-sst2/checkpoint-13 (score: 0.755).
There were unexpected keys in the checkpoint model loaded: ['best_configure'].
2023-01-13 13:07:01 [INFO] |********Mixed Precision Statistics*******|
2023-01-13 13:07:01 [INFO] +------------------------+--------+-------+
2023-01-13 13:07:01 [INFO] |        Op Type         | Total  | INT8  |
2023-01-13 13:07:01 [INFO] +------------------------+--------+-------+
2023-01-13 13:07:01 [INFO] |       Embedding        |   2    |   2   |
2023-01-13 13:07:01 [INFO] |  quantize_per_tensor   |   51   |  51   |
2023-01-13 13:07:01 [INFO] |       LayerNorm        |   13   |  13   |
2023-01-13 13:07:01 [INFO] |       dequantize       |   51   |  51   |
2023-01-13 13:07:01 [INFO] |         Linear         |   38   |  38   |
2023-01-13 13:07:01 [INFO] |        Dropout         |   6    |   6   |
2023-01-13 13:07:01 [INFO] +------------------------+--------+-------+
2023-01-13 13:07:01 [INFO] Training finished!
TrainOutput(global_step=13, training_loss=0.5614617054279034, metrics={'train_runtime': 80.3063, 'train_samples_per_second': 2.49, 'train_steps_per_second': 0.162, 'total_flos': 6623369932800.0, 'train_loss': 0.5614617054279034, 'epoch': 1.0})

We can run evaluation by just calling the evaluate method:

trainer.evaluate()
***** Running Evaluation *****
  Num examples = 200
  Batch size = 16
{'eval_loss': 0.5514087080955505, 'eval_accuracy': 0.795, 'eval_runtime': 8.0226, 'eval_samples_per_second': 24.929, 'eval_steps_per_second': 1.62, 'epoch': 1.0}
import os
import torch

def get_model_size(model):
    torch.save(model.state_dict(), "tmp.pt")
    model_size = os.path.getsize("tmp.pt") / (1024 * 1024)
    os.remove("tmp.pt")
    return round(model_size, 2)

fp_model_size = get_model_size(fp_model)
q_model_size = get_model_size(trainer.model)

print(f"The full-precision model size is {round(fp_model_size)} MB while the quantized model one is {round(q_model_size)} MB.")
print(f"The quantized model is {round(fp_model_size / q_model_size, 2)}x smaller than the full-precision one.")
The full-precision model size is 255 MB while the quantized model one is 65 MB.
The quantized model is 3.93x smaller than the full-precision one.

To save the resulting quantized model, you can use the save_model method. By setting save_onnx_model to True, the model will additionally be exported to the ONNX format.

trainer.save_model(save_onnx_model=True)
Configuration saved in distilbert-base-uncased-finetuned-sst-2-english-finetuned-sst2/config.json
tokenizer config file saved in distilbert-base-uncased-finetuned-sst-2-english-finetuned-sst2/tokenizer_config.json
Special tokens file saved in distilbert-base-uncased-finetuned-sst-2-english-finetuned-sst2/special_tokens_map.json
2023-01-13 13:07:10 [WARNING] QDQ format requires opset_version >= 13, we reset opset_version=13 here
/home/ella/miniconda3/envs/inc/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py:217: TracerWarning: torch.tensor results are registered as constants in the trace. You can safely ignore this warning if you use this function to create tensors out of constant variables that would be the same every time you call this function. In any other case, this might cause the trace to be incorrect.
  mask, torch.tensor(torch.finfo(scores.dtype).min)
2023-01-13 13:07:12 [INFO] Weight type: QInt8.
2023-01-13 13:07:12 [INFO] Activation type: QUInt8.
WARNING:root:Please consider pre-processing before quantization. See https://github.com/microsoft/onnxruntime-inference-examples/blob/main/quantization/image_classification/cpu/ReadMe.md
WARNING:root:Please consider pre-processing before quantization. See https://github.com/microsoft/onnxruntime-inference-examples/blob/main/quantization/image_classification/cpu/ReadMe.md
2023-01-13 13:07:38 [INFO] ******************************************************************************************************************
2023-01-13 13:07:38 [INFO] The INT8 ONNX Model is exported to path: distilbert-base-uncased-finetuned-sst-2-english-finetuned-sst2/model.onnx
2023-01-13 13:07:38 [INFO] ******************************************************************************************************************

Loading the quantized model

You must instantiate your model using our INCModelForXxx [https://huggingface.co/docs/optimum/main/intel/reference_inc#optimum.intel.neural_compressor.INCModel] or ORTModelForXxx [https://huggingface.co/docs/optimum/onnxruntime/package_reference/modeling_ort] classes to load your quantized PyTorch or ONNX model, respectively, hosted locally or on the 🤗 hub:

from optimum.intel.neural_compressor import INCModelForSequenceClassification
from optimum.onnxruntime import ORTModelForSequenceClassification

pytorch_model = INCModelForSequenceClassification.from_pretrained(save_directory)
onnx_model = ORTModelForSequenceClassification.from_pretrained(save_directory)
loading configuration file distilbert-base-uncased-finetuned-sst-2-english-finetuned-sst2/config.json Model config DistilBertConfig { "_name_or_path": "distilbert-base-uncased-finetuned-sst-2-english-finetuned-sst2", "activation": "gelu", "architectures": [ "DistilBertForSequenceClassification" ], "attention_dropout": 0.1, "dim": 768, "dropout": 0.1, "finetuning_task": "sst-2", "framework": "pytorch_fx", "hidden_dim": 3072, "id2label": { "0": "NEGATIVE", "1": "POSITIVE" }, "initializer_range": 0.02, "label2id": { "NEGATIVE": 0, "POSITIVE": 1 }, "max_position_embeddings": 512, "model_type": "distilbert", "n_heads": 12, "n_layers": 6, "output_past": true, "pad_token_id": 0, "problem_type": "single_label_classification", "qa_dropout": 0.1, "seq_classif_dropout": 0.2, "sinusoidal_pos_embds": false, "tie_weights_": true, "torch_dtype": "int8", "transformers_version": "4.25.1", "vocab_size": 30522 } loading configuration file distilbert-base-uncased-finetuned-sst-2-english-finetuned-sst2/config.json Model config DistilBertConfig { "_name_or_path": "distilbert-base-uncased-finetuned-sst-2-english", "activation": "gelu", "architectures": [ "DistilBertForSequenceClassification" ], "attention_dropout": 0.1, "dim": 768, "dropout": 0.1, "finetuning_task": "sst-2", "framework": "pytorch_fx", "hidden_dim": 3072, "id2label": { "0": "NEGATIVE", "1": "POSITIVE" }, "initializer_range": 0.02, "label2id": { "NEGATIVE": 0, "POSITIVE": 1 }, "max_position_embeddings": 512, "model_type": "distilbert", "n_heads": 12, "n_layers": 6, "output_past": true, "pad_token_id": 0, "problem_type": "single_label_classification", "qa_dropout": 0.1, "seq_classif_dropout": 0.2, "sinusoidal_pos_embds": false, "tie_weights_": true, "torch_dtype": "int8", "transformers_version": "4.25.1", "vocab_size": 30522 } loading weights file distilbert-base-uncased-finetuned-sst-2-english-finetuned-sst2/pytorch_model.bin All model checkpoint weights were used when initializing DistilBertForSequenceClassification. All the weights of DistilBertForSequenceClassification were initialized from the model checkpoint at distilbert-base-uncased-finetuned-sst-2-english-finetuned-sst2. If your task is similar to the task the model of the checkpoint was trained on, you can already use DistilBertForSequenceClassification for predictions without further training. 2023-01-13 13:07:39 [WARNING] Please provide the example_inputs or a dataloader to get example_inputs for quantized model. 2023-01-13 13:07:39 [INFO] Fx trace of the entire model failed. We will conduct auto quantization /home/ella/miniconda3/envs/inc/lib/python3.9/site-packages/torch/ao/quantization/utils.py:287: UserWarning: must run observer before calling calculate_qparams. Returning default values. 
warnings.warn( loading configuration file distilbert-base-uncased-finetuned-sst-2-english-finetuned-sst2/config.json Model config DistilBertConfig { "_name_or_path": "distilbert-base-uncased-finetuned-sst-2-english-finetuned-sst2/config.json", "activation": "gelu", "architectures": [ "DistilBertForSequenceClassification" ], "attention_dropout": 0.1, "dim": 768, "dropout": 0.1, "finetuning_task": "sst-2", "framework": "pytorch_fx", "hidden_dim": 3072, "id2label": { "0": "NEGATIVE", "1": "POSITIVE" }, "initializer_range": 0.02, "label2id": { "NEGATIVE": 0, "POSITIVE": 1 }, "max_position_embeddings": 512, "model_type": "distilbert", "n_heads": 12, "n_layers": 6, "output_past": true, "pad_token_id": 0, "problem_type": "single_label_classification", "qa_dropout": 0.1, "seq_classif_dropout": 0.2, "sinusoidal_pos_embds": false, "tie_weights_": true, "torch_dtype": "int8", "transformers_version": "4.25.1", "vocab_size": 30522 } loading file vocab.txt loading file tokenizer.json loading file added_tokens.json loading file special_tokens_map.json loading file tokenizer_config.json loading configuration file distilbert-base-uncased-finetuned-sst-2-english-finetuned-sst2/config.json Model config DistilBertConfig { "_name_or_path": "distilbert-base-uncased-finetuned-sst-2-english-finetuned-sst2", "activation": "gelu", "architectures": [ "DistilBertForSequenceClassification" ], "attention_dropout": 0.1, "dim": 768, "dropout": 0.1, "finetuning_task": "sst-2", "framework": "pytorch_fx", "hidden_dim": 3072, "id2label": { "0": "NEGATIVE", "1": "POSITIVE" }, "initializer_range": 0.02, "label2id": { "NEGATIVE": 0, "POSITIVE": 1 }, "max_position_embeddings": 512, "model_type": "distilbert", "n_heads": 12, "n_layers": 6, "output_past": true, "pad_token_id": 0, "problem_type": "single_label_classification", "qa_dropout": 0.1, "seq_classif_dropout": 0.2, "sinusoidal_pos_embds": false, "tie_weights_": true, "torch_dtype": "int8", "transformers_version": "4.25.1", "vocab_size": 30522 } loading file vocab.txt loading file tokenizer.json loading file added_tokens.json loading file special_tokens_map.json loading file tokenizer_config.json
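As a final sanity check (an illustrative addition, not part of the original cells), the loaded quantized models can be used for inference like any other 🤗 model, for instance through the pipeline helper:

from transformers import pipeline

# Run the quantized ONNX model on a sample sentence
onnx_classifier = pipeline("text-classification", model=onnx_model, tokenizer=tokenizer)
print(onnx_classifier("I really enjoyed this movie!"))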