# PyTorch Static Quantization

## Introduction

Static quantization quantizes both the weights and the activations of the model, and allows activations to be fused into preceding layers where possible. Unlike dynamic quantization, where the scales and zero points for activations are collected during inference, the scales and zero points for static quantization are determined prior to inference using a representative calibration dataset. Static quantization is therefore theoretically faster than dynamic quantization, while the model size and memory bandwidth consumption remain the same. As a result, statically quantized models are more favorable for inference than dynamically quantized models.

In this blog post, I would like to show how to use PyTorch to do static quantization. More details about the mathematical foundations of quantization for neural networks can be found in my article “Quantization for Neural Networks”.

## PyTorch Static Quantization

Unlike TensorFlow 2.3.0, which supports integer quantization using arbitrary bitwidths from 2 to 16, PyTorch 1.7.0 only supports 8-bit integer quantization. The workflow could be as easy as loading a pre-trained floating-point model and applying a static quantization wrapper. However, without layer fusion, such a simple manipulation sometimes does not result in good model performance.

In this case, I would like to use ResNet18 from the TorchVision models as an example. I will do post-training quantization with and without layer fusion and compare their performances. The source code can also be downloaded from GitHub.

Because ResNet uses addition for its skip connections, and this addition is implemented with + in the TorchVision implementation, we have to replace + (the torch.add equivalent) with FloatFunctional.add (the torch.add + torch.nn.Identity equivalent) in the model definition. This is because torch.nn.Identity serves as a flag for activation quantization. Without it, there would be no activation quantization for the skip connection additions, resulting in erroneous quantization calibration.
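The replacement can be sketched as follows. `ResidualBlock` here is a minimal hypothetical stand-in, not the full TorchVision `BasicBlock`; the point is only where `FloatFunctional.add` replaces the plain `+`.

```python
import torch
import torch.nn as nn


# Minimal residual block sketch (hypothetical, not the TorchVision BasicBlock)
# showing the replacement of `+` with FloatFunctional.add so that the skip
# connection addition gets its own activation observer during calibration.
class ResidualBlock(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.relu1 = nn.ReLU()
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu2 = nn.ReLU()
        # FloatFunctional behaves like torch.add in FP32, but is visible
        # to the quantization workflow so its output can be calibrated.
        self.skip_add = nn.quantized.FloatFunctional()

    def forward(self, x):
        identity = x
        out = self.relu1(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        # Instead of: out = out + identity
        out = self.skip_add.add(out, identity)
        return self.relu2(out)
```

In FP32 the block behaves exactly as if `+` were used; the difference only matters once observers are inserted for calibration.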

In addition, we would like to test layer fusion, such as fusing Conv2d, BatchNorm, and ReLU. To do layer fusion, the torch.nn.Module names must not overlap; otherwise fusion will cause erroneous quantization calibration. For example, in an ordinary FP32 model, we could define one parameter-free relu = torch.nn.ReLU() and reuse this relu module everywhere. However, if we want to fuse some specific ReLUs, the ReLU modules have to be explicitly separated, so we have to define relu1 = torch.nn.ReLU(), relu2 = torch.nn.ReLU(), etc. Sometimes, layer fusion is compulsory, because some floating-point layers, such as BatchNorm, have no corresponding quantized layer implementations.
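A minimal fusion sketch, assuming a hypothetical `ConvBNReLU` stand-in module: `torch.quantization.fuse_modules` matches submodules by attribute name, which is why each ReLU needs its own distinct name.

```python
import torch
import torch.nn as nn


# Hypothetical stand-in module with distinctly named submodules, so that
# fuse_modules can refer to them by attribute name.
class ConvBNReLU(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 8, kernel_size=3, padding=1)
        self.bn1 = nn.BatchNorm2d(8)
        self.relu1 = nn.ReLU()

    def forward(self, x):
        return self.relu1(self.bn1(self.conv1(x)))


model = ConvBNReLU().eval()  # fusion for inference requires evaluation mode
# Fuse Conv2d + BatchNorm2d + ReLU into a single fused module; the
# fused-away bn1 and relu1 are replaced with torch.nn.Identity.
fused_model = torch.quantization.fuse_modules(model, [["conv1", "bn1", "relu1"]])
```

The fused model is numerically equivalent to the original FP32 model, which is easy to verify by comparing outputs on the same input.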

Taken together, the modified ResNet module definition resnet.py is as follows.

The next steps are:

1. Train a floating point model or load a pre-trained floating point model.
2. Move the model to CPU and switch model to evaluation mode.
3. Apply layer fusion and check if the layer fusion results in correct model.
4. Apply torch.quantization.QuantStub() and torch.quantization.DeQuantStub() to the inputs and outputs, respectively.
5. Specify quantization configurations, such as symmetric quantization or asymmetric quantization, etc.
6. Prepare quantization model for post-training calibration.
7. Run post-training calibration.
8. Convert the calibrated floating point model to quantized integer model.
9. [Optional] Verify accuracies and inference performance gain.
10. Save the quantized integer model.

Note that step 4 asks PyTorch to specifically collect quantization statistics for the inputs and outputs. By default, PyTorch collects quantization statistics only for weights and activations, so without the stubs there would be no statistics for input quantization and output dequantization, and both would fail.
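The steps above can be sketched end to end. `QuantizableModel` is a small hypothetical stand-in rather than the modified ResNet18, and the calibration data here is random; in practice a representative dataset must be used.

```python
import torch
import torch.nn as nn


# Small hypothetical stand-in model with QuantStub/DeQuantStub (step 4).
class QuantizableModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.quant = torch.quantization.QuantStub()      # quantizes the input
        self.conv1 = nn.Conv2d(3, 8, kernel_size=3, padding=1)
        self.relu1 = nn.ReLU()
        self.dequant = torch.quantization.DeQuantStub()  # dequantizes the output

    def forward(self, x):
        x = self.quant(x)
        x = self.relu1(self.conv1(x))
        return self.dequant(x)


model = QuantizableModel()
model.eval()  # steps 1-2: (pre-trained) model on CPU in evaluation mode
# Step 3: layer fusion.
torch.quantization.fuse_modules(model, [["conv1", "relu1"]], inplace=True)
# Step 5: quantization configuration ("fbgemm" is the x86 CPU backend).
model.qconfig = torch.quantization.get_default_qconfig("fbgemm")
# Step 6: insert observers for post-training calibration.
torch.quantization.prepare(model, inplace=True)
# Step 7: run calibration data through the model (random here, as a placeholder
# for a representative dataset).
with torch.no_grad():
    for _ in range(8):
        model(torch.randn(1, 3, 32, 32))
# Step 8: convert the calibrated model to a quantized integer model.
torch.quantization.convert(model, inplace=True)
# Step 10: save, e.g. via TorchScript (filename is arbitrary):
# torch.jit.save(torch.jit.script(model), "quantized_model.pt")
```

After conversion, the model runs with INT8 weights and activations internally, while still accepting and returning ordinary FP32 tensors thanks to the stubs.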

The accuracy and inference performance for the quantized model with layer fusion are as follows.

Note that the evaluation accuracy of ResNet18 on the CIFAR10 ($32 \times 32$) dataset is not as high as 0.95. This is because the original ResNet was designed specifically for ImageNet ($224 \times 224$) classification. To improve the accuracy of ResNet on the CIFAR10 dataset, we could change the kernel size of the first convolution from $7$ to $3$ and reduce the stride of the first convolution from $2$ to $1$.

## Conclusions

PyTorch static quantization results in much faster inference performance on CPU with minimal accuracy loss.

## Extensions

To do quantized inference on CUDA, please refer to TensorRT for symmetric post-training quantization. The scale values of symmetrically quantized PyTorch models can also be used by TensorRT to build an inference engine without doing additional post-training quantization.

Lei Mao

11-28-2020

04-29-2021