PyTorch Quantization Aware Training

Introduction

Static quantization allows the user to generate a quantized integer model that is highly efficient during inference. However, even with careful post-training calibration, the model accuracy is sometimes sacrificed to an extent that is not acceptable. In such cases, post-training calibration alone is not sufficient to generate an acceptable quantized integer model, and we have to train the model in a way that takes the quantization effect into account. Quantization aware training is capable of modeling the quantization effect during training.

The mechanism of quantization aware training is simple: it places fake quantization modules, i.e., quantization and dequantization modules, at the locations where quantization happens during the conversion from a floating point model to a quantized integer model, in order to simulate the clamping and rounding effects of integer quantization. The fake quantization modules also monitor the scales and zero points of the weights and activations. Once quantization aware training is finished, the floating point model can be converted to a quantized integer model immediately using the information stored in the fake quantization modules.
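
To make the simulated quantization concrete, here is a minimal sketch of what a fake quantization module computes, using torch.fake_quantize_per_tensor_affine with hand-picked example values for the scale and zero point (in real quantization aware training these values are determined by the observers):

import torch

x = torch.randn(4)
scale, zero_point = 0.1, 0

# Round and clamp x onto the signed INT8 grid, then map it back to floating point.
# Numerically equivalent to
# (clamp(round(x / scale) + zero_point, -128, 127) - zero_point) * scale.
x_fake_quantized = torch.fake_quantize_per_tensor_affine(
    x, scale, zero_point, quant_min=-128, quant_max=127)

print(x)
print(x_fake_quantized)

The forward pass therefore sees the rounding and clamping error, while the backward pass typically uses a straight-through estimator so that gradients can still flow through the rounding operation.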

In this blog post, I would like to show how to use PyTorch to perform quantization aware training. More details about the mathematical foundations of quantization for neural networks can be found in my article “Quantization for Neural Networks”.

PyTorch Quantization Aware Training

Unlike TensorFlow 2.3.0, which supports integer quantization using arbitrary bitwidths from 2 to 16, PyTorch 1.7.0 only supports 8-bit integer quantization. The workflow could be as easy as loading a pre-trained floating point model and applying a quantization aware training wrapper. However, without layer fusion, such an easy approach sometimes does not result in good model performance.
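
To illustrate how light the wrapper-style workflow can be, here is a minimal sketch of the eager-mode quantization aware training API applied to a toy model that stands in for the pre-trained network (the fbgemm backend assumes an x86 CPU; the model and fine-tuning step are placeholders, not the ResNet18 pipeline used later):

import torch
import torch.nn as nn

# Toy stand-in for a pre-trained floating point model.
model_fp32 = nn.Sequential(
    torch.quantization.QuantStub(),
    nn.Conv2d(3, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    torch.quantization.DeQuantStub(),
)

model_fp32.train()
model_fp32.qconfig = torch.quantization.get_default_qat_qconfig("fbgemm")
# Insert observers and fake quantization modules.
model_qat = torch.quantization.prepare_qat(model_fp32)

# ... fine-tune model_qat on the training data here ...

# Convert the fake-quantized model into a real INT8 model on CPU.
model_qat.eval()
model_int8 = torch.quantization.convert(model_qat)
print(model_int8)

Note that, without layer fusion, modules such as a standalone batch normalization layer have no quantized implementation, which is one reason why the full pipeline below fuses Conv-BN-ReLU blocks before wrapping the model.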

In this blog post, I will again use ResNet18 from TorchVision models as the example. All the steps prior to the quantization aware training steps, including layer fusion and skip connection replacement, are exactly the same as the ones used in “PyTorch Static Quantization”. The source code could also be downloaded from GitHub.
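
For reference, the skip connection replacement mentioned above essentially routes the residual addition through nn.quantized.FloatFunctional, because the plain + operator is not supported on quantized tensors. A rough sketch of the idea (not the exact BasicBlock code from that post):

import torch
import torch.nn as nn

class ResidualAddSketch(nn.Module):
    # Hypothetical module illustrating a quantization-friendly residual addition.
    def __init__(self):
        super().__init__()
        # FloatFunctional carries its own observer so that the addition
        # can be converted to a quantized add during model conversion.
        self.skip_add = nn.quantized.FloatFunctional()

    def forward(self, out, identity):
        # Replaces `out = out + identity`.
        return self.skip_add.add(out, identity)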

The quantization aware training steps are also very similar to the ones used for post-training calibration:

  1. Train a floating point model or load a pre-trained floating point model.
  2. Move the model to CPU and switch the model to training mode.
  3. Apply layer fusion.
  4. Switch the model to evaluation mode, check if the layer fusion results in a correct model, and switch back to training mode.
  5. Apply torch.quantization.QuantStub() and torch.quantization.DeQuantStub() to the inputs and outputs, respectively.
  6. Specify quantization configurations, such as symmetric or asymmetric quantization.
  7. Prepare the model for quantization aware training.
  8. Move the model to CUDA and run quantization aware training using CUDA.
  9. Move the model to CPU and convert the quantization aware trained floating point model to a quantized integer model.
  10. [Optional] Verify the accuracies and inference performance gain.
  11. Save the quantized integer model.

The quantization aware training script is very similar to the one used in “PyTorch Static Quantization”:

cifar.py
import os
import random

import torch
import torch.nn as nn
import torch.optim as optim
import torchvision
from torchvision import datasets, transforms

import time
import copy
import numpy as np

from resnet import resnet18

def set_random_seeds(random_seed=0):

    torch.manual_seed(random_seed)
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False
    np.random.seed(random_seed)
    random.seed(random_seed)

def prepare_dataloader(num_workers=8, train_batch_size=128, eval_batch_size=256):

    train_transform = transforms.Compose([
        transforms.RandomCrop(32, padding=4),
        transforms.RandomHorizontalFlip(),
        transforms.ToTensor(),
        # transforms.Normalize((0.4914, 0.4822, 0.4465), (0.2023, 0.1994, 0.2010)),
        transforms.Normalize(mean=(0.485, 0.456, 0.406), std=(0.229, 0.224, 0.225))
    ])

    test_transform = transforms.Compose([
        transforms.ToTensor(),
        # transforms.Normalize((0.4914, 0.4822, 0.4465), (0.2023, 0.1994, 0.2010)),
        transforms.Normalize(mean=(0.485, 0.456, 0.406), std=(0.229, 0.224, 0.225))
    ])

    train_set = torchvision.datasets.CIFAR10(root="data", train=True, download=True, transform=train_transform)
    # We will use test set for validation and test in this project.
    # Do not use test set for validation in practice!
    test_set = torchvision.datasets.CIFAR10(root="data", train=False, download=True, transform=test_transform)

    train_sampler = torch.utils.data.RandomSampler(train_set)
    test_sampler = torch.utils.data.SequentialSampler(test_set)

    train_loader = torch.utils.data.DataLoader(
        dataset=train_set, batch_size=train_batch_size,
        sampler=train_sampler, num_workers=num_workers)

    test_loader = torch.utils.data.DataLoader(
        dataset=test_set, batch_size=eval_batch_size,
        sampler=test_sampler, num_workers=num_workers)

    return train_loader, test_loader

def evaluate_model(model, test_loader, device, criterion=None):

    model.eval()
    model.to(device)

    running_loss = 0
    running_corrects = 0

    for inputs, labels in test_loader:

        inputs = inputs.to(device)
        labels = labels.to(device)

        outputs = model(inputs)
        _, preds = torch.max(outputs, 1)

        if criterion is not None:
            loss = criterion(outputs, labels).item()
        else:
            loss = 0

        # statistics
        running_loss += loss * inputs.size(0)
        running_corrects += torch.sum(preds == labels.data)

    eval_loss = running_loss / len(test_loader.dataset)
    eval_accuracy = running_corrects / len(test_loader.dataset)

    return eval_loss, eval_accuracy

def train_model(model, train_loader, test_loader, device, learning_rate=1e-1, num_epochs=200):

    # The training configurations were not carefully selected.

    criterion = nn.CrossEntropyLoss()

    model.to(device)

    # It seems that SGD optimizer is better than Adam optimizer for ResNet18 training on CIFAR10.
    optimizer = optim.SGD(model.parameters(), lr=learning_rate, momentum=0.9, weight_decay=1e-4)
    # scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=500)
    scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[100, 150], gamma=0.1, last_epoch=-1)
    # optimizer = optim.Adam(model.parameters(), lr=learning_rate, betas=(0.9, 0.999), eps=1e-08, weight_decay=0, amsgrad=False)

    # Evaluation
    model.eval()
    eval_loss, eval_accuracy = evaluate_model(model=model, test_loader=test_loader, device=device, criterion=criterion)
    print("Epoch: {:02d} Eval Loss: {:.3f} Eval Acc: {:.3f}".format(-1, eval_loss, eval_accuracy))

    for epoch in range(num_epochs):

        # Training
        model.train()

        running_loss = 0
        running_corrects = 0

        for inputs, labels in train_loader:

            inputs = inputs.to(device)
            labels = labels.to(device)

            # zero the parameter gradients
            optimizer.zero_grad()

            # forward + backward + optimize
            outputs = model(inputs)
            _, preds = torch.max(outputs, 1)
            loss = criterion(outputs, labels)
            loss.backward()
            optimizer.step()

            # statistics
            running_loss += loss.item() * inputs.size(0)
            running_corrects += torch.sum(preds == labels.data)

        train_loss = running_loss / len(train_loader.dataset)
        train_accuracy = running_corrects / len(train_loader.dataset)

        # Evaluation
        model.eval()
        eval_loss, eval_accuracy = evaluate_model(model=model, test_loader=test_loader, device=device, criterion=criterion)

        # Set learning rate scheduler
        scheduler.step()

        print("Epoch: {:03d} Train Loss: {:.3f} Train Acc: {:.3f} Eval Loss: {:.3f} Eval Acc: {:.3f}".format(epoch, train_loss, train_accuracy, eval_loss, eval_accuracy))

    return model

def calibrate_model(model, loader, device=torch.device("cpu:0")):

    model.to(device)
    model.eval()

    for inputs, labels in loader:
        inputs = inputs.to(device)
        labels = labels.to(device)
        _ = model(inputs)

def measure_inference_latency(model,
                              device,
                              input_size=(1, 3, 32, 32),
                              num_samples=100,
                              num_warmups=10):

    model.to(device)
    model.eval()

    x = torch.rand(size=input_size).to(device)

    with torch.no_grad():
        for _ in range(num_warmups):
            _ = model(x)
    torch.cuda.synchronize()

    with torch.no_grad():
        start_time = time.time()
        for _ in range(num_samples):
            _ = model(x)
            torch.cuda.synchronize()
        end_time = time.time()
    elapsed_time = end_time - start_time
    elapsed_time_ave = elapsed_time / num_samples

    return elapsed_time_ave

def save_model(model, model_dir, model_filename):

    if not os.path.exists(model_dir):
        os.makedirs(model_dir)
    model_filepath = os.path.join(model_dir, model_filename)
    torch.save(model.state_dict(), model_filepath)

def load_model(model, model_filepath, device):

    model.load_state_dict(torch.load(model_filepath, map_location=device))

    return model

def save_torchscript_model(model, model_dir, model_filename):

    if not os.path.exists(model_dir):
        os.makedirs(model_dir)
    model_filepath = os.path.join(model_dir, model_filename)
    torch.jit.save(torch.jit.script(model), model_filepath)

def load_torchscript_model(model_filepath, device):

    model = torch.jit.load(model_filepath, map_location=device)

    return model

def create_model(num_classes=10):

    # The number of channels in ResNet18 is divisible by 8.
    # This is required for fast GEMM integer matrix multiplication.
    # model = torchvision.models.resnet18(pretrained=False)
    model = resnet18(num_classes=num_classes, pretrained=False)

    # We would use the pretrained ResNet18 as a feature extractor.
    # for param in model.parameters():
    #     param.requires_grad = False

    # Modify the last FC layer
    # num_features = model.fc.in_features
    # model.fc = nn.Linear(num_features, 10)

    return model

class QuantizedResNet18(nn.Module):
    def __init__(self, model_fp32):
        super(QuantizedResNet18, self).__init__()
        # QuantStub converts tensors from floating point to quantized.
        # This will only be used for inputs.
        self.quant = torch.quantization.QuantStub()
        # DeQuantStub converts tensors from quantized to floating point.
        # This will only be used for outputs.
        self.dequant = torch.quantization.DeQuantStub()
        # FP32 model
        self.model_fp32 = model_fp32

    def forward(self, x):
        # manually specify where tensors will be converted from floating
        # point to quantized in the quantized model
        x = self.quant(x)
        x = self.model_fp32(x)
        # manually specify where tensors will be converted from quantized
        # to floating point in the quantized model
        x = self.dequant(x)
        return x

def model_equivalence(model_1, model_2, device, rtol=1e-05, atol=1e-08, num_tests=100, input_size=(1,3,32,32)):

    model_1.to(device)
    model_2.to(device)

    for _ in range(num_tests):
        x = torch.rand(size=input_size).to(device)
        y1 = model_1(x).detach().cpu().numpy()
        y2 = model_2(x).detach().cpu().numpy()
        if np.allclose(a=y1, b=y2, rtol=rtol, atol=atol, equal_nan=False) == False:
            print("Model equivalence test sample failed: ")
            print(y1)
            print(y2)
            return False

    return True

def main():

    random_seed = 0
    num_classes = 10
    cuda_device = torch.device("cuda:0")
    cpu_device = torch.device("cpu:0")

    model_dir = "saved_models"
    model_filename = "resnet18_cifar10.pt"
    quantized_model_filename = "resnet18_quantized_cifar10.pt"
    model_filepath = os.path.join(model_dir, model_filename)
    quantized_model_filepath = os.path.join(model_dir, quantized_model_filename)

    set_random_seeds(random_seed=random_seed)

    # Create an untrained model.
    model = create_model(num_classes=num_classes)

    train_loader, test_loader = prepare_dataloader(num_workers=8, train_batch_size=128, eval_batch_size=256)

    # Train model.
    print("Training Model...")
    model = train_model(model=model, train_loader=train_loader, test_loader=test_loader, device=cuda_device, learning_rate=1e-1, num_epochs=200)
    # Save model.
    save_model(model=model, model_dir=model_dir, model_filename=model_filename)
    # Load a pretrained model.
    model = load_model(model=model, model_filepath=model_filepath, device=cuda_device)
    # Move the model to CPU since static quantization does not support CUDA currently.
    model.to(cpu_device)
    # Make a copy of the model for layer fusion
    fused_model = copy.deepcopy(model)

    model.train()
    # The model has to be switched to training mode before any layer fusion.
    # Otherwise the quantization aware training will not work correctly.
    fused_model.train()

    # Fuse the model in place rather than manually.
    fused_model = torch.quantization.fuse_modules(fused_model, [["conv1", "bn1", "relu"]], inplace=True)
    for module_name, module in fused_model.named_children():
        if "layer" in module_name:
            for basic_block_name, basic_block in module.named_children():
                torch.quantization.fuse_modules(basic_block, [["conv1", "bn1", "relu1"], ["conv2", "bn2"]], inplace=True)
                for sub_block_name, sub_block in basic_block.named_children():
                    if sub_block_name == "downsample":
                        torch.quantization.fuse_modules(sub_block, [["0", "1"]], inplace=True)

    # Print FP32 model.
    print(model)
    # Print fused model.
    print(fused_model)

    # Model and fused model should be equivalent.
    model.eval()
    fused_model.eval()
    assert model_equivalence(model_1=model, model_2=fused_model, device=cpu_device, rtol=1e-03, atol=1e-06, num_tests=100, input_size=(1,3,32,32)), "Fused model is not equivalent to the original model!"

    # Prepare the model for quantization aware training. This inserts observers in
    # the model that will observe activation tensors during calibration.
    quantized_model = QuantizedResNet18(model_fp32=fused_model)
    # Using un-fused model will fail.
    # Because there is no quantized layer implementation for a single batch normalization layer.
    # quantized_model = QuantizedResNet18(model_fp32=model)
    # Select quantization schemes from
    # https://pytorch.org/docs/stable/quantization-support.html
    quantization_config = torch.quantization.get_default_qconfig("fbgemm")
    # Custom quantization configurations
    # quantization_config = torch.quantization.default_qconfig
    # quantization_config = torch.quantization.QConfig(activation=torch.quantization.MinMaxObserver.with_args(dtype=torch.quint8), weight=torch.quantization.MinMaxObserver.with_args(dtype=torch.qint8, qscheme=torch.per_tensor_symmetric))

    quantized_model.qconfig = quantization_config

    # Print quantization configurations
    print(quantized_model.qconfig)

    # https://pytorch.org/docs/stable/_modules/torch/quantization/quantize.html#prepare_qat
    torch.quantization.prepare_qat(quantized_model, inplace=True)

    # Use training data for calibration.
    print("Training QAT Model...")
    quantized_model.train()
    train_model(model=quantized_model, train_loader=train_loader, test_loader=test_loader, device=cuda_device, learning_rate=1e-3, num_epochs=10)
    quantized_model.to(cpu_device)

    # Using high-level static quantization wrapper
    # The above steps, including torch.quantization.prepare, calibrate_model, and torch.quantization.convert, are also equivalent to
    # quantized_model = torch.quantization.quantize_qat(model=quantized_model, run_fn=train_model, run_args=[train_loader, test_loader, cuda_device], mapping=None, inplace=False)

    quantized_model = torch.quantization.convert(quantized_model, inplace=True)

    quantized_model.eval()

    # Print quantized model.
    print(quantized_model)

    # Save quantized model.
    save_torchscript_model(model=quantized_model, model_dir=model_dir, model_filename=quantized_model_filename)

    # Load quantized model.
    quantized_jit_model = load_torchscript_model(model_filepath=quantized_model_filepath, device=cpu_device)

    _, fp32_eval_accuracy = evaluate_model(model=model, test_loader=test_loader, device=cpu_device, criterion=None)
    _, int8_eval_accuracy = evaluate_model(model=quantized_jit_model, test_loader=test_loader, device=cpu_device, criterion=None)

    # Skip this assertion since the values might deviate a lot.
    # assert model_equivalence(model_1=model, model_2=quantized_jit_model, device=cpu_device, rtol=1e-01, atol=1e-02, num_tests=100, input_size=(1,3,32,32)), "Quantized model deviates from the original model too much!"

    print("FP32 evaluation accuracy: {:.3f}".format(fp32_eval_accuracy))
    print("INT8 evaluation accuracy: {:.3f}".format(int8_eval_accuracy))

    fp32_cpu_inference_latency = measure_inference_latency(model=model, device=cpu_device, input_size=(1,3,32,32), num_samples=100)
    int8_cpu_inference_latency = measure_inference_latency(model=quantized_model, device=cpu_device, input_size=(1,3,32,32), num_samples=100)
    int8_jit_cpu_inference_latency = measure_inference_latency(model=quantized_jit_model, device=cpu_device, input_size=(1,3,32,32), num_samples=100)
    fp32_gpu_inference_latency = measure_inference_latency(model=model, device=cuda_device, input_size=(1,3,32,32), num_samples=100)

    print("FP32 CPU Inference Latency: {:.2f} ms / sample".format(fp32_cpu_inference_latency * 1000))
    print("FP32 CUDA Inference Latency: {:.2f} ms / sample".format(fp32_gpu_inference_latency * 1000))
    print("INT8 CPU Inference Latency: {:.2f} ms / sample".format(int8_cpu_inference_latency * 1000))
    print("INT8 JIT CPU Inference Latency: {:.2f} ms / sample".format(int8_jit_cpu_inference_latency * 1000))

if __name__ == "__main__":

    main()

The accuracy and inference performance of the quantized model with layer fusion are:

FP32 evaluation accuracy: 0.869
INT8 evaluation accuracy: 0.867
FP32 CPU Inference Latency: 4.36 ms / sample
FP32 CUDA Inference Latency: 3.55 ms / sample
INT8 CPU Inference Latency: 1.85 ms / sample
INT8 JIT CPU Inference Latency: 0.41 ms / sample

Conclusions

Compared to the accuracy and inference performance from “PyTorch Static Quantization”, PyTorch quantization aware training achieves the same inference performance on CPU with better accuracy.
