PyTorch Pruning
Introduction
State-of-the-art neural networks have become heavily over-parameterized in order to maximize prediction accuracy. However, such models are also costly to run, and their inference latency becomes a bottleneck. On resource-constrained edge devices, the model is subject to tight restrictions and cannot be over-parameterized as much as we would like.
Sparse neural networks can perform as well as dense neural networks with respect to prediction accuracy, while their inference latency is theoretically much lower due to the smaller model size. Neural network pruning is a method for creating sparse neural networks from pre-trained dense neural networks.
In this blog post, I would like to show how to use PyTorch to do pruning. More details about the mathematical foundations of pruning for neural networks can be found in my article “Pruning for Neural Networks”.
PyTorch Pruning
To demonstrate the effectiveness of pruning, a ResNet18 model is first pre-trained on the CIFAR-10 dataset, achieving a prediction accuracy of $86.9\%$. The pre-trained model is then pruned and fine-tuned. The number of parameters could be reduced by $98\%$, i.e., a $50\times$ compression, while maintaining the prediction accuracy within $1\%$ of the original model. The source code can be downloaded from GitHub.
The pruning is overall straightforward to do if we don’t need to customize the pruning algorithm. In this case, ResNet18 is able to achieve $50\times$ compression by using L1 unstructured pruning on weights, i.e., pruning the weights that have the smallest absolute values.
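The full training and pruning script can be downloaded from the GitHub repository mentioned above. As a minimal sketch, assuming a torchvision ResNet18 stands in for the CIFAR-10 pre-trained model (the layer selection and the prune rate below are illustrative, not the exact settings used), L1 unstructured pruning can be applied to the convolutional and linear layers as follows:

```python
import torch.nn as nn
import torch.nn.utils.prune as prune
import torchvision

# A torchvision ResNet18 as a stand-in for the CIFAR-10 pre-trained model.
model = torchvision.models.resnet18(num_classes=10)

# Apply L1 unstructured pruning to the weights of every Conv2d and Linear
# layer, i.e., zero out the weights with the smallest absolute values.
for module in model.modules():
    if isinstance(module, (nn.Conv2d, nn.Linear)):
        prune.l1_unstructured(module, name="weight", amount=0.9)

# The pruned weights are zeroed out via "weight_mask" applied to
# "weight_orig"; fine-tuning of the remaining weights would follow here.
num_zeros = sum(int((m.weight == 0).sum()) for m in model.modules()
                if isinstance(m, (nn.Conv2d, nn.Linear)))
num_weights = sum(m.weight.nelement() for m in model.modules()
                  if isinstance(m, (nn.Conv2d, nn.Linear)))
print(f"Weight sparsity: {num_zeros / num_weights:.2f}")
```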
Caveats
Sparsity for Iterative Pruning
The prune.l1_unstructured function takes an amount argument, which can be either the fraction of connections to prune (if it is a float between $0$ and $1$) or the absolute number of connections to prune (if it is a non-negative integer). When it is a fraction, it is relative to the number of currently unmasked parameters in the module. For example, in iterative pruning, suppose we prune the weights of a certain layer with amount=0.2 in the first iteration and further prune the same layer with amount=0.2 in the second iteration. The fraction of valid parameters after the pruning will be $1 \times (1 - 0.2) \times (1 - 0.2)$, and the sparsity of the parameters, i.e., the prune rate, of this module will be $1 - 1 \times (1 - 0.2) \times (1 - 0.2) = 0.36$.
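The following small example illustrates this behavior (the layer type and its size are arbitrary); pruning the same weight tensor twice with amount=0.2 yields a sparsity of $0.36$ rather than $0.4$:

```python
import torch.nn as nn
import torch.nn.utils.prune as prune

layer = nn.Linear(100, 100)  # a toy layer with 10,000 weights

# First iteration: prune 20% of the unmasked weights by L1 magnitude.
prune.l1_unstructured(layer, name="weight", amount=0.2)
# Second iteration: prune 20% of the *remaining* unmasked weights.
prune.l1_unstructured(layer, name="weight", amount=0.2)

# Sparsity is 1 - (1 - 0.2) * (1 - 0.2) = 0.36, not 0.4.
sparsity = float((layer.weight == 0).sum()) / layer.weight.nelement()
print(f"Sparsity: {sparsity:.2f}")
```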
Formally, the final prune rate can be calculated using the following equation. Suppose the relative prune rate for each iteration is $\gamma$; the final prune rate after $n$ iterations will be
$$
1 - (1 - \gamma)^n
$$
Similarly, it is also easy to derive the final prune rate for the scenario in which $\gamma$ is different in each iteration.
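Concretely, if the relative prune rate in iteration $i$ is $\gamma_i$, the final prune rate after $n$ iterations becomes

$$
1 - \prod_{i=1}^{n} (1 - \gamma_i)
$$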
Local Pruning VS Grouped Pruning
Local pruning prunes the parameters module by module; the parameters from other modules do not affect which parameters are pruned. We can specify the prune rate for each layer in the network explicitly.
Grouped pruning, sometimes referred to as global pruning, groups many different modules together and prunes the parameters in these modules as if they came from a single module. We can still specify the overall prune rate explicitly; however, the resulting prune rate for each individual layer will generally be different.
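A minimal sketch of grouped pruning with prune.global_unstructured, assuming the same ResNet18 model as above (the $98\%$ prune rate mirrors the experiment described below):

```python
import torch.nn as nn
import torch.nn.utils.prune as prune
import torchvision

model = torchvision.models.resnet18(num_classes=10)

# Collect the weight tensors of all Conv2d and Linear layers into one group.
parameters_to_prune = [
    (module, "weight") for module in model.modules()
    if isinstance(module, (nn.Conv2d, nn.Linear))
]

# Prune 98% of the grouped weights by L1 magnitude. The per-layer prune
# rates will differ, since the smallest weights are selected globally.
prune.global_unstructured(
    parameters_to_prune,
    pruning_method=prune.L1Unstructured,
    amount=0.98,
)
```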
In our ResNet18-CIFAR10 example, grouped pruning performs much better than local pruning. With grouped pruning, we could maintain the prediction accuracy at $86.8\%$ with a pruning rate of $98\%$, whereas with local pruning, we could only maintain the prediction accuracy at around $82.8\%$ with a pruning rate of $94\%$.
One-Time VS Multi-Time Iterative Pruning + Fine-Tuning
Unlike one-time iterative pruning + fine-tuning, which achieves the desired prune rate by pruning and fine-tuning once, multi-time iterative pruning + fine-tuning achieves the desired prune rate by pruning and fine-tuning multiple times. For example, to achieve the desired prune rate of $98\%$, we could run pruning and fine-tuning for many iterations, reaching cumulative prune rates of $30\%$, $50\%$, $66\%$, $76\%$, $\cdots$, $98\%$ across the iterations, as sketched below.
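A rough sketch of such a schedule, building on the sparsity arithmetic above (the cumulative targets are illustrative, and fine_tune is a placeholder for an ordinary CIFAR-10 training loop rather than a function from the actual script):

```python
import torch.nn as nn
import torch.nn.utils.prune as prune
import torchvision


def fine_tune(model):
    # Placeholder: an ordinary training loop on CIFAR-10 would go here.
    pass


model = torchvision.models.resnet18(num_classes=10)
parameters_to_prune = [
    (module, "weight") for module in model.modules()
    if isinstance(module, (nn.Conv2d, nn.Linear))
]

cumulative_targets = [0.30, 0.50, 0.66, 0.76, 0.98]  # illustrative schedule
previous = 0.0
for target in cumulative_targets:
    # Relative amount needed to move from the previous cumulative prune rate
    # to the new target, since `amount` is relative to unmasked parameters.
    relative_amount = (target - previous) / (1.0 - previous)
    prune.global_unstructured(
        parameters_to_prune,
        pruning_method=prune.L1Unstructured,
        amount=relative_amount,
    )
    fine_tune(model)
    previous = target
```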
Usually multi-time iterative pruning + fine-tuning is better than one-time iterative pruning + fine-tuning. However, in our ResNet18-CIFAR10 example, there is almost no difference. Using grouped pruning, both one-time iterative pruning + fine-tuning and multi-time iterative pruning + fine-tuning could maintain the prediction accuracy at around $86.8\%$ with a prune rate of $98\%$.
Final Remarks
It seems that PyTorch does not yet support converting pruned sparse neural networks to use sparse tensors for inference. Once it is supported, we could really see how much faster it is to run a sparse neural network after pruning compared to its original dense neural network before pruning.
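For reference, the pruning re-parametrization can be made permanent with prune.remove, and the zeroed weight tensors can then be inspected in sparse COO format, although this only changes how the tensors are stored and does not by itself accelerate inference (a minimal sketch, continuing the grouped pruning example above):

```python
import torch.nn as nn
import torch.nn.utils.prune as prune
import torchvision

model = torchvision.models.resnet18(num_classes=10)
parameters_to_prune = [
    (module, "weight") for module in model.modules()
    if isinstance(module, (nn.Conv2d, nn.Linear))
]
prune.global_unstructured(
    parameters_to_prune, pruning_method=prune.L1Unstructured, amount=0.98
)

# Make the pruning permanent: fold weight_orig * weight_mask back into weight.
for module, name in parameters_to_prune:
    prune.remove(module, name)

# The weights are still stored as dense tensors full of zeros; converting one
# to sparse COO format changes its storage, not the model's execution path.
sparse_fc_weight = model.fc.weight.data.to_sparse()
print(f"Nonzero fc weights: {sparse_fc_weight._nnz()} / {model.fc.weight.nelement()}")
```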
References
PyTorch Pruning