Towards Understanding Model Quantization for Reliable Deep Neural Network Deployment

Authors

Hu Q., Guo Y., Cordy M., Xie X., Ma W., Papadakis M., Traon Y.L.

Reference

Proceedings - 2023 IEEE/ACM 2nd International Conference on AI Engineering - Software Engineering for AI, CAIN 2023, pp. 56-67, 2023

Description

Deep Neural Networks (DNNs) have attracted considerable attention over the past decades due to their astounding performance in applications such as natural language modeling, self-driving assistance, and source code understanding. As the field has advanced, increasingly complex DNN architectures with huge numbers of pre-trained parameters have been proposed. A common way to run such models on end-user devices (e.g., mobile phones) is to compress them before deployment. However, recent research has demonstrated that model compression, e.g., model quantization, yields accuracy degradation as well as output disagreements when models are tested on unseen data. Since unseen data often include distribution shifts and frequently appear in the wild, the quality and reliability of quantized models are not ensured. In this paper, we conduct a comprehensive study to characterize and help users understand the behaviors of quantized models. Our study covers four datasets spanning images to text, eight DNN architectures including both feed-forward and recurrent neural networks, and 42 shifted sets with both synthetic and natural distribution shifts. The results reveal that: 1) data with distribution shifts lead to more disagreements than data without them; 2) quantization-aware training produces more stable models than standard, adversarial, and Mixup training; 3) disagreements often have closer top-1 and top-2 output probabilities, and Margin is a better indicator than other uncertainty metrics for distinguishing disagreements; and 4) retraining the model on disagreements has limited effectiveness in removing them. We release our code and models as a new benchmark for further study of model quantization.
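The notions of "disagreement" and the Margin metric from the abstract can be made concrete with a minimal sketch. Assuming softmax outputs from an original model and its quantized counterpart (all probability vectors below are made-up illustrative values, not the paper's data), a disagreement is an input where the two models' argmax predictions differ, and Margin is the gap between the top-1 and top-2 probabilities (a smaller gap suggests higher uncertainty):

```python
def margin(probs):
    """Margin metric: top-1 probability minus top-2 probability.

    A lower margin means the model is less certain about its prediction.
    """
    top = sorted(probs, reverse=True)
    return top[0] - top[1]

def disagreements(original_probs, quantized_probs):
    """Return indices of inputs where the two models predict different classes."""
    argmax = lambda p: max(range(len(p)), key=p.__getitem__)
    return [i for i, (o, q) in enumerate(zip(original_probs, quantized_probs))
            if argmax(o) != argmax(q)]

# Toy softmax outputs for three inputs (assumed data, for illustration only).
orig = [[0.70, 0.20, 0.10],   # confident prediction, both models agree
        [0.40, 0.38, 0.22],   # near-tie: quantization flips the prediction
        [0.55, 0.30, 0.15]]
quant = [[0.68, 0.22, 0.10],
         [0.37, 0.41, 0.22],
         [0.52, 0.33, 0.15]]

flipped = disagreements(orig, quant)   # inputs whose predicted class changed
margins = [margin(p) for p in orig]    # uncertainty of the original model
print(flipped)   # → [1]
```

Consistent with the paper's third finding, the flipped input (index 1) is exactly the one with the smallest margin, which is why Margin can rank inputs by their risk of disagreement after quantization.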

Link

doi:10.1109/CAIN58948.2023.00015
