Analysis of Model Compression Using Knowledge Distillation
| | |
|---|---|
| Published in: | IEEE Access, 2022, Vol. 10, pp. 85095-85105 |
| Main Authors: | , , , |
| Format: | Article |
| Language: | English |
| Online Access: | Full text |
Abstract: In the development of deep learning, many convolutional neural network (CNN) models have been designed to solve various tasks. However, these CNN models are complex and cumbersome, since achieving state-of-the-art performance typically requires a large model. Thus, model compression techniques have been proposed to cope with complex CNN models. Meanwhile, selecting a compressed model that suits the user's requirements contributes significantly to the deployment process. This paper analyses two model compression techniques, namely layerwise and widthwise compression, both implemented on the MobileNetV1 model. Knowledge distillation is then applied to compensate for the accuracy loss of the compressed models. We analyse the compressed models from various perspectives and develop several suggestions on the trade-off between performance and compression rate. In addition, we show that the features learned by the compressed models using knowledge distillation have better representations than those of the vanilla model. Our experiments show that widthwise compression of MobileNetV1 achieves a compression rate of 42.27%, while layerwise compression achieves 32.42%. Furthermore, the improvement of the compressed models through knowledge distillation is notable for widthwise compression, with an accuracy gain of more than 4.71%.
| | |
|---|---|
| ISSN: | 2169-3536 |
| DOI: | 10.1109/ACCESS.2022.3197608 |
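
The paper compares widthwise compression (shrinking the number of channels in each layer) with layerwise compression (removing whole layers), both applied to MobileNetV1. The abstract gives no implementation details, so the following is only an assumed illustration: a MobileNetV1-style network where a hypothetical `width_mult` parameter scales every channel count (widthwise) and a hypothetical `drop_blocks` set removes selected depthwise-separable blocks (layerwise). The block configuration follows the standard MobileNetV1 layout; the exact settings that yield the reported 42.27% and 32.42% compression rates are not stated in the abstract and are not reproduced here.

```python
import torch
import torch.nn as nn

def conv_dw(in_ch, out_ch, stride):
    """Depthwise-separable convolution block, as in MobileNetV1."""
    return nn.Sequential(
        nn.Conv2d(in_ch, in_ch, 3, stride, 1, groups=in_ch, bias=False),
        nn.BatchNorm2d(in_ch),
        nn.ReLU(inplace=True),
        nn.Conv2d(in_ch, out_ch, 1, 1, 0, bias=False),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )

class MobileNetV1(nn.Module):
    # (out_channels, stride) for each depthwise-separable block,
    # following the standard MobileNetV1 layout.
    CFG = [(64, 1), (128, 2), (128, 1), (256, 2), (256, 1),
           (512, 2), (512, 1), (512, 1), (512, 1), (512, 1), (512, 1),
           (1024, 2), (1024, 1)]

    def __init__(self, num_classes=10, width_mult=1.0, drop_blocks=()):
        super().__init__()
        # Widthwise compression: shrink every channel count by `width_mult`.
        ch = lambda c: max(8, int(c * width_mult))
        layers = [nn.Sequential(
            nn.Conv2d(3, ch(32), 3, 2, 1, bias=False),
            nn.BatchNorm2d(ch(32)), nn.ReLU(inplace=True))]
        in_ch = ch(32)
        for i, (out_ch, stride) in enumerate(self.CFG):
            # Layerwise compression: skip the blocks listed in `drop_blocks`
            # (strided blocks are kept so the output resolution is unchanged).
            if i in drop_blocks and stride == 1:
                continue
            layers.append(conv_dw(in_ch, ch(out_ch), stride))
            in_ch = ch(out_ch)
        self.features = nn.Sequential(*layers)
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Linear(in_ch, num_classes)

    def forward(self, x):
        x = self.pool(self.features(x)).flatten(1)
        return self.fc(x)

# Illustrative settings only; the paper's actual multipliers and
# dropped layers are not given in the abstract.
teacher   = MobileNetV1(width_mult=1.0)               # vanilla model
student_w = MobileNetV1(width_mult=0.75)              # widthwise-compressed
student_l = MobileNetV1(drop_blocks={2, 4, 7, 8, 9})  # layerwise-compressed
```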
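
Knowledge distillation is then used to recover the accuracy lost by compression, with the uncompressed model serving as teacher and a compressed model as student. The abstract does not state which distillation formulation is used; the sketch below assumes the classic soft-target loss of Hinton et al., in which the student matches the teacher's temperature-softened output distribution in addition to the hard labels. The temperature `T` and weighting `alpha` are illustrative defaults, not values from the paper.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, targets,
                      T=4.0, alpha=0.9):
    """Hinton-style distillation: soft targets from the teacher plus
    ordinary cross-entropy on the hard labels."""
    # KL divergence between softened teacher and student distributions;
    # the T**2 factor keeps gradient magnitudes comparable across temperatures.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    hard = F.cross_entropy(student_logits, targets)
    return alpha * soft + (1.0 - alpha) * hard

# Sketch of a training step: the teacher is frozen and only provides
# soft targets; gradients flow through the student alone.
def train_step(student, teacher, images, labels, optimizer):
    teacher.eval()
    with torch.no_grad():
        t_logits = teacher(images)
    s_logits = student(images)
    loss = distillation_loss(s_logits, t_logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```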