Knowledge Distillation with Deep Supervision
| Format | Article |
|---|---|
| Language | English |
| Online access | Order full text |
Abstract: Knowledge distillation aims to enhance the performance of a lightweight student model by exploiting the knowledge of a pre-trained, cumbersome teacher model. In traditional knowledge distillation, however, teacher predictions supervise only the last layer of the student model, so the shallow student layers may lack accurate training guidance during layer-by-layer backpropagation, which hinders effective knowledge transfer. To address this issue, we propose Deeply-Supervised Knowledge Distillation (DSKD), which fully utilizes the class predictions and feature maps of the teacher model to supervise the training of shallow student layers. A loss-based weight allocation strategy is developed in DSKD to adaptively balance the learning process of each shallow layer and further improve student performance. Extensive experiments on CIFAR-100 and TinyImageNet with various teacher-student models show significant performance gains, confirming the effectiveness of the proposed method. Code is available at: https://github.com/luoshiya/DSKD
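For orientation, here is a minimal PyTorch sketch of the deep-supervision idea described in the abstract: auxiliary classifier heads attached to shallow student layers are each distilled from the teacher's class predictions, and the per-layer losses are combined with loss-derived weights. The `shallow_logits` list, the `alpha` parameter, and the loss-proportional weighting are illustrative assumptions, not the paper's formulation; feature-map supervision is omitted for brevity, and the authors' actual implementation is in the linked repository.

```python
import torch
import torch.nn.functional as F


def kd_loss(student_logits, teacher_logits, T=4.0):
    """Temperature-scaled KL distillation loss on class predictions."""
    log_p_s = F.log_softmax(student_logits / T, dim=1)
    p_t = F.softmax(teacher_logits / T, dim=1)
    return F.kl_div(log_p_s, p_t, reduction="batchmean") * (T * T)


def dskd_loss(shallow_logits, teacher_logits, labels, alpha=1.0):
    """Deep supervision over auxiliary heads at shallow student layers.

    Each head is trained with ground-truth labels plus the teacher's soft
    predictions; per-layer losses are combined with weights proportional to
    the (detached) losses -- an illustrative stand-in for the paper's
    loss-based weight allocation strategy, not its exact formula.
    """
    per_layer = [
        F.cross_entropy(logits, labels) + alpha * kd_loss(logits, teacher_logits)
        for logits in shallow_logits
    ]
    losses = torch.stack(per_layer)
    weights = losses.detach() / losses.detach().sum()  # adaptive per-layer weights
    return (weights * losses).sum()


# Toy usage: random tensors stand in for real auxiliary-head outputs.
B, C = 8, 100
teacher_logits = torch.randn(B, C)
shallow_logits = [torch.randn(B, C, requires_grad=True) for _ in range(3)]
labels = torch.randint(0, C, (B,))
dskd_loss(shallow_logits, teacher_logits, labels).backward()
```

In practice the auxiliary heads would be small classifiers (e.g. pooling plus a linear layer) added to intermediate student blocks and discarded after training, so the deployed student keeps its original size.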
DOI: 10.48550/arxiv.2202.07846