Knowledge Distillation in Wide Neural Networks: Risk Bound, Data Efficiency and Imperfect Teacher
Main authors:
Format: Article
Language: English
Subjects:
Online access: Order full text
Published in: Advances in Neural Information Processing Systems 2020

Abstract: Knowledge distillation is a strategy for training a student network under the guidance of the soft output of a teacher network. It has been a successful method of model compression and knowledge transfer. However, knowledge distillation currently lacks a convincing theoretical understanding. On the other hand, recent findings on the neural tangent kernel enable us to approximate a wide neural network with a linear model of the network's random features. In this paper, we theoretically analyze the knowledge distillation of a wide neural network. First, we provide a transfer risk bound for the linearized model of the network. Then we propose a metric of the task's training difficulty, called data inefficiency. Based on this metric, we show that for a perfect teacher, a high ratio of the teacher's soft labels can be beneficial. Finally, for the case of an imperfect teacher, we find that hard labels can correct the teacher's wrong predictions, which explains the practice of mixing hard and soft labels.
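The abstract refers to training a student on a mix of hard labels and the teacher's temperature-softened outputs. The sketch below is a minimal illustration of that standard mixed objective, not the paper's construction; the names `distillation_loss`, `alpha` (the soft-label ratio) and the temperature value are illustrative assumptions.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax over the last axis."""
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, hard_labels,
                      alpha=0.5, temperature=4.0):
    """Convex mix of cross-entropy on hard labels and cross-entropy against the
    teacher's tempered soft labels; `alpha` is the soft-label ratio.
    (The optional T^2 rescaling of the soft term is omitted here.)"""
    n = student_logits.shape[0]
    # Hard-label cross-entropy.
    p_student = softmax(student_logits)
    ce_hard = -np.log(p_student[np.arange(n), hard_labels] + 1e-12).mean()
    # Soft-label cross-entropy against the teacher's tempered distribution.
    p_teacher = softmax(teacher_logits, temperature)
    log_p_student_T = np.log(softmax(student_logits, temperature) + 1e-12)
    ce_soft = -(p_teacher * log_p_student_T).sum(axis=-1).mean()
    return (1.0 - alpha) * ce_hard + alpha * ce_soft

# Example usage with random logits for 8 samples and 10 classes.
rng = np.random.default_rng(0)
loss = distillation_loss(rng.normal(size=(8, 10)), rng.normal(size=(8, 10)),
                         rng.integers(0, 10, size=8))
```

Setting `alpha=1.0` recovers training purely on the teacher's soft labels, while `alpha=0.0` recovers ordinary supervised training on hard labels, which is the trade-off the paper analyzes.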
DOI: 10.48550/arxiv.2010.10090