Tailoring Instructions to Student's Learning Levels Boosts Knowledge Distillation
Format: Article
Language: English
Abstract: It has been commonly observed that a teacher model with superior performance does not necessarily result in a stronger student, highlighting a discrepancy between current teacher training practices and effective knowledge transfer. To better guide the teacher training process, we introduce the concept of distillation influence, which measures the impact of distilling each training sample on the student's generalization ability. In this paper, we propose Learning Good Teacher Matters (LGTM), an efficient training technique for incorporating distillation influence into the teacher's learning process. By prioritizing samples that are likely to enhance the student's generalization ability, LGTM outperforms 10 common knowledge distillation baselines on 6 text classification tasks in the GLUE benchmark.
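The core mechanism described in the abstract, weighting each sample's distillation loss by its estimated influence on student generalization, can be illustrated with a minimal sketch. This is not the paper's implementation: `weighted_kd_loss` and `estimate_influence` are hypothetical names introduced here, the setup assumes generic PyTorch classification logits, and the uniform placeholder scores stand in for the paper's actual distillation-influence estimator.

```python
# Minimal sketch of sample-weighted knowledge distillation (assumptions noted above).
import torch
import torch.nn.functional as F


def weighted_kd_loss(teacher_logits, student_logits, influence, T=2.0):
    """Per-sample KL distillation loss, weighted by influence scores.

    influence: shape (batch,); higher means the sample is estimated to be
    more helpful for the student's generalization. Scores are normalized
    to sum to 1 over the batch.
    """
    # Soft targets from the teacher, softened by temperature T.
    p_teacher = F.softmax(teacher_logits / T, dim=-1)
    log_p_student = F.log_softmax(student_logits / T, dim=-1)
    # Pointwise KL, summed over classes -> one KL value per sample.
    kl = F.kl_div(log_p_student, p_teacher, reduction="none").sum(dim=-1)
    weights = influence / influence.sum().clamp_min(1e-8)
    # Standard T^2 scaling keeps gradient magnitude comparable across temperatures.
    return (weights * kl).sum() * (T * T)


def estimate_influence(batch_size):
    """Hypothetical placeholder: uniform scores. The paper's estimator of
    distillation influence (per-sample impact on student generalization)
    would replace this."""
    return torch.ones(batch_size)


# Usage example with random logits for a batch of 8 samples, 3 classes.
teacher_logits = torch.randn(8, 3)
student_logits = torch.randn(8, 3)
loss = weighted_kd_loss(teacher_logits, student_logits, estimate_influence(8))
```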
DOI: 10.48550/arxiv.2305.09651