Teacher–student complementary sample contrastive distillation
Knowledge distillation (KD) is a widely adopted model compression technique for improving the performance of compact student models by utilizing the “dark knowledge” of a large teacher model. However, previous studies have not adequately investigated the effectiveness of supervision from the teache...
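As background for the abstract, the following is a minimal sketch of the classic soft-label distillation objective (Hinton-style KD, combining temperature-softened teacher/student KL divergence with hard-label cross-entropy). It is not the complementary-sample contrastive method proposed in the article; the function name, temperature `T`, and weight `alpha` are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Standard KD objective: soft-label distillation plus hard-label CE.

    The temperature-softened teacher distribution carries the "dark knowledge":
    relative probabilities over non-target classes.
    """
    soft_teacher = F.softmax(teacher_logits / T, dim=-1)
    log_soft_student = F.log_softmax(student_logits / T, dim=-1)
    # Scale by T^2 so gradient magnitudes stay comparable across temperatures.
    distill = F.kl_div(log_soft_student, soft_teacher, reduction="batchmean") * T * T
    ce = F.cross_entropy(student_logits, labels)
    return alpha * distill + (1.0 - alpha) * ce

if __name__ == "__main__":
    # Toy batch of 8 samples over 10 classes; a real teacher would be a frozen, larger model.
    logits_s = torch.randn(8, 10)
    logits_t = torch.randn(8, 10)
    y = torch.randint(0, 10, (8,))
    print(kd_loss(logits_s, logits_t, y).item())
```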
Published in: Neural Networks, 2024-02, Vol. 170, pp. 176-189
Main authors:
Format: Article
Language: English
Online access: Full text