Teacher–student complementary sample contrastive distillation

Knowledge distillation (KD) is a widely adopted model compression technique for improving the performance of compact student models by utilizing the “dark knowledge” of a large teacher model. However, previous studies have not adequately investigated the effectiveness of supervision from the teacher…
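
For orientation, the sketch below shows the generic soft-target KD objective the abstract alludes to (Hinton-style distillation), assuming a PyTorch setting. It is only the standard baseline loss, not the complementary sample contrastive distillation method proposed in the article; the temperature T and the weighting factor alpha are illustrative hyperparameters.

    # Minimal sketch of the standard soft-target KD loss (baseline, not the
    # article's proposed method). T and alpha are illustrative assumptions.
    import torch
    import torch.nn.functional as F

    def kd_loss(student_logits, teacher_logits, targets, T=4.0, alpha=0.5):
        """Blend hard-label cross-entropy with softened teacher supervision."""
        # Soften both distributions with temperature T; the softened teacher
        # outputs carry the "dark knowledge" (inter-class similarity structure).
        soft_teacher = F.softmax(teacher_logits / T, dim=-1)
        log_soft_student = F.log_softmax(student_logits / T, dim=-1)
        # Scale the KL term by T^2 to keep gradient magnitudes comparable.
        kl = F.kl_div(log_soft_student, soft_teacher, reduction="batchmean") * (T * T)
        ce = F.cross_entropy(student_logits, targets)
        return alpha * kl + (1.0 - alpha) * ce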

Bibliographic Details
Published in: Neural Networks 2024-02, Vol. 170, p. 176-189
Authors: Bao, Zhiqiang, Huang, Zhenhua, Gou, Jianping, Du, Lan, Liu, Kang, Zhou, Jingtao, Chen, Yunwen
Format: Article
Language: English
Online access: Full text