Dynamic Rectification Knowledge Distillation
Saved in:
Main authors:
Format: Article
Language: English
Subjects:
Online access: Order full text
Summary: Knowledge distillation compresses and transfers the "dark knowledge" of a large, well-trained neural network (the teacher model) to a smaller, less capable neural network (the student model) to improve inference efficiency. The approach has gained popularity because such cumbersome models are prohibitively complex to deploy on edge computing devices. However, the teacher models used to train smaller students are themselves cumbersome and expensive to train. To eliminate the need for a cumbersome teacher model entirely, we propose a simple yet effective knowledge distillation framework termed Dynamic Rectification Knowledge Distillation (DR-KD). Our method turns the student into its own teacher: if this self-teacher makes a wrong prediction while distilling information, the error is rectified before the knowledge is distilled. Specifically, the teacher targets are dynamically adjusted using the ground truth while distilling the knowledge gained from conventional training. DR-KD performs remarkably well without a sophisticated, cumbersome teacher model and, using only this low-cost dynamically rectified teacher, achieves performance comparable to existing state-of-the-art teacher-free knowledge distillation frameworks. The approach is general and can be applied to training any deep neural network for classification or object recognition. DR-KD improves test accuracy on Tiny ImageNet by 2.65% over prominent baseline models, outperforming other knowledge distillation approaches while requiring no additional training cost.
DOI: 10.48550/arxiv.2201.11319
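The record does not include an implementation, so the snippet below is a minimal sketch of the rectification mechanism described in the summary above, assuming a PyTorch-style training step: the student's own logits act as the self-teacher, and whenever the predicted class disagrees with the ground-truth label, the largest logit is swapped with the ground-truth logit before the softened targets are formed. The names `rectify_logits` and `dr_kd_loss`, the temperature `T`, the weighting `alpha`, and the exact swap rule are illustrative assumptions, not the authors' reference implementation.

```python
# Sketch of Dynamic Rectification KD (DR-KD) as described in the abstract;
# hyperparameters (T, alpha) and the swap-based rectification rule are assumptions.
import torch
import torch.nn.functional as F


def rectify_logits(logits: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    """Where the self-teacher's prediction is wrong, swap the largest logit with
    the ground-truth logit so the correct class dominates the soft targets."""
    rectified = logits.detach().clone()          # teacher targets carry no gradient
    pred = rectified.argmax(dim=1)
    rows = (pred != labels).nonzero(as_tuple=True)[0]
    pred_wrong, true_wrong = pred[rows], labels[rows]
    max_vals = rectified[rows, pred_wrong]
    true_vals = rectified[rows, true_wrong]
    rectified[rows, true_wrong] = max_vals       # ground-truth class gets the max
    rectified[rows, pred_wrong] = true_vals
    return rectified


def dr_kd_loss(student_logits: torch.Tensor, labels: torch.Tensor,
               T: float = 4.0, alpha: float = 0.9) -> torch.Tensor:
    """Cross-entropy plus a distillation term whose soft targets come from the
    rectified student itself -- no separate teacher network is required."""
    teacher_logits = rectify_logits(student_logits, labels)
    soft_targets = F.softmax(teacher_logits / T, dim=1)
    log_probs = F.log_softmax(student_logits / T, dim=1)
    kd = F.kl_div(log_probs, soft_targets, reduction="batchmean") * (T * T)
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1.0 - alpha) * ce
```

In a training loop, `dr_kd_loss(model(x), y)` would stand in for the usual cross-entropy term; under these assumptions no second network or extra forward pass is needed, which is consistent with the summary's claim of no additional training cost.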