Shared knowledge distillation for robust multi‐scale super‐resolution networks

Bibliographic details
Published in: Electronics Letters, 2022-06, Vol. 58 (13), pp. 502-504
Authors: Na, Youngju; Kim, Hee Hyeon; Yoo, Seok Bong
Format: Article
Language: English
Online access: Full text
Abstract: Although developments in deep learning have resulted in considerable performance enhancements in super-resolution (SR), they have also caused substantial increases in computational costs and memory requirements. Thus, various compression techniques, such as quantisation, pruning, and knowledge distillation (KD), have been introduced for single SR models. However, multiple SR models are required in the real world to robustly reconstruct low-resolution (LR) images of varying input sizes. Because of limited resources, storing multiple models is impossible for mobile devices and embedded systems. In this letter, we propose a multi-scale SR network that uses a weight-sharing method to effectively eliminate redundant parameters. To train our multi-scale SR network and mitigate the SR performance degradation caused by knowledge confusion, we divide backpropagation into two stages. Furthermore, we propose a compression framework that distils the shared knowledge within a multi-scale SR network. We achieve a compression rate of 94% compared with storing multiple single-scale SR models, while compromising only 0.3 dB on average in terms of peak signal-to-noise ratio (PSNR).
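
The abstract describes three components: a body of weights shared across upscaling factors, a two-stage backpropagation that separates updates of the shared and scale-specific parameters, and distillation of the shared knowledge. The PyTorch sketch below is only an illustration of how these pieces could fit together; it is not the authors' implementation, and the module layout, the residual body, the L1 losses, and the feature-level distillation term are all hypothetical choices.

```python
# Minimal sketch (not the authors' code) of a weight-shared multi-scale SR
# network with two-stage backpropagation and a simple feature-distillation
# term. All names and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SharedMultiScaleSR(nn.Module):
    def __init__(self, channels=64, num_blocks=8, scales=(2, 3, 4)):
        super().__init__()
        self.head = nn.Conv2d(3, channels, 3, padding=1)
        # Body shared by all scales: this is where the parameter savings
        # over separate single-scale models would come from.
        self.body = nn.Sequential(*[
            nn.Sequential(nn.Conv2d(channels, channels, 3, padding=1),
                          nn.ReLU(inplace=True),
                          nn.Conv2d(channels, channels, 3, padding=1))
            for _ in range(num_blocks)])
        # Only the lightweight upsampling tails are scale-specific.
        self.tails = nn.ModuleDict({
            str(s): nn.Sequential(
                nn.Conv2d(channels, 3 * s * s, 3, padding=1),
                nn.PixelShuffle(s))
            for s in scales})

    def forward(self, x, scale):
        feat = self.head(x)
        feat = feat + self.body(feat)          # residual over the shared body
        return self.tails[str(scale)](feat), feat


def two_stage_step(model, teacher, lr_img, hr_img, scale,
                   opt_shared, opt_tail, distill_weight=0.1):
    """One training step split into two backward passes so updates of the
    shared body and of the scale-specific tail do not interfere
    (the 'knowledge confusion' mentioned in the abstract)."""
    # Stage 1: update the scale-specific tail only.
    sr, _ = model(lr_img, scale)
    loss_tail = F.l1_loss(sr, hr_img)
    opt_tail.zero_grad()
    loss_tail.backward()
    opt_tail.step()

    # Stage 2: update the shared body with reconstruction + distillation.
    sr, feat = model(lr_img, scale)
    with torch.no_grad():
        _, teacher_feat = teacher(lr_img, scale)   # larger teacher SR model
    loss_shared = (F.l1_loss(sr, hr_img)
                   + distill_weight * F.l1_loss(feat, teacher_feat))
    opt_shared.zero_grad()
    loss_shared.backward()
    opt_shared.step()
    return loss_tail.item(), loss_shared.item()


if __name__ == "__main__":
    student = SharedMultiScaleSR(channels=64, num_blocks=8)
    # Stand-in teacher with the same width so feature maps align; in practice
    # this would be a larger, pretrained SR model.
    teacher = SharedMultiScaleSR(channels=64, num_blocks=16).eval()
    opt_tail = torch.optim.Adam(student.tails.parameters(), lr=1e-4)
    opt_shared = torch.optim.Adam(
        list(student.head.parameters()) + list(student.body.parameters()),
        lr=1e-4)
    lr_img, hr_img = torch.rand(1, 3, 32, 32), torch.rand(1, 3, 64, 64)  # x2 pair
    print(two_stage_step(student, teacher, lr_img, hr_img, scale=2,
                         opt_shared=opt_shared, opt_tail=opt_tail))
```

In this reading, only the small tails differ between scales, which is where the claimed storage reduction would come from; the two optimisers make the two backward stages explicit, and the feature-level L1 term is one plausible way to distil the shared knowledge referred to in the abstract.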
ISSN: 0013-5194, 1350-911X
DOI: 10.1049/ell2.12526