Self-Supervised Knowledge Transfer via Loosely Supervised Auxiliary Tasks
Main authors:
Format: Article
Language: English
Subjects:
Online access: Order full text
Abstract: Knowledge transfer using convolutional neural networks (CNNs) can help
efficiently train a CNN with fewer parameters or maximize the generalization
performance under limited supervision. To enable a more efficient transfer of
pretrained knowledge under relaxed conditions, we propose a simple yet powerful
knowledge transfer methodology without any restrictions regarding the network
structure or dataset used, namely self-supervised knowledge transfer (SSKT),
via loosely supervised auxiliary tasks. For this, we devise a training
methodology that transfers previously learned knowledge to the current training
process as an auxiliary task for the target task through self-supervision using
a soft label. The SSKT is independent of the network structure and dataset, and
is trained differently from existing knowledge transfer methods; hence, it has
an advantage in that the prior knowledge acquired from various tasks can be
naturally transferred during the training process to the target task.
Furthermore, it can improve the generalization performance on most datasets
through the proposed knowledge transfer between different problem domains from
multiple source networks. SSKT outperforms the other transfer learning methods
(KD, DML, and MAXL) through experiments under various knowledge transfer
settings. The source code will be made available to the public.
DOI: 10.48550/arxiv.2110.12696
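
The abstract gives no implementation details, but its core idea (training the target network on its own labels while an auxiliary head learns to match soft labels produced by a frozen, pretrained source network) can be sketched roughly as follows. This is a minimal illustration, not the authors' released code: the module names, the assumption that the target model returns features plus logits, the temperature, and the loss weighting are all illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def sskt_step(target_model, aux_head, source_model, x, y,
              optimizer, temperature=4.0, aux_weight=1.0):
    """One training step in the spirit of SSKT (illustrative sketch only)."""
    # The pretrained source network stays frozen and only provides soft labels.
    source_model.eval()
    with torch.no_grad():
        soft_labels = F.softmax(source_model(x) / temperature, dim=1)

    # Assumed interface: the target model returns shared features and task logits.
    features, target_logits = target_model(x)
    target_loss = F.cross_entropy(target_logits, y)

    # Auxiliary task: predict the source network's soft labels. This is the
    # "self-supervised" part, since no extra human annotation is needed.
    aux_logits = aux_head(features)
    aux_loss = F.kl_div(
        F.log_softmax(aux_logits / temperature, dim=1),
        soft_labels,
        reduction="batchmean",
    ) * temperature ** 2

    # Joint objective: target task plus weighted auxiliary transfer loss.
    loss = target_loss + aux_weight * aux_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```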