CARTL: Cooperative Adversarially-Robust Transfer Learning
Format: Article
Language: English
Abstract: Transfer learning eases the burden of training a well-performing model from scratch, especially when training data is scarce and computation power is limited. In deep learning, a typical strategy for transfer learning is to freeze the early layers of a pre-trained model and fine-tune the rest of its layers on the target domain. Previous work focuses on the accuracy of the transferred model but neglects the transfer of adversarial robustness. In this work, we first show that transfer learning improves accuracy on the target domain but degrades the inherited robustness of the target model. To address this problem, we propose a novel cooperative adversarially-robust transfer learning (CARTL) method, which pre-trains the model via feature distance minimization and fine-tunes the pre-trained model with non-expansive fine-tuning for target-domain tasks. Empirical results show that CARTL improves the inherited robustness by up to about 28% compared with the baseline at the same level of accuracy. Furthermore, we study the relationship between batch normalization (BN) layers and robustness in the context of transfer learning, and we reveal that freezing BN layers can further boost robustness transfer.
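
The freeze-and-fine-tune strategy and the BN-freezing option mentioned in the abstract can be sketched as follows. This is a minimal PyTorch illustration, not the paper's CARTL implementation: the ResNet-18 backbone, the choice of which stages count as "early layers", and the 10-class target head are assumptions made for the example.

```python
import torch
import torch.nn as nn
from torchvision import models

# Load a pre-trained backbone (illustrative choice; CARTL uses its own
# robustly pre-trained source model).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the early layers: here the stem and the first two residual stages.
for module in (model.conv1, model.bn1, model.layer1, model.layer2):
    for param in module.parameters():
        param.requires_grad = False

# Optionally freeze all BN layers: fix their affine parameters and keep the
# source-domain running statistics (related to the abstract's observation
# that frozen BN layers help robustness transfer). Note that model.train()
# resets BN modules to training mode, so re-apply eval() to them inside the
# training loop if you use this option.
for m in model.modules():
    if isinstance(m, nn.BatchNorm2d):
        m.eval()
        for param in m.parameters():
            param.requires_grad = False

# Replace the classifier head for a hypothetical 10-class target task.
model.fc = nn.Linear(model.fc.in_features, 10)

# Fine-tune only the remaining trainable parameters on the target domain.
optimizer = torch.optim.SGD(
    (p for p in model.parameters() if p.requires_grad),
    lr=1e-3,
    momentum=0.9,
)
```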
DOI: 10.48550/arxiv.2106.06667