On Fast Adversarial Robustness Adaptation in Model-Agnostic Meta-Learning
Saved in:
Main Authors:
Format: Article
Language: English
Subjects:
Online Access: Order full text
Summary: Model-agnostic meta-learning (MAML) has emerged as one of the most successful meta-learning techniques in few-shot learning. It enables us to learn a meta-initialization of model parameters (which we call the meta-model) that rapidly adapts to new tasks using a small amount of labeled training data. Despite the generalization power of the meta-model, it remains unclear how adversarial robustness can be maintained by MAML in few-shot learning. In addition to generalization, robustness is also desired for a meta-model to defend against adversarial examples (attacks). Toward promoting adversarial robustness in MAML, we first study WHEN a robustness-promoting regularization should be incorporated, given that MAML adopts a bi-level (fine-tuning vs. meta-update) learning procedure. We show that robustifying the meta-update stage is sufficient for robustness to carry over to the task-specific fine-tuning stage, even if the latter uses a standard training protocol. We further justify the acquired robustness adaptation by examining the interpretability of neurons' activation maps. Furthermore, we investigate HOW robustness regularization can be efficiently designed in MAML. We propose a general yet easily optimized robustness-regularized meta-learning framework that allows the use of unlabeled data augmentation, fast adversarial attack generation, and computationally light fine-tuning. In particular, we show for the first time that an auxiliary contrastive learning task can enhance the adversarial robustness of MAML. Finally, extensive experiments demonstrate the effectiveness of our proposed methods in robust few-shot learning.
DOI: 10.48550/arxiv.2102.10454
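
The abstract above describes robustifying only the meta-update stage of MAML's bi-level procedure. The following minimal PyTorch sketch, written for this record rather than taken from the paper, illustrates that idea: the inner loop fine-tunes with standard cross-entropy, while the outer meta-update adds an adversarial loss on query data. The one-step FGSM attack, the trade-off weight `lam`, `eps = 8/255`, a single inner gradient step, and the use of `torch.func.functional_call` are all illustrative assumptions; the paper's full framework additionally covers unlabeled data augmentation and an auxiliary contrastive learning task, which are omitted here.

```python
import torch
import torch.nn.functional as F
from torch.func import functional_call

def forward(model, params, x):
    # Functional forward pass with an explicit parameter dict
    # (assumes the model has no buffers, e.g. no BatchNorm running stats).
    return functional_call(model, params, (x,))

def inner_adapt(model, params, x_sup, y_sup, lr=0.01, steps=1):
    # Task-specific fine-tuning on the support set: standard (non-robust)
    # training, reflecting the "WHEN" finding that the inner loop
    # need not be robustified.
    for _ in range(steps):
        loss = F.cross_entropy(forward(model, params, x_sup), y_sup)
        grads = torch.autograd.grad(loss, tuple(params.values()),
                                    create_graph=True)
        params = {name: p - lr * g
                  for (name, p), g in zip(params.items(), grads)}
    return params

def fgsm(model, params, x, y, eps=8 / 255):
    # Fast one-step adversarial example generation (FGSM), standing in for
    # the "fast adversarial attack generation" the abstract mentions.
    x_adv = x.detach().clone().requires_grad_(True)
    loss = F.cross_entropy(forward(model, params, x_adv), y)
    (grad,) = torch.autograd.grad(loss, x_adv)
    return (x + eps * grad.sign()).clamp(0.0, 1.0).detach()

def meta_step(model, meta_opt, tasks, lam=1.0):
    # Robustness-regularized meta-update: clean query loss plus a weighted
    # adversarial loss on query data, averaged over the sampled tasks.
    meta_opt.zero_grad()
    meta_params = dict(model.named_parameters())
    total = 0.0
    for x_sup, y_sup, x_qry, y_qry in tasks:
        adapted = inner_adapt(model, meta_params, x_sup, y_sup)
        clean_loss = F.cross_entropy(forward(model, adapted, x_qry), y_qry)
        x_adv = fgsm(model, adapted, x_qry, y_qry)
        robust_loss = F.cross_entropy(forward(model, adapted, x_adv), y_qry)
        total = total + clean_loss + lam * robust_loss
    (total / len(tasks)).backward()
    meta_opt.step()
```

A driver loop would call `meta_step(model, torch.optim.Adam(model.parameters(), lr=1e-3), task_batch)` once per meta-iteration, where each element of `task_batch` is a `(x_sup, y_sup, x_qry, y_qry)` tuple; at test time, only the standard `inner_adapt` is run on a new task, with robustness inherited from the meta-update.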