Invariant Meta Learning for Out-of-Distribution Generalization
Main author: | , , , |
---|---|
Format: | Article |
Language: | eng |
Abstract: | Modern deep learning techniques have demonstrated excellent capabilities
in many areas, but they rely on large amounts of training data. Optimization-based
meta-learning trains a model on a variety of tasks so that it can solve new
learning tasks using only a small number of training samples. However, these
methods assume that training and test data are identically and independently
distributed. To overcome this limitation, in this paper, we propose invariant
meta learning for out-of-distribution tasks. Specifically, invariant meta
learning finds an invariant optimal meta-initialization and fast-adapts to
out-of-distribution tasks with a regularization penalty. Extensive experiments
demonstrate the effectiveness of the proposed invariant meta learning on
out-of-distribution few-shot tasks. |
---|---|
DOI: | 10.48550/arxiv.2301.11779 |
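The abstract's recipe (a meta-initialization learned across tasks, plus a regularization penalty encouraging invariance across out-of-distribution environments) can be sketched roughly as below. This is a minimal illustrative reading only, not the paper's actual algorithm: the one-step MAML-style inner loop, the variance-of-losses penalty, and all names (`inner_adapt`, `meta_objective`, `lam`, the toy environments) are assumptions made for the sketch.

```python
import numpy as np

# Illustrative sketch: MAML-style inner/outer loop on toy linear regression
# tasks, where the variance of post-adaptation query losses across
# environments is added as a penalty -- one plausible reading of the
# "regularization penalty" in the abstract, not the paper's method.

rng = np.random.default_rng(0)

def task_loss(w, X, y):
    r = X @ w - y
    return 0.5 * np.mean(r ** 2)

def task_grad(w, X, y):
    return X.T @ (X @ w - y) / len(y)

def inner_adapt(w, X, y, inner_lr=0.1):
    # One gradient step of task-specific adaptation (MAML inner loop).
    return w - inner_lr * task_grad(w, X, y)

def meta_objective(w, envs, lam=1.0):
    # Mean post-adaptation query loss + lam * variance across environments.
    losses = np.array([task_loss(inner_adapt(w, Xs, ys), Xq, yq)
                       for Xs, ys, Xq, yq in envs])
    return losses.mean() + lam * losses.var()

def meta_step(w, envs, meta_lr=0.05, eps=1e-4):
    # Finite-difference meta-gradient keeps the sketch dependency-free.
    g = np.zeros_like(w)
    for i in range(len(w)):
        e = np.zeros_like(w)
        e[i] = eps
        g[i] = (meta_objective(w + e, envs)
                - meta_objective(w - e, envs)) / (2 * eps)
    return w - meta_lr * g

# Toy environments: shared true weights but shifted input distributions,
# so an "invariant" meta-initialization should recover true_w.
true_w = np.array([2.0, -1.0])

def make_env(scale, n=32):
    X = rng.normal(0.0, scale, size=(n, 2))
    y = X @ true_w + rng.normal(0.0, 0.05, size=n)
    return X[:n // 2], y[:n // 2], X[n // 2:], y[n // 2:]

envs = [make_env(s) for s in (0.5, 1.0, 2.0)]
w = np.zeros(2)
for _ in range(200):
    w = meta_step(w, envs)
print(w)  # converges near true_w
```

The variance penalty is borrowed from the invariant-risk-minimization family of ideas: it discourages initializations whose post-adaptation performance differs sharply between environments, which is one way to make the meta-initialization robust to distribution shift.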