Learn To Learn More Precisely
Format: Article
Language: English
Online Access: Order full text
Abstract: Meta-learning has been extensively applied in the domains of few-shot learning and fast adaptation, achieving remarkable performance. While meta-learning methods such as Model-Agnostic Meta-Learning (MAML) and its variants provide a good set of initial parameters for the model, the model still tends to learn shortcut features, which leads to poor generalization. In this paper, we propose the formal concept of "learn to learn more precisely", which aims to make the model learn precise target knowledge from data and reduce the effect of noisy knowledge, such as background and noise. To achieve this, we propose a simple and effective meta-learning framework named Meta Self-Distillation (MSD) that maximizes the consistency of learned knowledge, enhancing the model's ability to learn precise target knowledge. In the inner loop, MSD uses different augmented views of the same support data to update the model separately. In the outer loop, MSD then uses the same query data to optimize the consistency of the learned knowledge, enhancing the model's ability to learn more precisely. Our experiments demonstrate that MSD achieves remarkable performance on few-shot classification tasks in both standard and augmented scenarios, effectively improving the accuracy and consistency of the knowledge learned by the model.
DOI: 10.48550/arxiv.2408.04590
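The abstract describes a two-level optimization: an inner loop that adapts the model on different augmented views of the same support set, and an outer loop that optimizes the consistency of the adapted models' predictions on the same query set. The sketch below illustrates that idea in a MAML-style functional form; the tiny backbone, the random stand-in data, and the MSE-to-mean consistency loss are illustrative assumptions, not the authors' actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.func import functional_call

class Net(nn.Module):
    """Hypothetical tiny backbone standing in for a few-shot classifier."""
    def __init__(self, in_dim=32, n_classes=5):
        super().__init__()
        self.fc1 = nn.Linear(in_dim, 64)
        self.fc2 = nn.Linear(64, n_classes)
    def forward(self, x):
        return self.fc2(F.relu(self.fc1(x)))

def inner_adapt(model, params, x, y, inner_lr=0.01):
    """One inner-loop gradient step on a single augmented view of the support set."""
    logits = functional_call(model, params, (x,))
    loss = F.cross_entropy(logits, y)
    grads = torch.autograd.grad(loss, list(params.values()), create_graph=True)
    return {k: p - inner_lr * g for (k, p), g in zip(params.items(), grads)}

def msd_outer_loss(model, support_views, support_y, query_x):
    """Consistency of query predictions across models adapted on different views.

    The MSE-to-the-mean-prediction consistency term is one simple choice,
    assumed here for illustration.
    """
    base = dict(model.named_parameters())
    probs = []
    for view in support_views:                      # inner loop, one update per view
        adapted = inner_adapt(model, base, view, support_y)
        probs.append(F.softmax(functional_call(model, adapted, (query_x,)), dim=-1))
    mean_p = torch.stack(probs).mean(0)
    return sum(F.mse_loss(p, mean_p) for p in probs) / len(probs)

# Usage: random tensors stand in for one 5-way few-shot episode with two views.
model = Net()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
support_y = torch.randint(0, 5, (25,))
views = [torch.randn(25, 32) for _ in range(2)]     # two "augmented" support views
query_x = torch.randn(75, 32)

loss = msd_outer_loss(model, views, support_y, query_x)
opt.zero_grad()
loss.backward()                                     # outer loop: meta-update on consistency
opt.step()
```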