Improving Few-Shot Learning through Multi-task Representation Learning Theory
Saved in:

| Main authors: | , , , , |
|---|---|
| Format: | Article |
| Language: | eng |
| Subjects: | |
| Online access: | Order full text |
Summary: In this paper, we consider the framework of multi-task representation (MTR) learning, where the goal is to use source tasks to learn a representation that reduces the sample complexity of solving a target task. We start by reviewing recent advances in MTR theory and show that they can provide novel insights for popular meta-learning algorithms when analyzed within this framework. In particular, we highlight a fundamental difference between gradient-based and metric-based algorithms in practice and put forward a theoretical analysis to explain it. Finally, we use the derived insights to improve the performance of meta-learning methods via a new spectral-based regularization term and confirm its efficiency through experimental studies on few-shot classification benchmarks. To the best of our knowledge, this is the first contribution that puts the most recent learning bounds of MTR theory into practice for the task of few-shot classification.

DOI: 10.48550/arxiv.2010.01992
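The abstract mentions a "spectral-based regularization term" for meta-learning. As a rough illustration of what a spectral penalty on learned representations can look like, the sketch below penalizes the condition number (ratio of largest to smallest singular value) of a batch of embeddings, encouraging a well-conditioned feature matrix. This is a hypothetical minimal example for intuition only, not the paper's exact formulation; the function name `spectral_penalty`, the weighting coefficient `lam`, and the choice of penalty are all assumptions.

```python
import numpy as np

def spectral_penalty(features: np.ndarray) -> float:
    """Illustrative spectral regularizer (not the paper's exact term):
    the ratio of the largest to the smallest singular value of the
    (n_samples, dim) embedding matrix. A value near 1 means the
    representation is well-conditioned / close to isotropic."""
    # Singular values are returned in descending order.
    s = np.linalg.svd(features, compute_uv=False)
    eps = 1e-12  # guard against division by zero for rank-deficient batches
    return float(s[0] / (s[-1] + eps))

# Hypothetical usage: add the penalty, scaled by a coefficient lam,
# to the meta-training objective.
rng = np.random.default_rng(0)
Z = rng.standard_normal((32, 16))  # a batch of 32 embeddings of dim 16
task_loss = 0.0                    # placeholder for the few-shot task loss
lam = 0.1                          # assumed regularization strength
total_loss = task_loss + lam * spectral_penalty(Z)
```

In practice such a term would be computed on the encoder's outputs inside the training loop (with a differentiable SVD, e.g. in PyTorch) so that gradients flow back into the representation.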