Unsupervised Few-shot Learning via Deep Laplacian Eigenmaps
Saved in:
Main authors: | , |
---|---|
Format: | Article |
Language: | eng |
Subjects: | |
Online access: | Order full text |
Summary: | Learning a new task from a handful of examples remains an open challenge in machine learning. Despite recent progress in few-shot learning, most methods rely on supervised pretraining or meta-learning on labeled meta-training data and cannot be applied when the pretraining data is unlabeled. In this study, we present an unsupervised few-shot learning method based on deep Laplacian eigenmaps. Our method learns representations from unlabeled data by grouping similar samples together and can be intuitively interpreted via random walks on the augmented training data. We analytically show how deep Laplacian eigenmaps avoid collapsed representations in unsupervised learning without explicit comparison between positive and negative samples. The proposed method significantly closes the performance gap between supervised and unsupervised few-shot learning. Our method also achieves performance comparable to current state-of-the-art self-supervised learning methods under the linear evaluation protocol. |
---|---|
DOI: | 10.48550/arxiv.2210.03595 |
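
For a concrete picture of the idea sketched in the summary, below is a minimal PyTorch sketch of a Laplacian-eigenmaps-style objective: two augmented views of the same sample are treated as neighboring nodes in the augmentation graph and pulled together, while a soft orthonormality penalty on the embedding covariance rules out the collapsed (constant) solution without contrasting positives against negatives. The function name, the squared-distance attraction term, and the covariance penalty weight `lam` are illustrative assumptions, not the paper's exact objective or hyperparameters.

```python
import torch


def laplacian_eigenmap_loss(z1: torch.Tensor, z2: torch.Tensor,
                            lam: float = 1.0) -> torch.Tensor:
    """Sketch of a Laplacian-eigenmaps-style self-supervised loss.

    z1, z2: embeddings of two augmented views of the same batch, shape (B, D).
    """
    # Attraction term: augmentations of the same image are neighbors in the
    # augmentation graph, so their embeddings are pulled together.
    attract = (z1 - z2).pow(2).sum(dim=1).mean()

    # Soft orthonormality constraint: push the batch covariance of the
    # embeddings toward the identity. This forbids the trivial collapsed
    # (constant) embedding without any explicit negative pairs.
    z = torch.cat([z1, z2], dim=0)
    z = z - z.mean(dim=0, keepdim=True)
    cov = z.T @ z / z.shape[0]
    ortho = (cov - torch.eye(z.shape[1], device=z.device)).pow(2).sum()

    return attract + lam * ortho


# Usage with random stand-in embeddings:
z1 = torch.randn(256, 128)
z2 = z1 + 0.1 * torch.randn(256, 128)
loss = laplacian_eigenmap_loss(z1, z2)
```

The penalty mirrors the orthonormality constraint of classical Laplacian eigenmaps (minimize tr(Zᵀ L Z) subject to Zᵀ D Z = I), relaxed into an additive term so the whole objective can be trained end to end with a deep encoder.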