Minimalistic Unsupervised Learning with the Sparse Manifold Transform
Published in: The Eleventh International Conference on Learning Representations (ICLR 2023)
Saved in:
Main authors: |  |
Format: | Article |
Language: | English |
Subjects: |  |
Online access: | Order full text |
Summary: | We describe a minimalistic and interpretable method for unsupervised learning, without resorting to data augmentation, hyperparameter tuning, or other engineering designs, that achieves performance close to the state-of-the-art (SOTA) self-supervised learning (SSL) methods. Our approach leverages the sparse manifold transform, which unifies sparse coding, manifold learning, and slow feature analysis. With a one-layer deterministic sparse manifold transform, one can achieve 99.3% KNN top-1 accuracy on MNIST, 81.1% KNN top-1 accuracy on CIFAR-10, and 53.2% on CIFAR-100. With a simple gray-scale augmentation, the model gets 83.2% KNN top-1 accuracy on CIFAR-10 and 57% on CIFAR-100. These results significantly close the gap between simplistic "white-box" methods and the SOTA methods. Additionally, we provide visualizations to explain how an unsupervised representation transform is formed. The proposed method is closely connected to latent-embedding self-supervised methods and can be treated as the simplest form of VICReg. Though there remains a small performance gap between our simple constructive model and SOTA methods, the evidence points to this as a promising direction for achieving a principled and white-box approach to unsupervised learning. (A minimal illustrative code sketch of this kind of pipeline follows the record.) |
DOI: | 10.48550/arxiv.2209.15261 |
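
The summary above describes the method only at a high level: a one-layer, deterministic sparse manifold transform built from sparse coding, manifold learning, and slow feature analysis, evaluated by KNN top-1 accuracy. The paper itself is the authoritative reference; as a rough, non-authoritative illustration of that kind of pipeline, the toy Python sketch below sparse-codes pairs of "neighboring" patches against a random dictionary with a hard top-k rule, then solves a generalized eigenproblem for a linear projection that keeps neighbors close while the embedded codes stay decorrelated. Every name and number in it (`x_a`, `x_b`, `sparse_code`, the dictionary `D`, `k=8`, and all dimensions) is an assumption made for the example, not taken from the paper.

```python
# Illustrative sketch only: a toy "sparse coding -> slowness-constrained linear
# embedding -> KNN-ready features" pipeline in the spirit of the abstract above.
# The random dictionary, top-k sparsification, pair construction, and all sizes
# are assumptions for demonstration, not the paper's actual configuration.
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)

# Toy data: pairs of "neighboring" patches that the embedding should map close together.
n_pairs, patch_dim, dict_size, embed_dim = 2000, 64, 256, 32
x_a = rng.normal(size=(n_pairs, patch_dim))
x_b = x_a + 0.1 * rng.normal(size=(n_pairs, patch_dim))   # slowly varying neighbor of x_a

# Step 1: sparse coding against a fixed random dictionary (stand-in for a learned one).
D = rng.normal(size=(patch_dim, dict_size))
D /= np.linalg.norm(D, axis=0, keepdims=True)

def sparse_code(x, k=8):
    """Hard top-k sparse code: keep only the k largest-magnitude dictionary responses."""
    a = x @ D                                        # (n, dict_size) responses
    drop = np.argsort(-np.abs(a), axis=1)[:, k:]     # indices of responses to zero out
    np.put_along_axis(a, drop, 0.0, axis=1)
    return a

A_a, A_b = sparse_code(x_a), sparse_code(x_b)

# Step 2: a linear projection that keeps codes of neighboring patches close (slowness)
# while the embedded codes stay decorrelated (a whitening-style constraint, loosely
# analogous to VICReg's variance/covariance terms). Solved as a generalized eigenproblem.
diff = A_a - A_b
S = diff.T @ diff / n_pairs                               # covariance of neighbor differences
C = A_a.T @ A_a / n_pairs + 1e-6 * np.eye(dict_size)      # code covariance (regularized)

evals, evecs = eigh(S, C)                 # eigenvalues of S v = lambda C v, ascending
P = evecs[:, :embed_dim].T                # slowest, decorrelated directions: (embed_dim, dict_size)

# Step 3: embed the codes; the abstract's evaluation protocol is KNN top-1 accuracy
# computed on features like these.
z_a, z_b = A_a @ P.T, A_b @ P.T
print("mean neighbor distance in embedding:", float(np.linalg.norm(z_a - z_b, axis=1).mean()))
```

In a real implementation the dictionary would be learned rather than random, the neighbor pairs would come from spatially or temporally adjacent image patches, and the resulting embeddings would be scored with a KNN top-1 classifier (e.g., scikit-learn's `KNeighborsClassifier`) against the dataset labels.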