On the Generalization and Causal Explanation in Self-Supervised Learning
Format: Article
Language: English
Abstract: Self-supervised learning (SSL) methods learn from unlabeled data and achieve high generalization performance on downstream tasks. However, they may also overfit to their training data and lose the ability to adapt to new tasks. To investigate this phenomenon, we conduct experiments on various SSL methods and datasets and make two observations: (1) overfitting occurs abruptly in later layers and later epochs, while generalizing features are learned in the early layers across all epochs; (2) coding rate reduction can be used as an indicator of the degree of overfitting in SSL models. Based on these observations, we propose the Undoing Memorization Mechanism (UMM), a plug-and-play method that mitigates overfitting of the pre-trained feature extractor by aligning the feature distributions of the early and last layers so as to maximize the coding rate reduction of the last-layer output. UMM is learned via bi-level optimization. We provide a causal analysis of UMM that explains how it helps the pre-trained feature extractor overcome overfitting and recover generalization, and we demonstrate that UMM significantly improves the generalization performance of SSL methods on various downstream tasks.
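The overfitting indicator named in the abstract, coding rate reduction, comes from the maximal coding rate reduction (MCR²) literature: the global coding rate of a feature matrix minus the size-weighted rates of its per-group submatrices. The sketch below is only a minimal NumPy illustration of that quantity under assumed settings; the function names, the ε parameter, and the use of pseudo-group labels for the partition are illustrative assumptions, and it is not the paper's UMM procedure or its bi-level optimization.

```python
import numpy as np

def coding_rate(Z, eps=0.5):
    """R(Z, eps) = 1/2 * logdet(I + d/(n*eps^2) * Z @ Z.T) for features Z of shape (d, n)."""
    d, n = Z.shape
    return 0.5 * np.linalg.slogdet(np.eye(d) + (d / (n * eps ** 2)) * Z @ Z.T)[1]

def coding_rate_reduction(Z, labels, eps=0.5):
    """Delta R = R(Z) - sum_j (n_j / n) * R(Z_j): the global coding rate minus the
    size-weighted rates of the per-group feature submatrices Z_j."""
    d, n = Z.shape
    compressed = 0.0
    for c in np.unique(labels):
        Zc = Z[:, labels == c]                      # features assigned to group c
        compressed += (Zc.shape[1] / n) * coding_rate(Zc, eps)
    return coding_rate(Z, eps) - compressed

# Toy usage: 200 samples with 32-dimensional features, partitioned into 4 pseudo-groups.
rng = np.random.default_rng(0)
Z = rng.normal(size=(32, 200))
labels = rng.integers(0, 4, size=200)
print(coding_rate_reduction(Z, labels))
```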
DOI: 10.48550/arxiv.2410.00772