LatentDR: Improving Model Generalization Through Sample-Aware Latent Degradation and Restoration
Main authors: | |
---|---|
Format: | Article |
Language: | English |
Subjects: | |
Online access: | Order full text |
Abstract: | Despite significant advances in deep learning, models often struggle to
generalize well to new, unseen domains, especially when training data is
limited. To address this challenge, we propose a novel approach for
distribution-aware latent augmentation that leverages the relationships across
samples to guide the augmentation procedure. Our approach first degrades the
samples stochastically in the latent space, mapping them to augmented labels,
and then restores the samples from their corrupted versions during training.
This process confuses the classifier in the degradation step and restores the
overall class distribution of the original samples, promoting diverse
intra-class/cross-domain variability. We extensively evaluate our approach on a
diverse set of datasets and tasks, including domain generalization benchmarks
and medical imaging datasets with strong domain shift, where we show our
approach achieves significant improvements over existing methods for latent
space augmentation. We further show that our method can be flexibly adapted to
long-tail recognition tasks, demonstrating its versatility in building more
generalizable models. Code is available at
https://github.com/nerdslab/LatentDR. |
---|---|
DOI: | 10.48550/arxiv.2308.14596 |
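The abstract's two-step idea, stochastically degrading samples in latent space and then training a module to restore them, can be illustrated with a toy numpy sketch. Everything here is a hypothetical stand-in: the Gaussian corruption, the linear restoration head, and the least-squares fit are illustrative only and do not reproduce the paper's sample-aware degradation or its actual training procedure (see the repository at https://github.com/nerdslab/LatentDR for the real implementation).

```python
import numpy as np

rng = np.random.default_rng(0)

def degrade(z, strength=0.5, rng=rng):
    """Stochastically corrupt latent features with Gaussian noise
    (stand-in for the paper's sample-aware degradation)."""
    return z + strength * rng.standard_normal(z.shape)

def restore(z_degraded, W):
    """Linear restoration head mapping corrupted latents back
    toward the originals (stand-in for a learned restorer)."""
    return z_degraded @ W

# Toy latents: 4 samples, 8 dimensions.
Z = rng.standard_normal((4, 8))
Zd = degrade(Z)

# Fit the restoration head by least squares, a crude proxy
# for training the restorer against the original latents.
W, *_ = np.linalg.lstsq(Zd, Z, rcond=None)
Zr = restore(Zd, W)

# Restoration should bring the corrupted latents closer to the
# originals than the raw degraded versions are.
degradation_error = float(np.mean((Zd - Z) ** 2))
restoration_error = float(np.mean((Zr - Z) ** 2))
```

In the paper's framing, the degradation step is what "confuses the classifier" and promotes intra-class/cross-domain variability, while the restoration step preserves the overall class distribution; here that split is reduced to a noise/denoise pair on fixed vectors.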