Sample what you can't compress
Format: Article
Language: English
Online access: Order full text
Abstract: For learned image representations, basic autoencoders often produce blurry results. Reconstruction quality can be improved by incorporating additional penalties such as adversarial (GAN) and perceptual losses. Arguably, these approaches lack a principled interpretation. Concurrently, in generative settings diffusion has demonstrated a remarkable ability to create crisp, high-quality results and has solid theoretical underpinnings (from variational inference to direct study as the Fisher Divergence). Our work combines autoencoder representation learning with diffusion and is, to our knowledge, the first to demonstrate the efficacy of jointly learning a continuous encoder and decoder under a diffusion-based loss. We demonstrate that this approach yields better reconstruction quality than GAN-based autoencoders while being easier to tune. We also show that the resulting representation is easier to model with a latent diffusion model than the representation obtained from a state-of-the-art GAN-based loss. Since our decoder is stochastic, it can generate details not encoded in the otherwise deterministic latent representation; we therefore name our approach "Sample what you can't compress", or SWYCC for short.
DOI: 10.48550/arxiv.2409.02529