The Exponentially Tilted Gaussian Prior for Variational Autoencoders
Main authors: (not listed in this record)
Format: Article
Language: English
Subjects: (none listed)
Online access: Order full text
Abstract: An important property for deep neural networks is the ability to perform robust out-of-distribution detection on previously unseen data. This property is essential for safety when deploying models in real-world applications. Recent studies show that probabilistic generative models can perform poorly on this task, which is surprising given that they seek to estimate the likelihood of training data. To alleviate this issue, we propose the exponentially tilted Gaussian prior distribution for the Variational Autoencoder (VAE), which pulls points onto the surface of a hypersphere in latent space. This achieves state-of-the-art results on the area under the receiver operating characteristic curve (AUROC) metric using just the log-likelihood that the VAE naturally assigns. Because this prior is a simple modification of the traditional VAE prior, it is faster and easier to implement than competing methods.
DOI: 10.48550/arxiv.2111.15646
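The effect of the prior modification described in the abstract can be pictured with a short numerical sketch. The snippet below is a minimal illustration and not the authors' implementation; it assumes the tilted prior takes the form p(z) ∝ exp(τ·||z||)·N(z; 0, I), and shows how increasing the tilt parameter τ moves the mode of the latent-norm distribution outward, so that latent codes concentrate near a hypersphere.

```python
# Minimal sketch (assumed density form, not the paper's code): an exponentially
# tilted Gaussian reweights the standard Gaussian by exp(tau * ||z||), which
# pushes probability mass toward a hypersphere in latent space.
import numpy as np

def radial_log_density(r, dim, tau):
    """Unnormalized log density of the latent norm ||z|| = r under the tilted prior.

    For a standard Gaussian in `dim` dimensions the norm has density proportional
    to r^(dim-1) * exp(-r^2 / 2); the exponential tilt multiplies in exp(tau * r).
    """
    return (dim - 1) * np.log(r) + tau * r - 0.5 * r ** 2

def radial_mode(dim, tau):
    """Radius at which the tilted radial density peaks (derivative set to zero)."""
    # d/dr [(dim-1) ln r + tau r - r^2 / 2] = 0  =>  r^2 - tau r - (dim - 1) = 0
    return 0.5 * (tau + np.sqrt(tau ** 2 + 4.0 * (dim - 1)))

if __name__ == "__main__":
    dim = 32  # illustrative latent dimensionality
    for tau in (0.0, 5.0, 20.0):
        print(f"tau = {tau:5.1f}: radial density peaks at ||z|| ~ {radial_mode(dim, tau):.2f}")
    # With tau = 0 this reduces to the usual Gaussian prior (peak near sqrt(dim - 1));
    # larger tau pulls the peak outward, concentrating latents near a hypersphere.
```

Because the tilt only changes the prior (and hence the KL term of the VAE objective) by a reweighting of the standard Gaussian, the usual log-likelihood score the VAE assigns can be used unchanged for out-of-distribution detection, which is the simplicity the abstract highlights.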