Doubly Semi-Implicit Variational Inference
Saved in:
Main authors:
Format: Article
Language: English
Subjects:
Online access: Order full text
Abstract: We extend the existing framework of semi-implicit variational inference (SIVI) and introduce doubly semi-implicit variational inference (DSIVI), a way to perform variational inference and learning when both the approximate posterior and the prior distribution are semi-implicit. In other words, DSIVI performs inference in models where the prior and the posterior can be expressed as an intractable infinite mixture of some analytic density with a highly flexible implicit mixing distribution. We provide a sandwich bound on the evidence lower bound (ELBO) objective that can be made arbitrarily tight. Unlike discriminator-based and kernel-based approaches to implicit variational inference, DSIVI optimizes a proper lower bound on ELBO that is asymptotically exact. We evaluate DSIVI on a set of problems that benefit from implicit priors. In particular, we show that DSIVI gives rise to a simple modification of VampPrior, the current state-of-the-art prior for variational autoencoders, which improves its performance.
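To illustrate the "intractable infinite mixture of some analytic density" construction that the abstract refers to, here is a minimal sketch in a toy case where the marginal happens to be tractable, so a Monte Carlo mixture estimate can be checked against the exact density. All symbols and values here (Gaussian conditional and mixing distribution, `sigma`, `tau`, `K`) are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy semi-implicit distribution (hypothetical example, not from the paper):
# analytic conditional q(z | psi) = N(z; psi, sigma^2), mixing psi ~ N(0, tau^2).
# Here the marginal q(z) = ∫ q(z|psi) q(psi) dpsi is N(0, sigma^2 + tau^2),
# so we can compare the finite-sample mixture estimate with the exact density.
sigma, tau = 0.5, 1.0

def log_normal_pdf(x, mean, var):
    """Log density of N(x; mean, var)."""
    return -0.5 * (np.log(2 * np.pi * var) + (x - mean) ** 2 / var)

def mc_log_marginal(z, K):
    """log q_K(z) = log (1/K) * sum_k q(z | psi_k), with psi_k ~ mixing dist."""
    psi = rng.normal(0.0, tau, size=K)
    log_terms = log_normal_pdf(z, psi, sigma ** 2)
    # log-mean-exp for numerical stability
    m = log_terms.max()
    return m + np.log(np.mean(np.exp(log_terms - m)))

z = 0.7
exact = log_normal_pdf(z, 0.0, sigma ** 2 + tau ** 2)
approx = mc_log_marginal(z, K=200_000)
print(abs(exact - approx))  # gap shrinks as K grows
```

In a genuinely semi-implicit model the mixing samples `psi_k` would come from an implicit sampler (e.g. a neural network fed with noise), the marginal would have no closed form, and finite-`K` mixtures of this kind are what make the sandwich bounds on the ELBO computable.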
DOI: 10.48550/arxiv.1810.02789