Arbitrary conditional inference in variational autoencoders via fast prior network training

Bibliographic details
Published in: Machine Learning, 2022-07, Vol. 111 (7), pp. 2537-2559
Authors: Wu, Ga; Domke, Justin; Sanner, Scott
Format: Article
Language: English
Description
Abstract: Variational Autoencoders (VAEs) are a popular generative model, but one in which conditional inference can be challenging. If the decomposition into query and evidence variables is fixed, conditionally trained VAEs provide an attractive solution. However, to efficiently support arbitrary queries over pre-trained VAEs when the query and evidence are not known in advance, one is generally reduced to MCMC sampling methods that can suffer from long mixing times. In this paper, we propose efficiently training small conditional prior networks to approximate the latent distribution of the VAE after conditioning on an evidence assignment; this permits generating query samples without retraining the full VAE. We experimentally evaluate three variations of conditional prior networks, showing that (i) they can be quickly optimized for different decompositions of evidence and query, and (ii) they quantitatively and qualitatively outperform existing state-of-the-art methods for conditional inference in pre-trained VAEs.
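
To make the recipe the abstract describes easier to picture, below is a minimal PyTorch sketch: fit a small "conditional prior" network q(z | x_E) against a frozen, pre-trained decoder for one particular evidence/query split. The names ConditionalPrior and train_conditional_prior are hypothetical illustrations, not the authors' code, and the objective shown (reconstructing the evidence dimensions through the frozen decoder plus a KL term to the standard-normal VAE prior) is one plausible variational choice, not necessarily any of the paper's three variants.

import torch
import torch.nn as nn

class ConditionalPrior(nn.Module):
    # Small network mapping an evidence assignment x_E to a Gaussian over z.
    def __init__(self, evidence_dim, latent_dim, hidden=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(evidence_dim, hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, latent_dim)
        self.log_var = nn.Linear(hidden, latent_dim)

    def forward(self, x_evidence):
        h = self.net(x_evidence)
        return self.mu(h), self.log_var(h)

def train_conditional_prior(decoder, x_evidence, evidence_idx, latent_dim, steps=500):
    # Fit q(z | x_E); the pre-trained VAE decoder stays frozen throughout
    # (only the prior network's parameters are given to the optimizer).
    prior = ConditionalPrior(x_evidence.shape[1], latent_dim)
    opt = torch.optim.Adam(prior.parameters(), lr=1e-3)
    for _ in range(steps):
        mu, log_var = prior(x_evidence)
        # Reparameterized sample from the candidate conditional prior.
        z = mu + torch.randn_like(mu) * (0.5 * log_var).exp()
        recon = decoder(z)  # assumed: (batch, latent_dim) -> (batch, data_dim)
        # Match only the evidence dimensions of the decoded output.
        recon_loss = ((recon[:, evidence_idx] - x_evidence) ** 2).sum(dim=1)
        # KL(q(z|x_E) || N(0, I)) keeps z in the region the decoder was trained on.
        kl = 0.5 * (mu ** 2 + log_var.exp() - log_var - 1).sum(dim=1)
        loss = (recon_loss + kl).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return prior

Once fitted, conditional inference amounts to sampling z from the learned Gaussian and decoding: the query dimensions of decoder(z) are approximate samples from p(x_Q | x_E). Since only the small prior network is optimized per evidence/query split, arbitrary decompositions stay cheap relative to retraining the VAE or running MCMC in latent space.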
ISSN: 0885-6125 (print); 1573-0565 (electronic)
DOI: 10.1007/s10994-022-06171-2