Neural Variational Inference and Learning in Belief Networks
Saved in:
Main authors: Andriy Mnih, Karol Gregor
Format: Article
Language: eng
Online access: Order full text
Abstract: Proceedings of the 31st International Conference on Machine Learning (ICML),
JMLR: W&CP volume 32, 2014, pp. 1791-1799. Highly expressive directed latent variable
models, such as sigmoid belief networks, are difficult to train on large datasets
because exact inference in them is intractable and none of the approximate inference
methods that have been applied to them scale well. We propose a fast non-iterative
approximate inference method that uses a feedforward network to implement efficient
exact sampling from the variational posterior. The model and this inference network
are trained jointly by maximizing a variational lower bound on the log-likelihood.
Although the naive estimator of the inference model gradient is too high-variance to
be useful, we make it practical by applying several straightforward model-independent
variance reduction techniques. Applying our approach to training sigmoid belief
networks and deep autoregressive networks, we show that it outperforms the wake-sleep
algorithm on MNIST and achieves state-of-the-art results on the Reuters RCV1 document
dataset.
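The lower bound and the high-variance gradient estimator the abstract refers to can be written out in standard variational notation. The sketch below is not quoted from the paper; the symbols P_theta(x, h) (generative model), Q_phi(h | x) (feedforward inference network), learning signal ℓ, and baseline b are notation introduced here for illustration:

\[
\mathcal{L}(x;\theta,\phi) = \mathbb{E}_{Q_\phi(h\mid x)}\big[\log P_\theta(x,h) - \log Q_\phi(h\mid x)\big] \le \log P_\theta(x),
\]
\[
\nabla_\phi \mathcal{L} = \mathbb{E}_{Q_\phi(h\mid x)}\big[\ell(x,h)\,\nabla_\phi \log Q_\phi(h\mid x)\big],
\qquad \ell(x,h) = \log P_\theta(x,h) - \log Q_\phi(h\mid x).
\]

The second expression is the naive score-function estimator described as too high-variance to be useful; a standard instance of the model-independent variance reduction the abstract mentions is to replace the learning signal \(\ell(x,h)\) with a centred and rescaled version such as \(\ell(x,h) - b\), where a baseline \(b\) that does not depend on \(h\) leaves the estimator unbiased because \(\mathbb{E}_{Q_\phi(h\mid x)}[\nabla_\phi \log Q_\phi(h\mid x)] = 0\).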
DOI: 10.48550/arxiv.1402.0030