Universal Marginalizer for Amortised Inference and Embedding of Generative Models
Saved in:
Main Authors: | , , , , , , , , |
---|---|
Format: | Article |
Language: | eng |
Subjects: | |
Online Access: | Order full text |
Summary: | Probabilistic graphical models are powerful tools which allow us to formalise
our knowledge about the world and reason about its inherent uncertainty. There
exist a considerable number of methods for performing inference in
probabilistic graphical models; however, they can be computationally costly due
to significant time and/or storage requirements, or they lack theoretical
guarantees of convergence and accuracy when applied to large-scale graphical
models. To this end, we propose the Universal Marginaliser Importance Sampler
(UM-IS) -- a hybrid inference scheme that combines the flexibility of a deep
neural network trained on samples from the model with the asymptotic guarantees
of importance sampling. We show how combining samples drawn from the graphical
model with an appropriate masking function allows us to train a single neural
network to approximate any of the corresponding conditional marginal
distributions, and thus amortise the cost of inference. We also show that the
resulting graph embeddings can be applied to tasks such as clustering,
classification and interpretation of relationships between the nodes. Finally,
we benchmark the method on a large graph (>1000 nodes), showing that UM-IS
outperforms sampling-based methods by a large margin while remaining
computationally efficient. |
---|---|
DOI: | 10.48550/arxiv.1811.04727 |
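The two core ingredients described in the abstract -- a masking function that turns joint samples into training pairs for a single conditional-marginal network, and self-normalised importance sampling over the generative model -- can be sketched in miniature. The following is an illustrative toy example only, assuming a hypothetical two-node binary network (it is not the paper's model, architecture, or code):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy generative model (illustrative only, not from the paper):
# X ~ Bernoulli(0.3), Y | X ~ Bernoulli(0.8 if X else 0.1).
def sample_joint(n):
    x = rng.random(n) < 0.3
    y = rng.random(n) < np.where(x, 0.8, 0.1)
    return np.stack([x, y], axis=1).astype(float)

# Masking function: hide each entry independently, so one network can be
# trained to predict any conditional marginal from partial observations.
# Hidden entries get the sentinel value 0.5; the (masked, full) pairs would
# serve as (input, target) training data for the Universal Marginaliser.
def mask(samples, p_hide=0.5):
    hidden = rng.random(samples.shape) < p_hide
    masked = samples.copy()
    masked[hidden] = 0.5
    return masked

# Self-normalised importance sampling for P(X=1 | Y=1), using the prior as
# the proposal; in UM-IS the trained network would instead supply a proposal
# closer to the posterior, reducing the variance of the estimate.
def posterior_x_given_y1(n=200_000):
    s = sample_joint(n)
    w = s[:, 1]  # weight = indicator that the evidence Y=1 holds
    return float((w * s[:, 0]).sum() / w.sum())

est = posterior_x_given_y1()
# Exact value by Bayes' rule: 0.3*0.8 / (0.3*0.8 + 0.7*0.1) = 24/31
```

The masking step is what amortises inference: because the hidden pattern varies per sample, a single network sees every evidence configuration during training, rather than one network per query.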