Embedded-model flows: Combining the inductive biases of model-free deep learning and explicit probabilistic modeling
Saved in:
Main authors: , , ,
Format: Article
Language: English
Subjects:
Online access: Order full text
Abstract: Normalizing flows have shown great success as general-purpose density estimators. However, many real-world applications require the use of domain-specific knowledge, which normalizing flows cannot readily incorporate. We propose embedded-model flows (EMF), which alternate general-purpose transformations with structured layers that embed domain-specific inductive biases. These layers are automatically constructed by converting user-specified differentiable probabilistic models into equivalent bijective transformations. We also introduce gated structured layers, which allow bypassing the parts of the model that fail to capture the statistics of the data. We demonstrate that EMFs can be used to induce desirable properties such as multimodality, hierarchical coupling, and continuity. Furthermore, we show that EMFs enable a high-performance form of variational inference where the structure of the prior model is embedded in the variational architecture. In our experiments, we show that this approach outperforms state-of-the-art methods in common structured inference problems.
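
To make the abstract's construction concrete, here is a minimal NumPy sketch under stated assumptions: a two-level Gaussian hierarchy is rewritten as an invertible "structured layer" on standard-normal noise, and a gate interpolates between that layer and the identity. The function names and the exact gating form are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

# Hypothetical sketch (names are illustrative, not the authors' API).
# Following the abstract, a differentiable probabilistic model can be
# rewritten as an equivalent bijection on standard-normal noise. For the
# two-level Gaussian hierarchy
#     z1 ~ N(0, 1),   z2 ~ N(z1, sigma^2)
# the reparameterized sampler x1 = e1, x2 = x1 + sigma * e2 is invertible,
# so the model itself can serve as a structured layer inside a flow.

def structured_forward(eps, sigma=0.5):
    """Map standard-normal noise eps of shape (..., 2) to a model sample."""
    x1 = eps[..., 0]
    x2 = x1 + sigma * eps[..., 1]  # child coordinate depends on its parent
    return np.stack([x1, x2], axis=-1)

def structured_inverse(x, sigma=0.5):
    """Invert the structured layer (needed to evaluate densities)."""
    e1 = x[..., 0]
    e2 = (x[..., 1] - x[..., 0]) / sigma
    return np.stack([e1, e2], axis=-1)

def gated_forward(eps, gate, sigma=0.5):
    """One plausible reading of a gated structured layer: interpolate
    between the identity and the structured transform, so the flow can
    bypass a model component that does not fit the data. The map stays
    affine and invertible for gate in [0, 1] and sigma > 0."""
    return (1.0 - gate) * eps + gate * structured_forward(eps, sigma)

# Usage: push noise through the gated layer; a gate near 1 trusts the model.
eps = np.random.default_rng(0).standard_normal((1000, 2))
x = gated_forward(eps, gate=0.8)
```

In a full EMF, layers of this kind would alternate with general-purpose flow transformations (e.g., affine coupling layers), with the gates learned jointly with the rest of the flow.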
DOI: 10.48550/arxiv.2110.06021