Physics-Integrated Variational Autoencoders for Robust and Interpretable Generative Modeling
Saved in:
Main authors: | , |
---|---|
Format: | Article |
Language: | English |
Subjects: | |
Online access: | Order full text |
Abstract: | Integrating physics models within machine learning models holds considerable
promise toward learning robust models with improved interpretability and
abilities to extrapolate. In this work, we focus on the integration of
incomplete physics models into deep generative models. In particular, we
introduce an architecture of variational autoencoders (VAEs) in which a part of
the latent space is grounded by physics. A key technical challenge is to strike
a balance between the incomplete physics and trainable components such as
neural networks for ensuring that the physics part is used in a meaningful
manner. To this end, we propose a regularized learning method that controls the
effect of the trainable components and preserves the semantics of the
physics-based latent variables as intended. We not only demonstrate generative
performance improvements over a set of synthetic and real-world datasets, but
we also show that we learn robust models that can consistently extrapolate
beyond the training distribution in a meaningful manner. Moreover, we show that
we can control the generative process in an interpretable manner. |
DOI: | 10.48550/arxiv.2102.13156 |
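
To make the abstract's idea concrete, below is a minimal PyTorch sketch of a VAE whose latent space is partly grounded by an incomplete physics model, with a regularizer that limits the trainable correction. Everything here is an illustrative assumption, not the paper's exact architecture or loss: the toy physics decoder `physics_decode` (an undamped oscillator with a latent frequency), the latent split into `z_phys` and `z_aux`, the correction network `correct`, and the penalty weight `alpha`.

```python
import torch
import torch.nn as nn

class PhysVAE(nn.Module):
    def __init__(self, obs_dim=50, z_aux_dim=2, hidden=64):
        super().__init__()
        # Encoder yields a physics-grounded latent (here: one frequency)
        # plus an unconstrained auxiliary latent.
        self.enc = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU())
        self.to_mu = nn.Linear(hidden, 1 + z_aux_dim)
        self.to_logvar = nn.Linear(hidden, 1 + z_aux_dim)
        # Trainable correction applied on top of the physics decoder's output.
        self.correct = nn.Sequential(
            nn.Linear(obs_dim + z_aux_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, obs_dim),
        )

    def physics_decode(self, z_phys, obs_dim):
        # Incomplete physics: an undamped oscillation; any damping or other
        # unmodeled effects must come from the trainable correction.
        t = torch.linspace(0.0, 1.0, obs_dim, device=z_phys.device)
        return torch.cos(2.0 * torch.pi * z_phys * t.unsqueeze(0))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        # Reparameterization trick.
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        z_phys, z_aux = z[:, :1], z[:, 1:]
        x_phys = self.physics_decode(z_phys, x.shape[1])
        x_hat = x_phys + self.correct(torch.cat([x_phys, z_aux], dim=1))
        return x_hat, x_phys, mu, logvar

def loss_fn(x, x_hat, x_phys, mu, logvar, alpha=1.0):
    recon = ((x - x_hat) ** 2).sum(dim=1).mean()
    kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(dim=1).mean()
    # Regularizer: penalize the correction's magnitude so the physics latent
    # keeps its intended semantics (one way to realize the abstract's
    # "controlled trainable components"; the exact penalty is assumed).
    reg = ((x_hat - x_phys) ** 2).sum(dim=1).mean()
    return recon + kl + alpha * reg

model = PhysVAE()
x = torch.randn(8, 50)  # a batch of toy trajectories
x_hat, x_phys, mu, logvar = model(x)
loss_fn(x, x_hat, x_phys, mu, logvar).backward()
```

The weight `alpha` is the balancing knob the abstract alludes to: a larger value forces reconstructions to stay close to the physics output, preserving interpretability and extrapolation, while a smaller value lets the neural correction absorb more of the signal.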