Diffeomorphic Latent Neural Operators for Data-Efficient Learning of Solutions to Partial Differential Equations
Format: Article
Language: English
Abstract: A computed approximation of the solution operator of a system of partial differential equations (PDEs) is needed in many areas of science and engineering. Neural operators have proven effective at predicting these solution operators after training on high-fidelity ground-truth data (e.g., numerical simulations). However, to generalize well to unseen spatial domains, neural operators must be trained on an extensive amount of geometrically varying data, which may be infeasible to acquire or simulate in certain contexts (e.g., patient-specific medical data or large-scale, computationally intensive simulations). We propose that, to learn a PDE solution operator that generalizes across multiple domains without sampling data expressive enough for all possible geometries, one can instead train a latent neural operator on just a few ground-truth solution fields diffeomorphically mapped from their different geometric/spatial domains to a fixed reference configuration. The form of the solutions depends on the choice of mapping to and from the reference domain, and we emphasize that constructing these mappings to preserve properties of the differential operator can significantly reduce the data requirement for an accurate model, owing to the regularity of the solution fields on which the latent neural operator trains. We provide motivating numerical experiments that demonstrate an extreme case of this consideration by exploiting the conformal invariance of the Laplacian.
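As a worked statement of the invariance the abstract invokes (the identity itself is standard; the notation here is ours): in two dimensions, if \phi is holomorphic with \phi' \neq 0, and hence conformal, the Laplacian transforms by a positive conformal factor,

```latex
\Delta\,(u \circ \phi)(z) \;=\; |\phi'(z)|^{2}\,(\Delta u)\bigl(\phi(z)\bigr),
```

so \Delta u = 0 on the physical domain implies \Delta (u \circ \phi) = 0 on the reference domain: harmonic solutions pull back to harmonic fields, which is one way a mapping can preserve the regularity the abstract highlights.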
DOI: 10.48550/arxiv.2411.18014
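To make the invariance concrete, the following minimal sketch (ours, not the paper's code; the map phi(z) = z^2 and the field u(w) = Re(w^3) are illustrative assumptions) checks numerically that a harmonic field pulled back through a conformal map stays harmonic on the reference grid:

```python
import numpy as np

# Reference grid on a square away from the origin, where phi(z) = z^2
# is conformal (phi'(z) != 0 there).
n = 201
x = np.linspace(1.0, 2.0, n)
y = np.linspace(1.0, 2.0, n)
X, Y = np.meshgrid(x, y, indexing="ij")
Z = X + 1j * Y          # reference-domain coordinates
W = Z ** 2              # conformal map to the physical domain

# A harmonic field on the physical domain: u(w) = Re(w^3).
# Evaluated at W, this is exactly the pullback v(z) = Re(z^6).
v = np.real(W ** 3)

# Five-point-stencil Laplacian of the pulled-back field on the reference grid.
h = x[1] - x[0]
lap = (v[2:, 1:-1] + v[:-2, 1:-1]
       + v[1:-1, 2:] + v[1:-1, :-2]
       - 4.0 * v[1:-1, 1:-1]) / h ** 2

# The discrete Laplacian vanishes up to truncation error: the pullback of a
# harmonic function through a holomorphic map is again harmonic.
print("max |Laplacian| of pulled-back field:", np.max(np.abs(lap)))
```

The printed maximum is on the order of the stencil's O(h^2) truncation error, consistent with the pulled-back field Re(z^6) being exactly harmonic on the reference domain.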