Accelerating PDE-constrained Inverse Solutions with Deep Learning and Reduced Order Models

Inverse problems are pervasive mathematical tools for inferring knowledge from observational and experimental data by leveraging simulations and models. Unlike direct inference methods, inverse problem approaches typically require many forward model solves, usually governed by partial differential equations (PDEs). This is a crucial bottleneck in determining the feasibility of such methods. While machine learning (ML) methods, such as deep neural networks (DNNs), can be employed to learn nonlinear forward models, designing a network architecture that preserves accuracy while generalizing to new parameter regimes is a daunting task. Furthermore, due to the computationally expensive nature of forward models, state-of-the-art black-box ML methods would require an unrealistic amount of work to obtain an accurate surrogate model. On the other hand, standard reduced-order models (ROMs) accurately capture the presumably important physics of the forward model within the reduced subspaces, but can be inaccurate elsewhere. In this paper, we propose to enlarge the validity of ROMs, and hence improve their accuracy outside the reduced subspaces, by incorporating a data-driven ML technique. In particular, we focus on a goal-oriented approach that substantially improves the accuracy of reduced models by learning the error between the forward model outputs and the ROM outputs. Once an ML-enhanced ROM is constructed, it can accelerate the solution of many-query problems in parametrized forward and inverse settings. Numerical results for inverse problems governed by elliptic PDEs and parametrized neutron transport equations are presented to support our approach.
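
The core idea in the abstract, correcting a cheap reduced-order model with a learned model of its error relative to the expensive forward model, can be illustrated with a minimal sketch. The functions full_model and rom_model below are hypothetical toy stand-ins, not the PDE solvers or ROMs used in the paper, and scikit-learn's MLPRegressor stands in for the deep neural network; the sketch only demonstrates the train-on-the-discrepancy, correct-at-query-time pattern.

```python
# Minimal sketch of an ML-enhanced ROM, assuming toy stand-ins for the models.
# full_model() plays the expensive forward solve and rom_model() its reduced-order
# approximation; neither corresponds to the solvers used in the paper.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

def full_model(theta):
    """Toy stand-in for the expensive forward model (e.g. a PDE solve)."""
    return np.sin(theta) + 0.1 * theta**2

def rom_model(theta):
    """Toy stand-in for a ROM that misses part of the physics (the quadratic term)."""
    return np.sin(theta)

# 1. Sample training parameters and record the ROM error against the full model.
theta_train = rng.uniform(-3.0, 3.0, size=(200, 1))
error_train = (full_model(theta_train) - rom_model(theta_train)).ravel()

# 2. Learn the error as a function of the parameters (here a small MLP).
error_net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0)
error_net.fit(theta_train, error_train)

# 3. At query time, the corrected surrogate is the ROM output plus the predicted error.
theta_test = np.linspace(-3.0, 3.0, 50).reshape(-1, 1)
corrected = rom_model(theta_test).ravel() + error_net.predict(theta_test)
print("max |full - corrected|:", np.abs(full_model(theta_test).ravel() - corrected).max())
```

In this setting, a many-query inverse solve would call the corrected surrogate instead of the full model; the forward solves needed to generate the training discrepancies are paid once, up front.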

Bibliographic Details
Published in: arXiv.org, 2019-12
Main authors: Sheriffdeen, Sheroze; Ragusa, Jean C; Morel, Jim E; Adams, Marvin L; Bui-Thanh, Tan
Format: Article
Language: English
Online access: Full text
EISSN: 2331-8422
Source: Freely Accessible Journals
Subjects: Accuracy; Computer simulation; Deep learning; Inverse problems; Machine learning; Mathematical models; Model accuracy; Neural networks; Parameterization; Partial differential equations; Reduced order models; Subspaces; Transport equations
URL: https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-25T04%3A51%3A41IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest&rft_val_fmt=info:ofi/fmt:kev:mtx:book&rft.genre=document&rft.atitle=Accelerating%20PDE-constrained%20Inverse%20Solutions%20with%20Deep%20Learning%20and%20Reduced%20Order%20Models&rft.jtitle=arXiv.org&rft.au=Sheriffdeen,%20Sheroze&rft.date=2019-12-17&rft.eissn=2331-8422&rft_id=info:doi/&rft_dat=%3Cproquest%3E2328982684%3C/proquest%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2328982684&rft_id=info:pmid/&rfr_iscdi=true