Transformer for Partial Differential Equations' Operator Learning
Data-driven learning of partial differential equations' solution operators has recently emerged as a promising paradigm for approximating the underlying solutions. The solution operators are usually parameterized by deep learning models that are built upon problem-specific inductive biases. An example is a convolutional or a graph neural network that exploits the local grid structure where functions' values are sampled. The attention mechanism, on the other hand, provides a flexible way to implicitly exploit the patterns within inputs, and furthermore, the relationship between arbitrary query locations and inputs. In this work, we present an attention-based framework for data-driven operator learning, which we term Operator Transformer (OFormer). Our framework is built upon self-attention, cross-attention, and a set of point-wise multilayer perceptrons (MLPs), and thus it makes few assumptions on the sampling pattern of the input function or query locations. We show that the proposed framework is competitive on standard benchmark problems and can flexibly be adapted to randomly sampled input.
Saved in:
Published in: | arXiv.org, 2023-04 |
---|---|
Main authors: | Li, Zijie; Meidani, Kazem; Amir Barati Farimani |
Format: | Article |
Language: | eng |
Subjects: | Deep learning; Graph neural networks; Multilayer perceptrons; Operators (mathematics); Partial differential equations; Transformers |
Online access: | Full text |
creator | Li, Zijie; Meidani, Kazem; Amir Barati Farimani |
description | Data-driven learning of partial differential equations' solution operators has recently emerged as a promising paradigm for approximating the underlying solutions. The solution operators are usually parameterized by deep learning models that are built upon problem-specific inductive biases. An example is a convolutional or a graph neural network that exploits the local grid structure where functions' values are sampled. The attention mechanism, on the other hand, provides a flexible way to implicitly exploit the patterns within inputs, and furthermore, relationship between arbitrary query locations and inputs. In this work, we present an attention-based framework for data-driven operator learning, which we term Operator Transformer (OFormer). Our framework is built upon self-attention, cross-attention, and a set of point-wise multilayer perceptrons (MLPs), and thus it makes few assumptions on the sampling pattern of the input function or query locations. We show that the proposed framework is competitive on standard benchmark problems and can flexibly be adapted to randomly sampled input. |
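The description above explains that OFormer combines point-wise MLPs with self- and cross-attention so that the solution can be queried at arbitrary locations, independent of how the input function was sampled. As a rough illustration only (not the authors' code, and not necessarily the exact OFormer architecture), the following PyTorch sketch shows one way query coordinates can cross-attend to encoded input-function samples; all class, function, and parameter names here are hypothetical.

```python
# Hypothetical sketch: query coordinates attend to sampled input-function values
# via cross-attention, so no fixed grid is assumed for inputs or queries.
import torch
import torch.nn as nn


class CrossAttentionDecoder(nn.Module):
    def __init__(self, coord_dim=2, value_dim=1, hidden_dim=64, num_heads=4):
        super().__init__()
        # Point-wise MLPs lift raw coordinates / function values into a shared latent space.
        self.query_mlp = nn.Sequential(
            nn.Linear(coord_dim, hidden_dim), nn.GELU(), nn.Linear(hidden_dim, hidden_dim)
        )
        self.input_mlp = nn.Sequential(
            nn.Linear(coord_dim + value_dim, hidden_dim), nn.GELU(), nn.Linear(hidden_dim, hidden_dim)
        )
        # Cross-attention: each query location gathers information from the input samples.
        self.cross_attn = nn.MultiheadAttention(hidden_dim, num_heads, batch_first=True)
        self.out_mlp = nn.Linear(hidden_dim, value_dim)

    def forward(self, query_coords, input_coords, input_values):
        # query_coords: (B, Nq, coord_dim); input_coords: (B, Ni, coord_dim); input_values: (B, Ni, value_dim)
        q = self.query_mlp(query_coords)
        kv = self.input_mlp(torch.cat([input_coords, input_values], dim=-1))
        attended, _ = self.cross_attn(q, kv, kv)
        return self.out_mlp(attended)  # predicted solution value at each query location


# Usage: 128 randomly sampled input points, solution requested at 50 arbitrary locations.
model = CrossAttentionDecoder()
u = model(torch.rand(8, 50, 2), torch.rand(8, 128, 2), torch.randn(8, 128, 1))
print(u.shape)  # torch.Size([8, 50, 1])
```

Because the encoder and decoder operate point-wise and through attention, the number and placement of input samples and query points can vary freely between calls, which mirrors the abstract's claim that the framework makes few assumptions about the sampling pattern.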
format | Article |
fulltext | fulltext |
identifier | EISSN: 2331-8422 |
ispartof | arXiv.org, 2023-04 |
issn | 2331-8422 |
language | eng |
recordid | cdi_proquest_journals_2671447644 |
source | Free E-Journals |
subjects | Deep learning; Graph neural networks; Multilayer perceptrons; Operators (mathematics); Partial differential equations; Transformers |
title | Transformer for Partial Differential Equations' Operator Learning |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-02-01T00%3A26%3A44IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest&rft_val_fmt=info:ofi/fmt:kev:mtx:book&rft.genre=document&rft.atitle=Transformer%20for%20Partial%20Differential%20Equations'%20Operator%20Learning&rft.jtitle=arXiv.org&rft.au=Li,%20Zijie&rft.date=2023-04-27&rft.eissn=2331-8422&rft_id=info:doi/&rft_dat=%3Cproquest%3E2671447644%3C/proquest%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2671447644&rft_id=info:pmid/&rfr_iscdi=true |