Universal Physics Transformers: A Framework For Efficiently Scaling Neural Operators

Neural operators, serving as physics surrogate models, have recently gained increased interest. With ever-increasing problem complexity, the natural question arises: what is an efficient way to scale neural operators to larger and more complex simulations, most importantly by taking into account different types of simulation datasets? This is of special interest since, akin to their numerical counterparts, different techniques are used across applications, even if the underlying dynamics of the systems are similar. Whereas the flexibility of transformers has enabled unified architectures across domains, neural operators mostly follow a problem-specific design, where GNNs are commonly used for Lagrangian simulations and grid-based models predominate in Eulerian simulations. We introduce Universal Physics Transformers (UPTs), an efficient and unified learning paradigm for a wide range of spatio-temporal problems. UPTs operate without grid- or particle-based latent structures, enabling flexibility and scalability across meshes and particles. UPTs efficiently propagate dynamics in the latent space, emphasized by inverse encoding and decoding techniques. Finally, UPTs allow for queries of the latent space representation at any point in space-time. We demonstrate the diverse applicability and efficacy of UPTs in mesh-based fluid simulations, steady-state Reynolds-averaged Navier-Stokes simulations, and Lagrangian-based dynamics.
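
The abstract describes a three-stage design: encode an arbitrary mesh or particle discretization into a fixed set of latent tokens, propagate the dynamics entirely in that latent space, and decode by querying the latent representation at arbitrary space-time points. The Python sketch below only illustrates that interface under assumptions made here; the class name UPTSketch, the use of learned latent tokens with cross-attention pooling and querying, and all sizes are hypothetical and are not the authors' implementation (this record links to the paper, not to code).

# A minimal sketch, not the authors' implementation: it only illustrates the
# encode / propagate / any-point-query interface described in the abstract.
import torch
import torch.nn as nn

class UPTSketch(nn.Module):
    def __init__(self, in_dim=3, out_dim=3, num_latents=64, dim=128, heads=4):
        super().__init__()
        # Fixed number of learned latent tokens: the latent space is neither a
        # grid nor tied to the number of input particles or mesh cells.
        self.latents = nn.Parameter(torch.randn(num_latents, dim))
        self.embed = nn.Linear(in_dim + 3, dim)           # point features + xyz
        self.encode_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        # Latent-space propagator: a small transformer that advances one step.
        self.propagate = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(dim, heads, dim * 4, batch_first=True),
            num_layers=2,
        )
        self.query_embed = nn.Linear(4, dim)              # query point (x, y, z, t)
        self.decode_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.head = nn.Linear(dim, out_dim)

    def encode(self, pos, feat):
        # pos: (B, N, 3), feat: (B, N, in_dim); N may vary between samples of a
        # mesh or particle system, while the latent size stays fixed.
        tokens = self.embed(torch.cat([feat, pos], dim=-1))
        latents = self.latents.expand(pos.shape[0], -1, -1)
        z, _ = self.encode_attn(latents, tokens, tokens)  # pool points into latents
        return z

    def step(self, z):
        # Advance the dynamics entirely in the compressed latent space.
        return self.propagate(z)

    def query(self, z, points):
        # points: (B, Q, 4) -- evaluate the decoded field at arbitrary
        # space-time locations, independent of the input discretization.
        q = self.query_embed(points)
        out, _ = self.decode_attn(q, z, z)
        return self.head(out)

if __name__ == "__main__":
    model = UPTSketch()
    pos = torch.rand(2, 500, 3)    # 500 mesh points or particles per sample
    feat = torch.rand(2, 500, 3)   # e.g. a velocity vector per point
    z = model.step(model.encode(pos, feat))
    print(model.query(z, torch.rand(2, 10, 4)).shape)  # torch.Size([2, 10, 3])

In this sketch the cost of the propagation step depends only on the fixed number of latent tokens, not on the number of input points, which mirrors the scalability argument the abstract makes.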

Bibliographic details
Main authors: Alkin, Benedikt; Fürst, Andreas; Schmid, Simon; Gruber, Lukas; Holzleitner, Markus; Brandstetter, Johannes
Format: Article
Language: English
Subjects: Computer Science - Artificial Intelligence; Computer Science - Learning; Physics - Fluid Dynamics
creator Alkin, Benedikt
Fürst, Andreas
Schmid, Simon
Gruber, Lukas
Holzleitner, Markus
Brandstetter, Johannes
description Neural operators, serving as physics surrogate models, have recently gained increased interest. With ever-increasing problem complexity, the natural question arises: what is an efficient way to scale neural operators to larger and more complex simulations, most importantly by taking into account different types of simulation datasets? This is of special interest since, akin to their numerical counterparts, different techniques are used across applications, even if the underlying dynamics of the systems are similar. Whereas the flexibility of transformers has enabled unified architectures across domains, neural operators mostly follow a problem-specific design, where GNNs are commonly used for Lagrangian simulations and grid-based models predominate in Eulerian simulations. We introduce Universal Physics Transformers (UPTs), an efficient and unified learning paradigm for a wide range of spatio-temporal problems. UPTs operate without grid- or particle-based latent structures, enabling flexibility and scalability across meshes and particles. UPTs efficiently propagate dynamics in the latent space, emphasized by inverse encoding and decoding techniques. Finally, UPTs allow for queries of the latent space representation at any point in space-time. We demonstrate the diverse applicability and efficacy of UPTs in mesh-based fluid simulations, steady-state Reynolds-averaged Navier-Stokes simulations, and Lagrangian-based dynamics.
doi_str_mv 10.48550/arxiv.2402.12365
format Article
creationdate 2024-02-19
linktorsrc https://arxiv.org/abs/2402.12365
rights http://creativecommons.org/licenses/by/4.0
fulltext fulltext_linktorsrc
identifier DOI: 10.48550/arxiv.2402.12365
language eng
recordid cdi_arxiv_primary_2402_12365
source arXiv.org
subjects Computer Science - Artificial Intelligence
Computer Science - Learning
Physics - Fluid Dynamics
title Universal Physics Transformers: A Framework For Efficiently Scaling Neural Operators
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-29T18%3A15%3A12IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-arxiv_GOX&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Universal%20Physics%20Transformers:%20A%20Framework%20For%20Efficiently%20Scaling%20Neural%20Operators&rft.au=Alkin,%20Benedikt&rft.date=2024-02-19&rft_id=info:doi/10.48550/arxiv.2402.12365&rft_dat=%3Carxiv_GOX%3E2402_12365%3C/arxiv_GOX%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true