A Framework for Developing Systematic Testbeds for Multifidelity Optimization Techniques

Multifidelity (MF) models abound in simulation-based engineering. Many MF strategies have been proposed to improve efficiency in engineering processes, especially in design optimization. When it comes to assessing the performance of MF optimization techniques, existing practice usually relies on test cases involving contrived MF models of seemingly random math functions, due to limited access to real-world MF models. While it is acceptable to use contrived MF models, these models are often manually written up rather than created in a systematic manner. This gives rise to the potential pitfall that the test MF models may not be representative of general scenarios. We propose a framework to generate test MF models systematically and characterize MF optimization techniques' performances comprehensively. In our framework, the MF models are generated based on given high-fidelity (HF) models and come with two parameters to control their fidelity levels and allow model randomization. In our testing process, MF case problems are systematically formulated using our model creation method. Running the given MF optimization technique on these problems produces what we call "savings curves" that characterize the technique's performance similarly to how receiver operating characteristic (ROC) curves characterize machine learning classifiers. Our test results also allow plotting "optimality curves" that serve similar functionality to savings curves in certain types of problems. The flexibility of our MF model creation facilitates the development of testing processes for general MF problem scenarios, and our framework can be easily extended to MF applications other than optimization.
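The core mechanism in the abstract is that low-fidelity (LF) test models are derived from a given HF model through two parameters: one setting the fidelity level and one randomizing the model. The paper's exact construction is not reproduced in this record, so the following Python sketch is only illustrative of the idea: it assumes the LF model is formed by adding to the HF model a random smooth discrepancy whose magnitude shrinks as the fidelity parameter approaches 1. The name make_lf_model, the sinusoidal discrepancy, and the amplitude argument are all assumptions for this sketch, not the authors' formulation.

```python
import numpy as np

def make_lf_model(hf_model, fidelity, seed, n_terms=5, amplitude=1.0):
    """Generate a synthetic low-fidelity variant of a given HF model.

    Two parameters mirror the framework's design (hypothetical form):
    - fidelity in (0, 1]: 1.0 reproduces the HF model exactly; smaller
      values inject a larger discrepancy, i.e., a lower-fidelity model.
    - seed: randomizes the discrepancy so many distinct test models can
      be drawn from a single HF model.
    """
    rng = np.random.default_rng(seed)
    # Random smooth discrepancy: a short sum of sinusoids with random
    # frequencies, phases, and weights, scaled by (1 - fidelity).
    freqs = rng.uniform(0.5, 3.0, size=n_terms)
    phases = rng.uniform(0.0, 2.0 * np.pi, size=n_terms)
    weights = rng.normal(size=n_terms)

    def lf_model(x):
        x = np.atleast_1d(np.asarray(x, dtype=float))
        s = float(x.sum())
        discrepancy = float(np.sum(weights * np.sin(freqs * s + phases)))
        return hf_model(x) + amplitude * (1.0 - fidelity) * discrepancy

    return lf_model

# Example: three LF variants of a stand-in HF test function.
hf = lambda x: float(np.sum(np.asarray(x) ** 2))
lf_a = make_lf_model(hf, fidelity=0.9, seed=0)  # close to HF
lf_b = make_lf_model(hf, fidelity=0.5, seed=1)  # mid fidelity
lf_c = make_lf_model(hf, fidelity=0.5, seed=2)  # same fidelity, new draw
print(hf([1.0, 1.0]), lf_a([1.0, 1.0]), lf_b([1.0, 1.0]), lf_c([1.0, 1.0]))
```

Sweeping fidelity and seed over a generator like this yields a family of systematically formulated test problems; recording an MF optimizer's cost savings over an HF-only baseline across that family is one plausible way to assemble the "savings curves" the abstract describes.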

Bibliographic Details
Published in: Journal of Verification, Validation, and Uncertainty Quantification, 2024-06, Vol. 9 (2)
Main Authors: Tao, Siyu; Sharma, Chaitra; Devanathan, Srikanth
Format: Article
Language: English
Publisher: ASME
ISSN: 2377-2158
EISSN: 2377-2166
DOI: 10.1115/1.4065719
Source: ASME Transactions Journals (Current)
Online Access: Full text