A Framework for Developing Systematic Testbeds for Multifidelity Optimization Techniques

Bibliographic Details
Published in: Journal of Verification, Validation and Uncertainty Quantification, 2024-06, Vol. 9 (2)
Main authors: Tao, Siyu; Sharma, Chaitra; Devanathan, Srikanth
Format: Article
Language: English
Online access: Full text
Description
Abstract: Multifidelity (MF) models abound in simulation-based engineering. Many MF strategies have been proposed to improve the efficiency of engineering processes, especially design optimization. When assessing the performance of MF optimization techniques, existing practice usually relies on test cases involving contrived MF models of seemingly arbitrary math functions, owing to limited access to real-world MF models. While using contrived MF models is acceptable, these models are often written by hand rather than created systematically, raising the risk that the test MF models are not representative of general scenarios. We propose a framework to generate test MF models systematically and to characterize the performance of MF optimization techniques comprehensively. In our framework, the MF models are generated from given high-fidelity (HF) models and come with two parameters that control their fidelity levels and allow model randomization. In our testing process, MF case problems are systematically formulated using our model creation method. Running a given MF optimization technique on these problems produces what we call “savings curves,” which characterize the technique's performance much as receiver operating characteristic (ROC) curves characterize machine learning classifiers. Our test results also allow plotting “optimality curves,” which serve a similar function to savings curves in certain types of problems. The flexibility of our MF model creation facilitates the development of testing processes for general MF problem scenarios, and our framework extends readily to MF applications other than optimization.
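To illustrate the kind of construction the abstract describes, the following is a minimal sketch of generating a low-fidelity (LF) test model from a given HF model. This is not the paper's actual method: the function name `make_lf_model`, the fidelity parameter `phi`, the `seed` argument, and the polynomial form of the discrepancy are all assumptions made for illustration; the paper's two parameters may be defined quite differently.

```python
import numpy as np

def make_lf_model(hf_model, phi, seed=0):
    """Illustrative LF-model generator (hypothetical construction).

    phi in [0, 1] controls fidelity: phi = 1 reproduces the HF model
    exactly, while smaller phi adds a larger random discrepancy.
    `seed` randomizes the discrepancy so that many distinct test
    models can be drawn from the same HF model.
    """
    rng = np.random.default_rng(seed)
    # Random low-order polynomial discrepancy, scaled by (1 - phi).
    coeffs = rng.normal(size=3)

    def lf_model(x):
        x = np.asarray(x, dtype=float)
        discrepancy = coeffs[0] + coeffs[1] * x + coeffs[2] * x**2
        return hf_model(x) + (1.0 - phi) * discrepancy

    return lf_model

# Example HF model: a simple quadratic with its minimum at x = 1.
hf = lambda x: (x - 1.0) ** 2

lf_exact = make_lf_model(hf, phi=1.0, seed=42)   # phi = 1: identical to HF
lf_coarse = make_lf_model(hf, phi=0.2, seed=42)  # low fidelity, large discrepancy
```

Under this sketch, sweeping `phi` and `seed` over many values yields a family of test problems of varying difficulty, which is the spirit of the systematic test-bed generation the abstract proposes.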
ISSN: 2377-2158
eISSN: 2377-2166
DOI: 10.1115/1.4065719