Online adaptive selection of appropriate learning functions with parallel infilling strategy for Kriging-based reliability analysis
Published in: Computers & Industrial Engineering, 2024-08, Vol. 194, Article 110361
Format: Article
Language: English
Online access: Full text
Abstract:
Highlights:
•The learning function is adaptively selected during the training process.
•Learning function selection is transformed into a multi-armed bandit problem.
•A novel parallel infilling strategy combining an influence function is explored.
•The influence function is used to estimate each sample's impact on the learning function.
•Adaptive, parallelizable allocation of learning functions is realized.
Adaptive Kriging surrogate modeling has been widely used in reliability analysis, and its core is the adaptive learning process. Previous studies of this process have two limitations: on the one hand, there is no standard for selecting an appropriate learning function from the many that have been developed; on the other hand, a learning function typically picks only one updating point per learning cycle, so parallel computing, which can speed up the analysis significantly, has not been fully exploited. This paper therefore develops a novel Kriging-based reliability analysis method that adaptively selects, online, among a variety of well-developed learning functions and couples them with a generalized parallel infilling strategy. First, the proposed method exploits the portfolio allocation strategy devised for the multi-armed bandit problem to choose an appropriate learning function in each learning iteration; the choice is made online according to a reward function that scores each learning function's past performance in placing points close to the limit state. Then, a generalized parallel infilling strategy based on an artificial correlation-based influence function is applied to the selected learning function to obtain a desired number of candidate points in one refining cycle without updating the Kriging surrogate model. Finally, five cases, including an engineering application to a large spaceborne deployable antenna, are studied to demonstrate the effectiveness of the proposed method.
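To make the workflow described in the abstract concrete (bandit-style selection of a learning function, then batch infilling from one fitted model), the following is a minimal, hypothetical sketch, not the authors' implementation. It assumes the classical U and EFF criteria as the two candidate learning functions ("arms"), a softmax portfolio rule over decaying cumulative rewards based on the predicted closeness of new points to the limit state g = 0, and a simple (1 − r) Kriging-correlation penalty standing in for the paper's artificial influence function; the limit-state function g and all numerical settings are invented for illustration.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

def g(x):
    """Hypothetical 2-D limit-state function (failure when g <= 0)."""
    return x[:, 0] ** 3 + x[:, 1] + 6.0

def u_score(mu, sigma):
    """Learning function U; smaller = closer to g = 0 and more uncertain."""
    return np.abs(mu) / np.maximum(sigma, 1e-12)

def eff_score(mu, sigma):
    """Expected feasibility function around g = 0; larger = more promising."""
    sigma = np.maximum(sigma, 1e-12)
    eps = 2.0 * sigma
    t0, tm, tp = -mu / sigma, (-eps - mu) / sigma, (eps - mu) / sigma
    return (mu * (2 * norm.cdf(t0) - norm.cdf(tm) - norm.cdf(tp))
            - sigma * (2 * norm.pdf(t0) - norm.pdf(tm) - norm.pdf(tp))
            + eps * (norm.cdf(tp) - norm.cdf(tm)))

ARMS = [("U", u_score, np.argmin), ("EFF", eff_score, np.argmax)]

def select_batch(gp, cand, arm, batch_size):
    """Pick a batch in one refining cycle without refitting the model.
    A correlation-based factor (1 - r) damps the predictive std near points
    already chosen -- a stand-in for the paper's influence function."""
    _, score_fn, argbest = arm
    mu, sigma = gp.predict(cand, return_std=True)
    chosen = []
    for _ in range(batch_size):
        idx = argbest(score_fn(mu, sigma))
        chosen.append(idx)
        r = gp.kernel_(cand, cand[idx:idx + 1]).ravel()  # Kriging correlation
        sigma = sigma * (1.0 - r)                        # penalize the neighborhood
    return chosen

# Adaptive loop: portfolio (softmax) allocation over the two arms.
X = rng.normal(size=(12, 2)); y = g(X)      # initial design of experiments
cand = rng.normal(size=(2000, 2))           # Monte Carlo candidate pool
rewards = np.zeros(len(ARMS))

for it in range(20):
    gp = GaussianProcessRegressor(kernel=RBF(1.0), normalize_y=True).fit(X, y)
    p = np.exp(rewards - rewards.max()); p /= p.sum()   # arm probabilities
    k = rng.choice(len(ARMS), p=p)                      # sample a learning function
    idx = select_batch(gp, cand, ARMS[k], batch_size=3)
    mu_new, _ = gp.predict(cand[idx], return_std=True)
    rewards *= 0.9                                      # forget stale performance
    rewards[k] += np.exp(-np.abs(mu_new).mean())        # reward closeness to g = 0
    X = np.vstack([X, cand[idx]]); y = np.append(y, g(cand[idx]))

mu_all, _ = gp.predict(cand, return_std=True)
print("estimated failure probability:", np.mean(mu_all <= 0))
```

The design point mirrored here is that `select_batch` assembles the whole batch from a single fitted surrogate by damping the predictive standard deviation near already-chosen points, so the expensive limit-state evaluations of the batch can then run in parallel.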
ISSN: 0360-8352, 1879-0550
DOI: 10.1016/j.cie.2024.110361