Data-driven modeling of coarse mesh turbulence for reactor transient analysis using convolutional recurrent neural networks
Published in: Nuclear engineering and design, 2022-04, Vol. 390, p. 111716, Article 111716
Format: Article
Language: English
Online access: Full text
Highlights:
- Data-driven coarse-mesh turbulence model based on deep neural networks that can learn from high-resolution CFD data.
- The proposed Dense-CNN/LSTM architecture can efficiently learn spatial-temporal information from transient CFD results.
- Good agreement observed between model predictions and testing CFD data in a reactor loss-of-flow transient case study.
- The model's generalization capability was evaluated by exploring intrinsic data similarity.
Advanced nuclear reactors often exhibit complex thermal-fluid phenomena during transients. To accurately capture such phenomena, a coarse-mesh three-dimensional (3-D) modeling capability is desired for modern nuclear-system codes. In coarse-mesh 3-D modeling of advanced-reactor transients that involve flow and heat transfer, accurately predicting the turbulent viscosity is a challenging task that requires an accurate yet computationally efficient model to capture the unresolved fine-scale turbulence.
In this paper, we propose a data-driven coarse-mesh turbulence model based on local flow features for the transient analysis of thermal mixing and stratification in a sodium-cooled fast reactor. The model uses a coarse-mesh setup to ensure computational efficiency, while it is trained on fine-mesh computational fluid dynamics (CFD) data to ensure accuracy. A novel neural network architecture, combining a densely connected convolutional network and a long short-term memory (LSTM) network, is developed that can efficiently learn from the spatial-temporal CFD transient simulation results. The neural network model was trained and optimized on a loss-of-flow transient and demonstrated high accuracy in predicting the turbulent viscosity field throughout the whole transient. The trained model's generalization capability was also investigated on two other transients with different inlet conditions. The study demonstrates the potential of the proposed data-driven approach to support coarse-mesh multi-dimensional modeling of advanced reactors.
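The abstract's core idea, combining dense (DenseNet-style) feature extraction over per-cell local flow features with an LSTM that carries temporal memory across the transient, can be illustrated with a minimal numpy sketch. This is not the paper's architecture: the layer sizes, the number of cells and features, and all names (`dense_block`, `lstm_step`, the growth rate, the hidden width) are hypothetical, and the per-cell linear maps stand in for the actual convolutions only to show the dense-connectivity and recurrence pattern.

```python
import numpy as np

rng = np.random.default_rng(0)

def dense_block(x, weights):
    # DenseNet idea: each layer sees the concatenation of ALL earlier
    # feature maps; per-cell linear maps replace real conv kernels here.
    feats = [x]
    for W in weights:
        h = np.concatenate(feats, axis=-1)   # (cells, channels_so_far)
        feats.append(np.tanh(h @ W))         # add `growth` new channels
    return np.concatenate(feats, axis=-1)

def lstm_step(x, h, c, Wx, Wh, b):
    # One standard LSTM cell step; gates packed as [input, forget, output, cand].
    z = x @ Wx + h @ Wh + b
    H = h.shape[-1]
    sig = lambda a: 1.0 / (1.0 + np.exp(-a))
    i, f, o, g = z[..., :H], z[..., H:2*H], z[..., 2*H:3*H], z[..., 3*H:]
    c = sig(f) * c + sig(i) * np.tanh(g)
    h = sig(o) * np.tanh(c)
    return h, c

# Hypothetical sizes: 64 coarse cells, 4 local flow features, 5 time steps.
cells, feat, T, H, growth = 64, 4, 5, 8, 6
dense_ws = [rng.normal(0, 0.1, (feat + k * growth, growth)) for k in range(2)]
d_out = feat + 2 * growth                    # channels after the dense block
Wx = rng.normal(0, 0.1, (d_out, 4 * H))
Wh = rng.normal(0, 0.1, (H, 4 * H))
b = np.zeros(4 * H)
W_out = rng.normal(0, 0.1, (H, 1))

x_seq = rng.normal(size=(T, cells, feat))    # transient of local flow features
h = np.zeros((cells, H))
c = np.zeros((cells, H))
for t in range(T):
    z = dense_block(x_seq[t], dense_ws)      # spatial features per coarse cell
    h, c = lstm_step(z, h, c, Wx, Wh, b)     # temporal memory across the transient
nu_t = h @ W_out                             # one turbulent-viscosity value per cell
print(nu_t.shape)                            # (64, 1)
```

The design point the sketch captures is the split of roles: the dense block encodes per-time-step spatial information, while the LSTM state `(h, c)` carries history so the predicted viscosity at each step depends on the whole transient so far, not just the instantaneous flow field.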
ISSN: 0029-5493, 1872-759X
DOI: 10.1016/j.nucengdes.2022.111716