HPC challenges and opportunities of industrial-scale reactive fluidized bed simulation using meshes of several billion cells on the route to Exascale

Bibliographic details
Published in: Powder Technology, 2024-08, Vol. 444, p. 120018, Article 120018
Main authors: Neau, Hervé; Ansart, Renaud; Baudry, Cyril; Fournier, Yvan; Mérigoux, Nicolas; Koren, Chaï; Laviéville, Jérome; Renon, Nicolas; Simonin, Olivier
Format: Article
Language: English
Online access: Full text
Description
Abstract: Inside fluidized bed reactors, gas–solid flows are very complex: multi-scale, coupled, reactive, turbulent and unsteady. Accounting for them in an Euler n-fluid framework makes numerical simulations very expensive at academic scales and even more so at industrial scales. 3D numerical simulations of gas–particle fluidized beds at industrial scale are limited by the High Performance Computing (HPC) capabilities of Computational Fluid Dynamics (CFD) software and by the available computational power. In recent years, pre-Exascale supercomputers have come into operation with better energy efficiency and continuously increasing computational resources. The present article is a direct continuation of previous work, Neau et al. (2020), which demonstrated the feasibility of a massively parallel simulation of an industrial-scale polydispersed fluidized-bed reactor with a mesh of 1 billion cells. Since then, we have pushed simulations of these systems to their limits by performing large-scale computations on even more recent and powerful supercomputers, once again using up to the entirety of these machines (up to 286,000 cores). We used the same fluidized bed reactor but with more refined unstructured meshes of 8 and 64 billion cells. This article focuses on the efficiency and performance of the neptune_cfd code (based on the Euler n-fluid approach) measured on several supercomputers with meshes of 1, 8 and 64 billion cells. It presents sensitivity studies conducted to improve HPC performance at these very large scales. On the basis of these highly refined simulations of industrial-scale systems using pre-Exascale supercomputers with neptune_cfd, we defined the upper limits of the simulations we can manage efficiently in terms of mesh size, number of MPI processes and simulation time. One-billion-cell computations are the most refined computations usable for production. Eight-billion-cell computations perform well up to 60,000 cores from an HPC point of view, with an efficiency above 85%, but are still very expensive: restart and mesh files are very large, post-processing is complicated and data management becomes nearly impossible. 64-billion-cell computations go beyond every limit: solver, supercomputer, MPI, file size, post-processing and data management; for these reasons, we barely managed to execute more than a few iterations. Over the last 30 years, the HPC capabilities of neptune_cfd have improved exponentially by tracking hardware evolution and by implementing state-of-the-art techniques for parallelism.
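The efficiency figure quoted for the 8-billion-cell case can be read as a strong-scaling parallel efficiency. A minimal sketch of that common definition follows, assuming T(p) denotes the measured wall-clock time per simulated interval on p cores and p_ref a reference core count; this notation is an assumption, as the record does not give the paper's exact metric:

\[
  S(p) = \frac{T(p_{\mathrm{ref}})}{T(p)}, \qquad
  E(p) = \frac{p_{\mathrm{ref}}}{p}\, S(p) = \frac{p_{\mathrm{ref}}\, T(p_{\mathrm{ref}})}{p\, T(p)}
\]

Under this reading, E(p) > 0.85 at 60,000 cores means the code retains at least 85% of its per-core throughput relative to the reference run.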
ISSN: 0032-5910, 1873-328X
DOI: 10.1016/j.powtec.2024.120018