Communication-overlap techniques for improved strong scaling of gyrokinetic Eulerian code beyond 100k cores on the K-computer

Bibliographic details
Published in: The International Journal of High Performance Computing Applications, 2014-02, Vol. 28 (1), pp. 73-86
Authors: Idomura, Yasuhiro; Nakata, Motoki; Yamada, Susumu; Machida, Masahiko; Imamura, Toshiyuki; Watanabe, Tomohiko; Nunami, Masanori; Inoue, Hikaru; Tsutsumi, Shigenobu; Miyoshi, Ikuo; Shida, Naoyuki
Format: Article
Language: English
Online access: Full text
Abstract: Plasma turbulence research based on five-dimensional (5D) gyrokinetic simulations is one of the most critical and demanding issues in fusion science. Pioneering new physics regimes, both in problem size and in timescale, requires improved strong scaling. Overlapping computation with communication via non-blocking MPI schemes is a promising approach to improving strong scaling, but it often fails in practical applications with conventional MPI libraries. In this work, this classical issue is resolved by developing communication-overlap techniques based on additional MPI support for non-blocking communication routines and on heterogeneous OpenMP threads, which work even with conventional MPI libraries and network hardware. These techniques dramatically improved the parallel efficiency of the gyrokinetic toroidal 5D Eulerian code GT5D on the K-computer, which has a dedicated network, and on the Helios system, which has a commodity network. On the K-computer, excellent strong scaling was achieved beyond 100k cores while sustaining ~10% of peak performance (~307 TFlops using 196,608 cores), significantly accelerating simulations of next-generation large-scale fusion experiments. This is a 16× speedup over the maximum performance reported at the 2011 International Conference for High Performance Computing, Networking, Storage and Analysis (~19 TFlops using 16,384 cores of the BX900 cluster) (Idomura, 2011).
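
The abstract does not include code. As a hedged illustration of the second ingredient, heterogeneous OpenMP threads, the following minimal C/MPI sketch dedicates thread 0 to driving a non-blocking halo exchange (its blocking MPI_Waitall forces message progress inside a conventional MPI library) while the remaining threads update halo-independent interior points. Everything concrete here is an assumption made for illustration: the 1D periodic decomposition, the field size N, the stencil kernel, and the manual thread partitioning are not taken from GT5D.

/* Hedged sketch of computation/communication overlap via heterogeneous
 * OpenMP threads. NOT the GT5D implementation; all sizes and kernels
 * are illustrative placeholders. Compile: mpicc -fopenmp overlap.c */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>
#include <stdlib.h>

#define N 1024  /* local field size per rank (illustrative) */

int main(int argc, char **argv)
{
    int provided, rank, size;
    /* MPI_THREAD_FUNNELED suffices: only the initial thread (thread 0
       of the parallel region) makes MPI calls, which conventional MPI
       libraries support. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    if (provided < MPI_THREAD_FUNNELED) {
        fprintf(stderr, "insufficient MPI thread support\n");
        MPI_Abort(MPI_COMM_WORLD, 1);
    }
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* 1D field with one halo cell on each side. */
    double *u = calloc(N + 2, sizeof(double));
    double *v = calloc(N + 2, sizeof(double));
    for (int i = 1; i <= N; i++) u[i] = (double)rank;

    int left  = (rank - 1 + size) % size;  /* periodic neighbours */
    int right = (rank + 1) % size;

    #pragma omp parallel
    {
        int tid = omp_get_thread_num();
        int nth = omp_get_num_threads();

        if (tid == 0) {
            /* Communication thread: post non-blocking halo exchange;
               the blocking wait drives message progress inside the MPI
               library while the other threads compute. */
            MPI_Request req[4];
            MPI_Irecv(&u[0],     1, MPI_DOUBLE, left,  0, MPI_COMM_WORLD, &req[0]);
            MPI_Irecv(&u[N + 1], 1, MPI_DOUBLE, right, 1, MPI_COMM_WORLD, &req[1]);
            MPI_Isend(&u[1],     1, MPI_DOUBLE, left,  1, MPI_COMM_WORLD, &req[2]);
            MPI_Isend(&u[N],     1, MPI_DOUBLE, right, 0, MPI_COMM_WORLD, &req[3]);
            MPI_Waitall(4, req, MPI_STATUSES_IGNORE);
        } else if (nth > 1) {
            /* Worker threads: hand-partitioned loop over interior points
               (i = 2 .. N-1) that do not depend on the halo. A
               worksharing "omp for" cannot be used here because thread 0
               never reaches it. */
            int nw = nth - 1;
            int lo = 2 + (int)((long)(N - 2) * (tid - 1) / nw);
            int hi = 2 + (int)((long)(N - 2) * tid / nw);
            for (int i = lo; i < hi; i++)
                v[i] = 0.5 * (u[i - 1] + u[i + 1]);
        }

        /* Halo data is valid only after the communication thread finishes. */
        #pragma omp barrier
        #pragma omp single
        {
            v[1] = 0.5 * (u[0] + u[2]);           /* halo-dependent points */
            v[N] = 0.5 * (u[N - 1] + u[N + 1]);
        }
    }

    if (rank == 0) printf("overlapped halo exchange completed\n");
    free(u); free(v);
    MPI_Finalize();
    return 0;
}

The abstract's first ingredient, extended MPI support for non-blocking communication routines, is a library-level change that this user-level sketch does not model; in GT5D the overlap is applied to 5D halo data rather than the toy 1D stencil shown here.
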
ISSN: 1094-3420
eISSN: 1741-2846
DOI: 10.1177/1094342013490973