Early experiences on the OLCF Frontier system with AthenaPK and Parthenon‐Hydro
Published in: Concurrency and Computation: Practice and Experience, 2024-06, Vol. 36 (13)
Format: Article
Language: English
Online access: Full text
Abstract:
The Oak Ridge Leadership Computing Facility (OLCF) has been preparing the nation's first exascale system, Frontier, for production and end users. Frontier is based on HPE Cray's new EX architecture and Slingshot interconnect and features 74 cabinets of 3rd Gen AMD EPYC CPUs optimized for HPC and AI, paired with AMD Instinct MI250X accelerators. As part of this preparation, "real-world" user codes were selected to help assess the functionality, performance, and usability of the system. This article describes early experiences using the system in collaboration with the Hamburg Observatory for two selected codes, which have since been adopted in the OLCF test harness. Experiences discussed include efforts to resolve performance variability and per-cycle slowdowns. Results are shown for a performance-portable astrophysical magnetohydrodynamics code, AthenaPK, and a mini-application stressing the core functionality of a performance-portable block-structured adaptive mesh refinement framework, Parthenon-Hydro. These results show good scaling characteristics up to the full system. At the largest scale, the Parthenon-Hydro miniapp reaches a total of 1.7×10¹³ zone-cycles/s on 9216 nodes (73,728 logical GPUs) at ≈92% weak-scaling parallel efficiency (starting from a single node, using a second-order finite-volume method).
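
As a quick illustration of the headline numbers, here is a minimal sketch of the weak-scaling efficiency arithmetic implied by the abstract. The single-node baseline rate below is back-computed from the reported figures and is an assumption, not a number taken from the paper.

    # Weak-scaling parallel efficiency, E(N) = (R(N)/N) / R(1),
    # evaluated with the figures reported in the abstract.
    total_rate = 1.7e13      # zone-cycles/s on the full system (reported)
    nodes = 9216             # 8 logical GPUs per node -> 73,728 GPUs
    per_node_rate = total_rate / nodes  # ~1.84e9 zone-cycles/s per node
    single_node_rate = 2.0e9 # assumed baseline implied by ~92% efficiency

    efficiency = per_node_rate / single_node_rate
    print(f"weak-scaling parallel efficiency ~ {efficiency:.1%}")  # ~92.2%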
ISSN: 1532-0626, 1532-0634
DOI: 10.1002/cpe.8069