Accelerating the Inference of the Exa.TrkX Pipeline

Bibliographic Details
Published in: arXiv.org 2022-02
Main Authors: Lazar, Alina; Ju, Xiangyang; Murnane, Daniel; Calafiura, Paolo; Farrell, Steven; Xu, Yaoyuan; Spiropulu, Maria; Vlimant, Jean-Roch; Cerati, Giuseppe; Gray, Lindsey; Klijnsma, Thomas; Kowalkowski, Jim; Atkinson, Markus; Neubauer, Mark; DeZoort, Gage; Thais, Savannah; Hsu, Shih-Chieh; Aurisano, Adam; Hewes, V; Ballow, Alexandra; Acharya, Nirajan; Wang, Chun-yi; Liu, Emma; Lucas, Alberto
Format: Article
Language: English
Subjects:
Online Access: Full text
Description
Summary: Recently, graph neural networks (GNNs) have been used successfully for a variety of particle reconstruction problems in high energy physics, including particle tracking. The GNN-based Exa.TrkX pipeline has demonstrated promising performance in reconstructing particle tracks in dense environments. It comprises five discrete steps: data encoding, graph building, edge filtering, GNN, and track labeling. All steps were written in Python and run on both GPUs and CPUs. In this work, we accelerate the Python implementation of the pipeline through customized and commercial GPU-enabled software libraries, and develop a C++ implementation for running inference with the pipeline. The implementation features an improved, CUDA-enabled fixed-radius nearest-neighbor search for graph building and a weakly connected component graph algorithm for track labeling. The GNNs and other trained deep learning models are converted to ONNX and run through the ONNX Runtime C++ API. The complete C++ implementation of the pipeline allows integration with existing tracking software. We report the memory usage, the average event latency, and the tracking performance of our implementation applied to the TrackML benchmark dataset.
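To make the last two stages more concrete, the sketch below (not the authors' code) illustrates how a converted edge-scoring model might be run through the ONNX Runtime C++ API and how the surviving edges can then be grouped into track candidates by labeling weakly connected components with a union-find. The model file name, tensor names, input layout, and the 0.5 score cut are illustrative assumptions, not details taken from the paper.

    // Sketch only: score graph edges with an ONNX model, then label
    // weakly connected components of high-score edges as track candidates.
    // Model path, tensor names, and the score cut are assumptions.
    #include <onnxruntime_cxx_api.h>
    #include <cstdint>
    #include <iostream>
    #include <numeric>
    #include <utility>
    #include <vector>

    // Minimal union-find with path compression for component labeling.
    struct UnionFind {
      std::vector<int> parent;
      explicit UnionFind(int n) : parent(n) { std::iota(parent.begin(), parent.end(), 0); }
      int find(int x) { return parent[x] == x ? x : parent[x] = find(parent[x]); }
      void unite(int a, int b) { parent[find(a)] = find(b); }
    };

    // Hits connected by edges whose score exceeds the cut share a label.
    std::vector<int> label_tracks(int num_hits,
                                  const std::vector<std::pair<int, int>>& edges,
                                  const std::vector<float>& scores,
                                  float score_cut) {
      UnionFind uf(num_hits);
      for (std::size_t i = 0; i < edges.size(); ++i)
        if (scores[i] > score_cut) uf.unite(edges[i].first, edges[i].second);
      std::vector<int> labels(num_hits);
      for (int i = 0; i < num_hits; ++i) labels[i] = uf.find(i);
      return labels;
    }

    int main() {
      // Toy event: hit features flattened as [num_hits x num_features].
      const int64_t num_hits = 4, num_features = 3;
      std::vector<float> node_features(num_hits * num_features, 0.f);
      std::vector<std::pair<int, int>> edges = {{0, 1}, {1, 2}, {2, 3}};

      Ort::Env env(ORT_LOGGING_LEVEL_WARNING, "exatrkx-sketch");
      Ort::SessionOptions opts;
      Ort::Session session(env, "edge_classifier.onnx", opts);  // assumed model file

      Ort::MemoryInfo mem = Ort::MemoryInfo::CreateCpu(OrtArenaAllocator, OrtMemTypeDefault);

      std::vector<int64_t> node_shape{num_hits, num_features};
      Ort::Value node_tensor = Ort::Value::CreateTensor<float>(
          mem, node_features.data(), node_features.size(),
          node_shape.data(), node_shape.size());

      // Edge index as a [2 x num_edges] int64 tensor (a common graph layout).
      std::vector<int64_t> edge_index;
      for (const auto& e : edges) edge_index.push_back(e.first);
      for (const auto& e : edges) edge_index.push_back(e.second);
      std::vector<int64_t> edge_shape{2, static_cast<int64_t>(edges.size())};
      Ort::Value edge_tensor = Ort::Value::CreateTensor<int64_t>(
          mem, edge_index.data(), edge_index.size(),
          edge_shape.data(), edge_shape.size());

      const char* input_names[] = {"x", "edge_index"};  // assumed tensor names
      const char* output_names[] = {"edge_scores"};
      std::vector<Ort::Value> inputs;
      inputs.push_back(std::move(node_tensor));
      inputs.push_back(std::move(edge_tensor));

      auto outputs = session.Run(Ort::RunOptions{nullptr}, input_names, inputs.data(),
                                 inputs.size(), output_names, 1);
      const float* scores = outputs[0].GetTensorData<float>();
      std::vector<float> edge_scores(scores, scores + edges.size());

      // Group hits into track candidates; hits with the same label form one candidate.
      std::vector<int> labels = label_tracks(num_hits, edges, edge_scores, 0.5f);
      std::cout << "label of hit 3: " << labels[3] << "\n";
      return 0;
    }

In the actual pipeline the candidate edges come from the fixed-radius nearest-neighbor graph construction and are pruned by the filter network before the GNN; here a hand-made toy graph stands in for that input.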
ISSN:2331-8422
DOI:10.48550/arxiv.2202.06929