Ultrafast processing of pixel detector data with machine learning frameworks

Bibliographic Details
Main authors: Blaj, G., Chang, C.-E., Kenney, C. J.
Format: Conference proceedings
Language: English
Online access: Full text

Abstract: Modern photon science performed at high repetition rate free-electron laser (FEL) facilities and beyond relies on 2D pixel detectors operating at increasing frequencies (towards 100 kHz at LCLS-II) and producing rapidly increasing amounts of data (towards TB/s). This data must be rapidly stored for offline analysis and summarized in real time for online feedback to the scientists. While at LCLS all raw data has been stored, at LCLS-II this would lead to a prohibitive cost; instead, real-time processing of pixel detector data (dark, gain, common mode, background, charge summing, subpixel position, photon counting, data summarization) reduces the size and cost of online processing, offline processing, and storage by orders of magnitude while preserving full photon information. This can be achieved by taking advantage of the compressibility of the sparse data typical of LCLS-II applications. Faced with a similar big data challenge a decade ago, computer vision stimulated revolutionary advances in machine learning hardware and software. We investigated whether these developments are useful for processing data from high speed pixel detectors and found that typical deep learning models and autoencoder architectures failed to yield useful noise reduction while preserving full photon information, presumably because of the very different statistics and feature sets in computer vision and radiation imaging. However, the raw performance of modern frameworks like TensorFlow inspired us to redesign, in TensorFlow, mathematically equivalent versions of the state-of-the-art “classical” algorithms used at LCLS. The novel TensorFlow models resulted in elegant, compact, and hardware-agnostic code, processing data 1 to 2 orders of magnitude faster on an inexpensive consumer GPU and reducing the projected cost of online analysis and compression without photon loss at LCLS-II by 3 orders of magnitude. The novel TensorFlow models also enabled ongoing development of a pipelined hardware system expected to yield an additional 3 to 4 orders of magnitude speedup, necessary for meeting the data acquisition and storage requirements at LCLS-II and potentially enabling acquisition of every single FEL pulse at full speed. Computer vision a decade ago was dominated by hand-crafted filters; their structure inspired the deep learning revolution, resulting in modern deep convolutional networks. Similarly, our novel TensorFlow filters provide inspiration for designing future deep learning models.
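
Below is a minimal sketch of the idea the abstract describes: expressing a classical pixel-detector correction chain (dark subtraction, common-mode removal, gain correction, threshold-based photon counting) as vectorized TensorFlow ops, so the same compact code runs unchanged on CPU, GPU, or other accelerators. This is not the authors' code; the function name correct_frames, the detector geometry, the mean-based common-mode estimator, and the counting threshold are all illustrative assumptions.

    import tensorflow as tf

    @tf.function
    def correct_frames(raw, dark, gain, threshold=0.5):
        # raw:  (batch, rows, cols) raw detector frames in ADU
        # dark: (rows, cols) per-pixel pedestal (dark) map
        # gain: (rows, cols) per-pixel ADU-per-photon map
        # threshold: photon-detection threshold in photon units (assumed value)
        x = tf.cast(raw, tf.float32) - dark  # dark (pedestal) subtraction
        # Common-mode removal: estimate a per-row baseline from pixels that
        # are unlikely to contain photons, then subtract it from the row.
        empty = tf.cast(x < threshold * gain, tf.float32)
        baseline = tf.math.divide_no_nan(
            tf.reduce_sum(x * empty, axis=-1, keepdims=True),
            tf.reduce_sum(empty, axis=-1, keepdims=True))
        x = x - baseline
        photons = x / gain  # gain correction: ADU -> photon units
        # Threshold-based photon counting; zeroed pixels keep the output sparse
        return tf.where(photons > threshold, tf.round(photons),
                        tf.zeros_like(photons))

    # Illustrative usage with synthetic data (geometry and values assumed):
    raw = tf.random.uniform((1024, 352, 384), maxval=4000.0, dtype=tf.float32)
    dark = tf.fill((352, 384), 100.0)
    gain = tf.fill((352, 384), 50.0)
    counts = correct_frames(raw, dark, gain)

Because the whole chain consists of batched, element-wise tensor ops compiled once via @tf.function, frames can be streamed through it on whatever hardware is available, which reflects the hardware-agnostic execution model the abstract credits for the 1 to 2 orders of magnitude speedup.
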
ISSN: 0094-243X (print), 1551-7616 (online)
DOI: 10.1063/1.5084708