3-D Object Tracking in Panoramic Video and LiDAR for Radiological Source-Object Attribution and Improved Source Detection

Bibliographic Details
Published in: IEEE Transactions on Nuclear Science, 2021-02, Vol. 68 (2), pp. 189-202
Authors: Marshall, M. R., Hellfeld, D., Joshi, T. H. Y., Salathe, M., Bandstra, M. S., Bilton, K. J., Cooper, R. J., Curtis, J. C., Negut, V., Shurley, A. J., Vetter, K.
Format: Article
Language: English
Description
Abstract: Networked detector systems can be deployed in urban environments to aid in the detection and localization of radiological and/or nuclear material. However, effectively responding to and interpreting a radiological alarm using spectroscopic data alone may be hampered by a lack of situational awareness, particularly in complex environments. This study investigates the use of Light Detection and Ranging (LiDAR) and streaming video to enable real-time object detection and tracking, and the fusion of this tracking information with radiological data for the purposes of enhanced situational awareness and increased detection sensitivity. This work presents an object detection, tracking, and novel source-object attribution analysis that is capable of operating in real time. By implementing this analysis pipeline on a custom-developed system that comprises a static 2 in. × 4 in. × 16 in. NaI(Tl) detector colocated with a 64-beam LiDAR and four monocular cameras, we demonstrate the ability to accurately correlate trajectories from tracked objects to spectroscopic gamma-ray data in real time and use physics-based models to reliably discriminate between source-carrying and nonsource-carrying objects. In this work, we describe our approach in detail and present a quantitative performance assessment that characterizes the source-object attribution capabilities of both video and LiDAR. Additionally, we demonstrate the ability to simultaneously track pedestrians and vehicles in a mock urban environment and use this tracking information to improve both detection sensitivity and situational awareness using our contextual-radiological data fusion methodology.
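
The abstract describes correlating tracked-object trajectories with the detector's gamma-ray time series through physics-based source models. The Python sketch below is an illustration only, not the authors' published pipeline: it assumes a simple inverse-square count-rate model with a flat background, a grid of candidate source strengths, and a Poisson log-likelihood-ratio score; the function names, parameter values, and test data are all hypothetical.

```python
# Minimal sketch (assumed model, not the paper's implementation) of a
# physics-based source-object attribution score: predict the count-rate
# profile a carried point source would produce at a static detector along
# each tracked object's trajectory, then score that prediction against the
# measured counts with a Poisson log-likelihood ratio.
import numpy as np

def predicted_rate(distances_m, source_cps_at_1m, background_cps):
    """Expected detector count rate for a point source at the given standoff
    distances: inverse-square falloff plus a constant background."""
    return source_cps_at_1m / np.maximum(distances_m, 0.5) ** 2 + background_cps

def attribution_score(track_distances_m, measured_counts, dt_s, background_cps):
    """Log-likelihood ratio of 'this track carries the source' versus
    'background only', maximized over a grid of candidate source strengths.
    Constant log-factorial terms cancel in the ratio and are omitted."""
    bkg_mu = background_cps * dt_s
    ll_bkg = np.sum(measured_counts * np.log(bkg_mu) - bkg_mu)
    best = 0.0
    for strength in np.linspace(1.0, 5000.0, 200):   # cps at 1 m (assumed grid)
        mu = predicted_rate(track_distances_m, strength, background_cps) * dt_s
        ll = np.sum(measured_counts * np.log(mu) - mu)
        best = max(best, ll - ll_bkg)
    return best

# Hypothetical example: two tracked objects, one passing near the detector.
t = np.arange(0.0, 20.0, 1.0)                 # 1 s count bins
track_a = np.abs(t - 10.0) * 1.5 + 2.0        # closest approach ~2 m
track_b = np.full_like(t, 25.0)               # stays ~25 m away
rng = np.random.default_rng(0)
counts = rng.poisson(predicted_rate(track_a, 400.0, 30.0) * 1.0)
print("score for track A:", attribution_score(track_a, counts, 1.0, 30.0))
print("score for track B:", attribution_score(track_b, counts, 1.0, 30.0))
```

In a sketch like this, the track that actually passed near the detector while the count rate rose receives a much higher score, which is the intuition behind discriminating source-carrying from nonsource-carrying objects; the paper's real-time pipeline and detector-response modeling are more involved.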
ISSN: 0018-9499
EISSN: 1558-1578
DOI: 10.1109/TNS.2020.3047646