Spatiotemporal Scene-Graph Embedding for Autonomous Vehicle Collision Prediction

Bibliographic Details
Published in: IEEE Internet of Things Journal, 2022-06, Vol. 9 (12), pp. 9379-9388
Authors: Malawade, Arnav Vaibhav; Yu, Shih-Yuan; Hsu, Brandon; Muthirayan, Deepan; Khargonekar, Pramod P.; Faruque, Mohammad Abdullah Al
Format: Article
Language: English
Abstract: In autonomous vehicles (AVs), early warning systems rely on collision prediction to ensure occupant safety. However, state-of-the-art methods using deep convolutional networks either fail to model collisions effectively or are too computationally expensive and slow for deployment on AV edge hardware. To address these limitations, we propose SG2VEC, a spatiotemporal scene-graph embedding methodology that uses graph neural network (GNN) and long short-term memory (LSTM) layers to predict future collisions via visual scene perception. We demonstrate that SG2VEC predicts collisions 8.11% more accurately and 39.07% earlier than the state-of-the-art method on synthesized data sets, and 29.47% more accurately on a challenging real-world collision data set. We also show that SG2VEC is better than the state of the art at transferring knowledge from synthetic data sets to real-world driving data sets. Finally, we demonstrate that SG2VEC performs inference 9.3× faster with an 88.0% smaller model, 32.4% less power, and 92.8% less energy than the state-of-the-art method on the industry-standard Nvidia DRIVE PX 2 platform, making it more suitable for implementation on the edge.
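
The abstract describes a two-stage pipeline: each video frame is abstracted into a scene graph, GNN layers embed that graph into a fixed-size vector, and an LSTM consumes the sequence of embeddings to score collision risk. The sketch below illustrates only that pipeline shape; it is not the authors' implementation. The layer sizes, the single mean-pooled message-passing step, and the class name SceneGraphCollisionPredictor are all illustrative assumptions.

```python
import torch
import torch.nn as nn

class SceneGraphCollisionPredictor(nn.Module):
    """Minimal sketch of a GNN + LSTM collision predictor in the spirit of SG2VEC.

    Per frame: one graph-convolution step (A_hat @ X @ W) over the scene graph,
    mean-pooled into a graph embedding; the embedding sequence feeds an LSTM
    whose final hidden state is mapped to a collision-risk logit.
    All dimensions are assumptions, not values from the paper.
    """
    def __init__(self, node_dim=16, gnn_dim=32, lstm_dim=64):
        super().__init__()
        self.gnn = nn.Linear(node_dim, gnn_dim)            # shared node-update weights W
        self.lstm = nn.LSTM(gnn_dim, lstm_dim, batch_first=True)
        self.head = nn.Linear(lstm_dim, 1)                 # collision / no-collision logit

    def forward(self, node_feats, adjs):
        # node_feats: list of (n_i, node_dim) tensors, one scene graph per frame
        # adjs:       list of (n_i, n_i) normalized adjacencies (self-loops included)
        embeds = []
        for x, a in zip(node_feats, adjs):
            h = torch.relu(self.gnn(a @ x))   # one message-passing step
            embeds.append(h.mean(dim=0))      # pool nodes -> graph embedding
        seq = torch.stack(embeds).unsqueeze(0)             # (1, T, gnn_dim)
        _, (h_n, _) = self.lstm(seq)
        return self.head(h_n[-1])                          # (1, 1) risk logit

# Illustrative call: 10 frames, each with a random 5-node scene graph.
model = SceneGraphCollisionPredictor()
feats = [torch.randn(5, 16) for _ in range(10)]
adjs = [torch.eye(5) for _ in range(10)]   # stand-in for a normalized adjacency
logit = model(feats, adjs)                 # shape (1, 1)
```

Pooling each frame's graph into a single vector before the LSTM is what keeps the temporal model independent of how many objects appear per frame, which is consistent with the abstract's emphasis on a small, edge-deployable model.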
ISSN: 2327-4662
DOI: 10.1109/JIOT.2022.3141044