Analyzing Deep Learning Representations of Point Clouds for Real-Time In-Vehicle LiDAR Perception
Format: Article
Language: English
Abstract: LiDAR sensors are an integral part of modern autonomous vehicles, as they
provide an accurate, high-resolution 3D representation of the vehicle's
surroundings. However, it is computationally difficult to make use of the
ever-increasing amounts of data from multiple high-resolution LiDAR sensors. As
frame rates, point cloud sizes, and sensor resolutions increase, real-time
processing of these point clouds must still extract semantics from this
increasingly precise picture of the vehicle's environment. One deciding factor
in the run-time performance and accuracy of deep neural networks operating on
these point clouds is the underlying data representation and the way it is
computed. In this work, we examine the relationship between the computational
representations used in neural networks and their performance characteristics.
To this end, we propose a novel computational taxonomy of the LiDAR point cloud
representations used in modern deep neural networks for 3D point cloud
processing. Using this taxonomy, we perform a structured analysis of different
families of approaches. Thereby, we uncover common advantages and limitations
in terms of computational efficiency, memory requirements, and representational
capacity, as measured by semantic segmentation performance. Finally, we provide
insights and guidance for future developments in neural point cloud
processing methods.
DOI: 10.48550/arxiv.2210.14612
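
To make concrete what the abstract means by the "underlying data representation" of a point cloud, the sketch below contrasts a raw, unordered (N, 3) point set with a dense range-image projection, one representation family commonly used for real-time LiDAR semantic segmentation. This is a minimal illustration, not the paper's taxonomy or method: the sensor parameters (64 beams, 1024 azimuth bins, a vertical field of view from +2.0 to -24.8 degrees) and the projection details are assumptions chosen to resemble a typical rotating LiDAR.

```python
# Illustrative sketch (not from the paper): converting a raw (N, 3) point set
# into a dense (H, W) range image via spherical projection. All sensor
# parameters below are assumptions for illustration only.
import numpy as np

def spherical_projection(points, h=64, w=1024, fov_up=2.0, fov_down=-24.8):
    """Project an (N, 3) point cloud into an (h, w) range image."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)            # range per point
    yaw = np.arctan2(y, x)                        # azimuth in [-pi, pi]
    pitch = np.arcsin(np.clip(z / np.maximum(r, 1e-8), -1.0, 1.0))
    fov_up_r, fov_down_r = np.radians(fov_up), np.radians(fov_down)
    # Normalize angles to pixel coordinates.
    u = ((1.0 - (yaw + np.pi) / (2.0 * np.pi)) * w).astype(int) % w
    # Points outside the vertical FoV are clipped to the edge rows here;
    # a real pipeline would typically discard them instead.
    v = (fov_up_r - pitch) / (fov_up_r - fov_down_r) * h
    v = np.clip(v, 0, h - 1).astype(int)
    image = np.full((h, w), -1.0, dtype=np.float32)  # -1 marks empty pixels
    image[v, u] = r                # later points overwrite earlier ones
    return image

# Usage: a random point cloud stands in for a real LiDAR scan.
points = np.random.uniform(-50.0, 50.0, size=(100_000, 3))
range_image = spherical_projection(points)
print(range_image.shape)  # (64, 1024): a dense grid a 2D CNN can consume
```

The design trade-off this illustrates is the one the abstract analyzes: projection-based representations produce a fixed-size dense grid that efficient 2D convolutions can process, at the cost of discretization and occlusion artifacts, whereas point-based networks consume the unordered (N, 3) set directly and avoid that information loss at higher computational cost.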