Density-Imbalance-Eased LiDAR Point Cloud Upsampling via Feature Consistency Learning

Full Description

Bibliographic Details
Published in: IEEE Transactions on Intelligent Vehicles, 2023-04, Vol. 8 (4), p. 2875-2887
Main Authors: Chen, Tso-Yuan; Hsiao, Ching-Chun; Huang, Ching-Chun
Format: Article
Language: English
Online Access: Order full text
Description
Abstract: 3D point-cloud upsampling, a crucial perceptual module for understanding complex scenes and objects, aims to generate a high-resolution point cloud from a sparse point set. While considerable attention has been paid to single-object point-cloud upsampling, literature on upsampling complex scenes has emerged slowly. Remarkably, few related works target LiDAR-based point clouds, which inherently suffer from a density imbalance over distance. This paper proposes LiUpNet, a LiDAR-based point cloud upsampling network. Given a sparse and imbalanced point cloud of a natural, complex scene, LiUpNet extracts robust regional features and generates a uniformly distributed dense point cloud that preserves fine-grained structure. Specifically, LiUpNet applies an attentive, transformer-based feature extractor to learn detailed regional representations that model the underlying complex local structure. A novel density-invariant feature consistency loss is also introduced to improve the robustness of the learned features against sparsity changes. Finally, given the regional representations, a manifold-based upsampler super-resolves the regional point clouds to different scales according to their sparsity and implicitly regresses non-linear 3D scene surfaces. A denser, more uniformly distributed scene point cloud is then achieved with a scheme of dynamic regional upsampling rates. Experimental results on single-object and scene point-cloud upsampling show that LiUpNet outperforms related works both qualitatively and quantitatively. Additionally, the enhanced scene point clouds are shown to efficiently improve downstream tasks such as point-cloud-based 3D object detection and depth completion.
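The density-invariant feature consistency idea from the abstract can be illustrated with a toy sketch: a region's feature should stay close to the feature of a random sparser subsample of the same region. The `regional_feature` statistic and the loss below are hypothetical stand-ins, not the paper's actual transformer-based extractor or loss; this is only a minimal numpy illustration of the consistency principle.

```python
import numpy as np

rng = np.random.default_rng(0)

def regional_feature(points):
    """Toy stand-in for a learned regional feature: per-axis mean and std.
    (The paper uses a transformer-based extractor; this is illustrative only.)"""
    return np.concatenate([points.mean(axis=0), points.std(axis=0)])

def density_consistency_loss(points, keep_ratio=0.5):
    """Hypothetical density-invariant feature consistency loss:
    L2 distance between the feature of a region and the feature of a
    random subsample of that same region (simulating lower density)."""
    n = len(points)
    idx = rng.choice(n, size=max(1, int(n * keep_ratio)), replace=False)
    f_dense = regional_feature(points)
    f_sparse = regional_feature(points[idx])
    return float(np.mean((f_dense - f_sparse) ** 2))

# Example: a synthetic uniform 3D patch. For a density-invariant feature,
# subsampling should barely change it, so the loss stays small.
patch = rng.uniform(-1.0, 1.0, size=(1024, 3))
loss = density_consistency_loss(patch, keep_ratio=0.25)
print(f"consistency loss: {loss:.5f}")
```

Minimizing such a loss during training would push the extractor toward features that depend on local geometry rather than point density, which is the stated goal for LiDAR clouds whose density falls off with distance.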
ISSN:2379-8858
2379-8904
DOI:10.1109/TIV.2022.3162672