3DSFLabelling: Boosting 3D Scene Flow Estimation by Pseudo Auto-labelling
Format: Article
Language: English
Abstract: Learning 3D scene flow from LiDAR point clouds presents significant difficulties, including poor generalization from synthetic datasets to real scenes, scarcity of real-world 3D labels, and poor performance on real sparse LiDAR point clouds. We present a novel approach from the perspective of auto-labelling, aiming to generate a large number of 3D scene flow pseudo labels for real-world LiDAR point clouds. Specifically, we employ the assumption of rigid body motion to simulate potential object-level rigid movements in autonomous driving scenarios. By updating different motion attributes for multiple anchor boxes, we obtain a rigid motion decomposition of the whole scene. Furthermore, we develop a novel 3D scene flow data augmentation method for global and local motion. By synthesizing target point clouds from the augmented motion parameters, we readily obtain a large number of 3D scene flow labels for point clouds that are highly consistent with real scenarios. On multiple real-world datasets, including LiDAR KITTI, nuScenes, and Argoverse, our method outperforms all previous supervised and unsupervised methods without requiring manual labelling. Impressively, our method achieves a tenfold reduction in the EPE3D metric on the LiDAR KITTI dataset, reducing it from $0.190m$ to a mere $0.008m$ error.
DOI: 10.48550/arxiv.2402.18146
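
The rigid-motion pseudo-labelling idea in the abstract can be illustrated with a minimal sketch, assuming NumPy and hypothetical names (`generate_pseudo_labels`, `box_masks`, the yaw/translation ranges); this is not the authors' released code, only an illustration of sampling one rigid transform per anchor box, synthesizing a target frame, and taking the per-point displacement as the scene-flow label.

```python
# Minimal sketch (hypothetical, not the authors' implementation):
# each anchor box receives its own sampled rigid motion; the resulting
# per-point displacement serves as the 3D scene flow pseudo label.
import numpy as np

def rotation_z(theta):
    """Rotation about the vertical (z) axis, typical for driving scenes."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

def generate_pseudo_labels(points, box_masks, max_yaw=0.1, max_trans=1.0, rng=None):
    """points: (N, 3) source LiDAR points.
    box_masks: list of boolean (N,) masks, one per anchor box.
    Returns the synthesized target frame and the (N, 3) flow pseudo labels."""
    rng = np.random.default_rng() if rng is None else rng
    target = points.copy()
    for mask in box_masks:
        obj = points[mask]
        if obj.shape[0] == 0:
            continue
        center = obj.mean(axis=0)
        # Sample a plausible object-level rigid motion (yaw + translation);
        # vertical translation is damped, as in typical driving scenarios.
        R = rotation_z(rng.uniform(-max_yaw, max_yaw))
        t = rng.uniform(-max_trans, max_trans, size=3) * np.array([1.0, 1.0, 0.1])
        # Rotate the box's points about their centre, then translate.
        target[mask] = (obj - center) @ R.T + center + t
    flow = target - points  # scene-flow pseudo label for every point
    return target, flow
```

Because the target frame is synthesized directly from the sampled motion parameters, the flow labels are exact by construction, which is what allows large quantities of labelled training pairs to be produced without any manual annotation.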