Multi-View Incremental Segmentation of 3-D Point Clouds for Mobile Robots

Full Description

Bibliographic Details
Published in: IEEE Robotics and Automation Letters, 2019-04, Vol. 4 (2), p. 1240-1246
Main authors: Chen, Jingdao; Cho, Yong Kwon; Kira, Zsolt
Format: Article
Language: English
Subjects:
Online access: Order full text
Description
Abstract: Mobile robots need to create high-definition three-dimensional (3-D) maps of the environment for applications such as remote surveillance and infrastructure mapping. Accurate semantic processing of the acquired 3-D point cloud is critical for allowing the robot to obtain a high-level understanding of the surrounding objects and perform context-aware decision making. Existing techniques for point cloud semantic segmentation are mostly applied on a single-frame or offline basis, with no way to integrate the segmentation results over time. This letter proposes an online method for mobile robots to incrementally build a semantically rich 3-D point cloud of the environment. The proposed deep neural network, MCPNet, is trained to predict class labels and object instance labels for each point in the scanned point cloud in an incremental fashion. A multi-view context pooling (MCP) operator is used to combine point features obtained from multiple viewpoints to improve the classification accuracy. The proposed architecture was trained and evaluated on ray-traced scans derived from the Stanford 3-D Indoor Spaces dataset. Results show that the proposed approach led to a 15% improvement in pointwise accuracy and a 7% improvement in normalized mutual information compared to the next best online method, with only a 6% drop in accuracy compared to the PointNet-based offline approach.
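The abstract describes a multi-view context pooling (MCP) operator that merges per-point features seen from multiple viewpoints. The paper's exact formulation is not given in this record; the sketch below only illustrates the general idea with a symmetric (order-invariant) max-pooling over the view axis, a common choice in PointNet-style architectures. The function name, array shapes, and the max aggregation are assumptions for illustration.

```python
import numpy as np

def multi_view_context_pooling(view_features: np.ndarray) -> np.ndarray:
    """Aggregate per-point features observed from several viewpoints.

    view_features: array of shape (num_views, num_points, feature_dim),
        holding the feature vector each viewpoint produced for each point.
    Returns one pooled feature vector per point: (num_points, feature_dim).
    """
    # Element-wise max over the view axis keeps, for every feature channel,
    # the strongest response seen from any viewpoint. Because max is
    # symmetric, the result does not depend on the order in which the
    # robot acquired the views.
    return np.max(view_features, axis=0)

# Toy example: 3 viewpoints, 2 points, 4-dimensional features.
rng = np.random.default_rng(0)
views = rng.normal(size=(3, 2, 4))
pooled = multi_view_context_pooling(views)
print(pooled.shape)  # (2, 4)
```

Any symmetric reduction (mean, sum, attention-weighted average) would preserve the same order-invariance property; max is shown here only because it is the simplest common choice.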
ISSN: 2377-3766
DOI: 10.1109/LRA.2019.2894915