Urban vegetation segmentation using terrestrial LiDAR point clouds based on point non-local means network


Full description

Saved in:
Bibliographic Details
Published in: International journal of applied earth observation and geoinformation 2021-12, Vol. 105, p. 102580, Article 102580
Main authors: Chen, Yiping, Wu, Rongren, Yang, Chengzhe, Lin, Yaojin
Format: Article
Language: English
Subjects:
Online access: Full text
Description
Abstract:
• Combines voxel-based and point-wise methods to extract features from large-scene point clouds, improving accuracy.
• Long-range features from the non-local branch of PointNLM handle complex scenes and sloped terrain.
• Fusion layers based on neighborhood max-pooling concatenate local and non-local features for every supervoxel, enhancing feature expression.
• The algorithm significantly improves the accuracy of tree segmentation in complex urban environments, especially for low vegetation.
Urban vegetation inventory at city scale using terrestrial light detection and ranging (LiDAR) point clouds is very challenging due to the large quantity of points, varying local density, and occlusion effects, which lead to missing features and incomplete data. This paper proposes a novel method, the Point Non-Local Means (PointNLM) network, which combines supervoxel-based and point-wise processing for automatic semantic segmentation of vegetation from large-scale, complex-scene point clouds. PointNLM captures the long-range relationships between groups of points via a non-local branch, cascaded three times, to describe sharp geometric features. Simultaneously, a local branch processes the positions of scattered feature points and captures low- and high-level features. Finally, a fusion layer based on neighborhood max-pooling concatenates the long-range, low-level, and high-level features for segmenting trees. The proposed architecture was evaluated on three datasets: the two open-access benchmarks Semantic3D and Paris-Lille-3D, and an in-house dataset acquired by a commercial mobile LiDAR system. Experimental results indicate that the proposed method provides efficient and robust vegetation segmentation, achieving an Intersection over Union (IoU) of 94.4%, an F1-score of 92.7%, and an overall accuracy of 96.3%.
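The "non-local" branch described above follows the general non-local (self-attention) idea: each feature vector is updated with a similarity-weighted sum over all other feature vectors, so information can travel between distant regions of the scene in a single step. The sketch below is a minimal NumPy illustration of that generic operation over per-supervoxel features; it is not the authors' implementation, and the function name and dot-product similarity are assumptions for illustration only.

```python
import numpy as np

def nonlocal_block(feats):
    """Simplified non-local (self-attention) operation over a set of
    supervoxel feature vectors. Illustrative sketch only.

    feats: (N, C) array of per-supervoxel features.
    Returns: (N, C) array where each row aggregates information from
    ALL rows, weighted by pairwise feature similarity.
    """
    # Pairwise dot-product similarity between every pair of features
    sim = feats @ feats.T                       # (N, N)
    # Row-wise softmax turns similarities into attention weights
    sim = sim - sim.max(axis=1, keepdims=True)  # numerical stability
    attn = np.exp(sim)
    attn /= attn.sum(axis=1, keepdims=True)
    # Similarity-weighted sum: long-range aggregation in one step
    out = attn @ feats                          # (N, C)
    # Residual connection, as is typical in non-local network blocks
    return feats + out
```

Cascading this block (the abstract mentions three repetitions) lets the receptive field cover the whole scene regardless of point density or occlusion.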
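The reported evaluation scores (IoU, F1-score, overall accuracy) are standard binary segmentation metrics computed from true/false positives and negatives. The helper below shows how they are conventionally defined for a vegetation-vs-background labeling; it is an illustrative sketch, not the authors' evaluation code.

```python
import numpy as np

def seg_metrics(pred, gt):
    """Binary segmentation metrics: IoU, F1-score, overall accuracy.

    pred, gt: boolean arrays of equal shape (True = vegetation point).
    """
    tp = np.sum(pred & gt)    # vegetation correctly labeled
    fp = np.sum(pred & ~gt)   # background labeled as vegetation
    fn = np.sum(~pred & gt)   # vegetation missed
    tn = np.sum(~pred & ~gt)  # background correctly labeled
    iou = tp / (tp + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    oa = (tp + tn) / (tp + fp + fn + tn)
    return iou, f1, oa
```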
ISSN: 1569-8432
eISSN: 1872-826X
DOI:10.1016/j.jag.2021.102580