LEARD-Net: Semantic segmentation for large-scale point cloud scene

Bibliographic Details
Published in: International journal of applied earth observation and geoinformation 2022-08, Vol. 112, p. 102953, Article 102953
Authors: Zeng, Ziyin, Xu, Yongyang, Xie, Zhong, Tang, Wei, Wan, Jie, Wu, Weichao
Format: Article
Language: English
Online access: Full text
Description
Abstract: Given the prominence of 3D sensors in recent years, 3D point cloud scene data deserve further investigation. Point cloud scene understanding is a challenging task because the data are large-scale and discrete. In this study, we propose a network called LEARD-Net, which focuses on semantic segmentation of large-scale point cloud scene data with color information. The proposed network contains three main components: (1) To fully exploit the color information of point clouds rather than using it only as an initial input feature, we propose a robust local feature extraction module (LFE) that helps the network attend to spatial geometric structure, color information and semantic features. (2) We propose a local feature aggregation module (LFA) that helps the network focus on locally significant features while also attending to the entire local neighborhood. (3) To allow the network to focus on both local and comprehensive features, we use residual and dense connections (ResiDense) to link LFE and LFA modules at different levels. Comparisons with state-of-the-art networks on several large-scale benchmark datasets, including S3DIS, Toronto3D and Semantic3D, demonstrate the effectiveness of LEARD-Net.
• A local feature extraction module (LFE) that takes advantage of color information.
• A local feature aggregation module (LFA) that augments local information while capturing locally significant features and the entire local neighborhood.
• A ResiDense module that allows the network to focus on both local and comprehensive features.
• Our method achieves excellent performance on large-scale benchmarks, including S3DIS, Toronto3D and Semantic3D.
ISSN:1569-8432
1872-826X
DOI:10.1016/j.jag.2022.102953
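The abstract describes ResiDense as residual and dense connections linking LFE and LFA modules at different levels, but the record contains no implementation details. The PyTorch sketch below is only a minimal illustration of that connectivity pattern under stated assumptions: the LocalBlock placeholder, the channel widths and the shared point-wise MLPs are invented for illustration and do not reproduce the authors' LFE/LFA modules.

# Hypothetical sketch of a ResiDense-style connection pattern: each level has a
# residual (skip) path, and every level receives the concatenation of all
# earlier level outputs (dense connections). Module internals are assumptions.
import torch
import torch.nn as nn


class LocalBlock(nn.Module):
    """Placeholder for one LFE/LFA stage: a shared point-wise MLP (assumption)."""

    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Conv1d(in_ch, out_ch, kernel_size=1),
            nn.BatchNorm1d(out_ch),
            nn.ReLU(inplace=True),
        )
        # 1x1 projection so the residual path matches the output width.
        self.shortcut = nn.Conv1d(in_ch, out_ch, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, num_points)
        return self.mlp(x) + self.shortcut(x)  # residual connection


class ResiDense(nn.Module):
    """Dense connections across levels: each block sees all earlier outputs."""

    def __init__(self, in_ch: int, widths=(32, 64, 128)):
        super().__init__()
        self.blocks = nn.ModuleList()
        ch = in_ch
        for w in widths:
            self.blocks.append(LocalBlock(ch, w))
            ch += w  # next block receives the concatenation of all features so far

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = [x]
        for block in self.blocks:
            feats.append(block(torch.cat(feats, dim=1)))
        return torch.cat(feats, dim=1)  # comprehensive multi-level feature


if __name__ == "__main__":
    # Toy input: 2 clouds, 9 input channels (xyz + rgb + normals), 1024 points.
    points = torch.randn(2, 9, 1024)
    print(ResiDense(in_ch=9)(points).shape)  # torch.Size([2, 233, 1024])

In this reading, the per-level shortcut preserves local features while the dense concatenation lets later levels see every earlier level's output, which is one way to realize "both local and comprehensive features" as stated in the abstract.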