Comparison of 2D and 3D vegetation species mapping in three natural scenarios using UAV-LiDAR point clouds and improved deep learning methods
Published in: International Journal of Applied Earth Observation and Geoinformation, 2023-12, Vol. 125, p. 103588, Article 103588
Main authors:
Format: Article
Language: English
Subjects:
Online access: Full text
Abstract:
Highlights:
• Our work is a pioneer in comparing 2D and 3D vegetation species mapping in natural scenarios using LiDAR point clouds.
• 3D deep learning vegetation species mapping (mF1 > 89.78%) outperformed 2D mapping in karst wetland, mangrove forest, and hill forest.
• Our MrFSNet improved 2D mapping performance by adaptively selecting optimal LiDAR feature combinations at each scale.
• We proposed a dynamic weighted sampling strategy to resolve extreme category imbalances in 3D vegetation mapping.
The combination of Light Detection and Ranging (LiDAR) point clouds and deep learning has proven to be an effective approach for vegetation mapping. Current studies have predominantly focused on 2D vegetation mapping, whereas 3D mapping, which classifies point clouds directly at the point level, offers a more comprehensive view of the stratified structure of vegetation. However, research on 3D vegetation species mapping is scarce, and the disparities between 2D and 3D mapping in natural scenarios remain unclear. To resolve these issues, we compared the deep learning performance of 2D and 3D vegetation species mapping across three distinct natural scenes: karst wetland, mangrove forest, and hill forest. Both 2D and 3D mapping in natural scenes are adversely affected by the large number of LiDAR-derived feature channels and by the extreme category imbalance in point clouds. To mitigate these challenges, we propose a novel Multi-resolution Feature Selection Network (MrFSNet) that selects optimal feature combinations at different scales for better 2D mapping performance. We also introduce a novel Dynamic Weighted Sampling (DWS) strategy, combined with KPConv, to address the extreme category imbalance in 3D mapping. Results indicate that: (1) 3D vegetation species mapping exhibited the highest performance, achieving an mF1 of 89.78% for karst wetland, 92.25% for mangrove forest, and 92.05% for hill forest. (2) 3D mapping outperformed 2D mapping, improving mF1 by 3.43% to 27.08%. (3) MrFSNet adaptively extracted optimal features at various scales and performed well with limited training data in 2D vegetation mapping, yielding an mF1 1.66%–18.46% higher than that of Swin Transformer. (4) DWS effectively resolved the extreme category imbalance problem and produced an mF1 1.28%–2.80% higher than the non-DWS version in 3D vegetation mapping.
ISSN: 1569-8432, 1872-826X
DOI: 10.1016/j.jag.2023.103588
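This record does not include the authors' code. As a rough illustration of the kind of per-scale channel selection the abstract attributes to MrFSNet, the sketch below applies a squeeze-and-excitation style gate to a stack of LiDAR-derived rasters at several resolutions; the module names, layer sizes, and pooling scheme are assumptions for illustration, not the MrFSNet architecture described in the paper.

```python
# Hypothetical sketch: per-scale selection over stacked LiDAR-derived rasters
# (e.g., canopy height, intensity, return count). Not the authors' MrFSNet.
import torch
import torch.nn as nn

class ChannelSelectGate(nn.Module):
    """SE-style gate that re-weights feature channels, suppressing less
    informative LiDAR channels at a given scale."""
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                      # global context per channel
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),                                 # per-channel weights in (0, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * self.gate(x)                           # emphasize the selected channels

class MultiScaleFeatureSelector(nn.Module):
    """Applies an independent gate at each resolution of a feature pyramid,
    mimicking 'optimal feature combinations at different scales'."""
    def __init__(self, channels: int, num_scales: int = 3):
        super().__init__()
        self.gates = nn.ModuleList(ChannelSelectGate(channels) for _ in range(num_scales))
        self.pool = nn.AvgPool2d(2)

    def forward(self, x: torch.Tensor) -> list[torch.Tensor]:
        outputs = []
        for gate in self.gates:
            outputs.append(gate(x))
            x = self.pool(x)                              # move to the next (coarser) scale
        return outputs

# Example: 10 stacked LiDAR-derived rasters on a 256 x 256 tile
feats = torch.randn(1, 10, 256, 256)
selected = MultiScaleFeatureSelector(channels=10)(feats)
print([t.shape for t in selected])
```

Similarly, the next sketch shows a generic class-frequency-weighted block sampler of the sort a dynamic weighted sampling strategy could build on; the inverse-square-root weighting, the per-block averaging, and the per-epoch re-drawing are assumptions, not the paper's DWS definition or its integration with KPConv.

```python
# Hypothetical sketch: class-balance-aware sampling of point-cloud training blocks.
import numpy as np

def dynamic_block_weights(block_labels: list[np.ndarray]) -> np.ndarray:
    """Return one sampling weight per training block so blocks containing rare
    vegetation classes are drawn more often."""
    counts = np.bincount(np.concatenate(block_labels))
    class_w = 1.0 / np.sqrt(np.maximum(counts, 1))        # rarer classes -> larger weight
    class_w /= class_w.sum()
    # a block's weight is the mean weight of the classes of its points
    block_w = np.array([class_w[lbl].mean() for lbl in block_labels])
    return block_w / block_w.sum()

def sample_blocks(block_labels, n_samples, rng=None):
    """Draw block indices for one epoch; calling this each epoch lets the
    weights track the label distribution of the data actually seen ('dynamic')."""
    rng = rng if rng is not None else np.random.default_rng(0)
    w = dynamic_block_weights(block_labels)
    return rng.choice(len(block_labels), size=n_samples, replace=True, p=w)

# Toy example: three blocks, class 2 is very rare, so block 2 is over-sampled
blocks = [np.array([0, 0, 0, 1]), np.array([0, 1, 1, 1]), np.array([2, 0, 0, 0])]
print(sample_blocks(blocks, n_samples=8))
```

Both sketches are generic illustrations of the stated ideas (adaptive multi-scale feature selection and imbalance-aware sampling); for the actual MrFSNet and DWS formulations, refer to the article via the DOI above.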