A Real-time Positioning Model for UAV’s Patrolling Images Based on Airborne LiDAR Point Cloud Fusion


Bibliographic Details
Published in: KSCE Journal of Civil Engineering, 2024, 28(7), pp. 2952-2965
Authors: Fan, Wei; Liu, Haojie; Pei, Haoyang; Tian, Shuaishuai; Liu, Yun
Format: Article
Language: English
Abstract: The precise positioning of oblique aerial images has been widely studied in recent years. However, existing methods still fall short in highly time-sensitive engineering applications. For the real-time positioning of oblique images in Unmanned Aerial Vehicle (UAV) patrolling applications, existing photogrammetry methods cannot meet the real-time positioning requirement, existing binocular vision methods cannot meet the dynamic and precise positioning requirements, existing optical flow methods cannot meet the absolute positioning requirement, and existing multi-source feature matching methods cannot meet the robust positioning requirement. To meet the real-time, dynamic, precise, absolute, and robust positioning requirements of UAV patrolling images, a real-time positioning model for UAV patrolling images based on airborne LiDAR point cloud fusion is proposed. First, a precise Digital Surface Model (DSM) is generated by rasterizing and imaging the raw airborne LiDAR point cloud, in which each pixel's grayscale is exactly equal to the elevation of the local area covered by that pixel. Second, the generated DSM and the UAV patrolling image are fused under specific geometric constraints to realize real-time positioning of the UAV patrolling image pixel by pixel. Finally, more precise positioning of selected key points on the UAV patrolling image is achieved by performing Principal Component Analysis (PCA) on the raw airborne LiDAR points surrounding the selected key points. The above methods are analyzed and verified in three groups of practical experiments, and the results indicate that the proposed model can position a single UAV patrolling image (4000 × 6000 pixels) with an accuracy of 0.5 m within 0.38 seconds in arbitrary areas, and can further position any selected key point on the image with an accuracy of 0.2 m within 0.001 seconds.
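The abstract's first step maps raw LiDAR points onto a raster grid whose cell values are local elevations. The paper's exact rasterization rule is not given here; the following is a minimal sketch, assuming each cell takes the highest return falling inside it (a common DSM convention), with `rasterize_to_dsm` and `cell_size` as hypothetical names:

```python
import numpy as np

def rasterize_to_dsm(points, cell_size):
    """Rasterize an (N, 3) point cloud [x, y, z] into a DSM grid.

    Assumption: a cell's elevation is the maximum z of the points it covers
    (highest return approximates the visible surface).
    """
    xy_min = points[:, :2].min(axis=0)
    # Map each point to its (col, row) grid cell index.
    cols, rows = ((points[:, :2] - xy_min) // cell_size).astype(int).T
    dsm = np.full((rows.max() + 1, cols.max() + 1), np.nan)
    for r, c, z in zip(rows, cols, points[:, 2]):
        if np.isnan(dsm[r, c]) or z > dsm[r, c]:
            dsm[r, c] = z  # keep the highest return per cell
    return dsm

# Toy cloud: two points share one 1 m cell, a third lands in the next cell.
pts = np.array([[0.1, 0.1, 5.0],
                [0.4, 0.2, 7.0],
                [1.2, 0.1, 3.0]])
dsm = rasterize_to_dsm(pts, cell_size=1.0)  # -> [[7.0, 3.0]]
```

Cells with no returns stay `NaN`; a production pipeline would interpolate those gaps before fusing the DSM with the patrolling image.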
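The final refinement step applies PCA to the raw points around a key point. How the paper uses the resulting components is not detailed in the abstract; as a sketch, PCA of a neighborhood's covariance yields principal directions, and the eigenvector of the smallest eigenvalue approximates the local surface normal (the function name `local_pca` and the synthetic patch are illustrative assumptions):

```python
import numpy as np

def local_pca(neighborhood):
    """PCA on an (N, 3) point neighborhood.

    Returns eigenvalues (descending) and matching eigenvectors (as columns)
    of the covariance matrix; the last column approximates the surface normal.
    """
    centered = neighborhood - neighborhood.mean(axis=0)
    cov = centered.T @ centered / len(neighborhood)
    evals, evecs = np.linalg.eigh(cov)   # eigh returns ascending eigenvalues
    order = evals.argsort()[::-1]        # reorder to descending
    return evals[order], evecs[:, order]

# Synthetic, roughly planar patch in the XY plane with millimeter z noise.
rng = np.random.default_rng(0)
patch = np.column_stack([rng.uniform(0.0, 1.0, 50),
                         rng.uniform(0.0, 1.0, 50),
                         rng.normal(0.0, 1e-3, 50)])
evals, evecs = local_pca(patch)
normal = evecs[:, 2]  # smallest-eigenvalue direction, close to (0, 0, ±1)
```

For a near-planar patch the two leading eigenvalues dominate and the normal aligns with the z axis, which is what lets PCA refine an elevation estimate at a key point.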
ISSN: 1226-7988, 1976-3808
DOI: 10.1007/s12205-024-2254-2