Research and Engineering Implementation of Laser Point Cloud 3D Target Recognition Algorithm in Complex Scene

Bibliographic Details
Published in: Journal of physics. Conference series, 2021-02, Vol. 1828 (1), p. 012155
Authors: Shi, Shixi; Zhou, Yifu; Sun, Yang; Ding, Ziquan; Hou, Xiaoxiang; Li, Tao; Cao, Jihong; Tan, Yijun; Deng, Bu; Li, Hongyi; Zhang, Jun; Gu, Yaping
Format: Article
Language: English
Online Access: Full text
Description
Abstract: Target detection in complex scenes poses many challenges. In real scenes, the parts on workpiece plates vary in type, quantity, and shape, so model construction and rapid detection must cope with complex, changeable conditions, and the sheer scale of the point cloud data strains the efficiency and accuracy of visibility analysis. This paper proposes a 3D modeling method based on computer graphics that takes the point cloud scanned by a displacement laser sensor as the digital representation of the 3D complex scene, enabling fast extraction of semantic target feature parameters under complex conditions. A saliency feature built from multi-dimensional parameters, fused with multiple filtering algorithms, yields a fast, high-precision intelligent template recognition and detection method. The detection algorithm is implemented in software on an SOC embedded platform. A two-dimensional mechanical motion platform was developed to test workpiece plates used in the wheel hubs of high-speed rail trains; the recognition accuracy exceeds 99%.
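The record does not include the authors' implementation; purely as an illustration of the kind of "fusion of multiple filtering" preprocessing the abstract mentions, the following minimal NumPy sketch chains two standard point-cloud filters (statistical outlier removal, then voxel-grid downsampling). All function names, parameters, and filter choices here are assumptions for illustration, not the paper's method.

import numpy as np

def statistical_outlier_filter(points, k=16, std_ratio=2.0):
    # Keep points whose mean distance to their k nearest neighbours lies
    # within std_ratio standard deviations of the global mean distance.
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    knn_mean = np.sort(d, axis=1)[:, 1:k + 1].mean(axis=1)  # skip self (0)
    keep = knn_mean < knn_mean.mean() + std_ratio * knn_mean.std()
    return points[keep]

def voxel_downsample(points, voxel=5.0):
    # Thin the cloud by replacing all points in each voxel with their centroid.
    keys = np.floor(points / voxel).astype(np.int64)
    _, inv = np.unique(keys, axis=0, return_inverse=True)
    inv = inv.reshape(-1)
    counts = np.bincount(inv).astype(float)
    return np.stack(
        [np.bincount(inv, weights=points[:, dim]) / counts for dim in range(3)],
        axis=1)

rng = np.random.default_rng(0)
scan = rng.uniform(0.0, 100.0, size=(500, 3))  # stand-in for real sensor data
clean = voxel_downsample(statistical_outlier_filter(scan))
print(scan.shape, "->", clean.shape)

The brute-force pairwise distance in the outlier filter is only for readability; a KD-tree (e.g. scipy.spatial.cKDTree) would be the usual choice for the large-scale scans the abstract describes.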
ISSN: 1742-6588
eISSN: 1742-6596
DOI: 10.1088/1742-6596/1828/1/012155