A Novel Method for Improving Point Cloud Accuracy in Automotive Radar Object Recognition



Bibliographic Details
Published in: IEEE Access, 2023-01, Vol. 11, p. 1-1
Main Authors: Lu, Guowei; He, Zhenhua; Zhang, Shengkai; Huang, Yanqing; Zhong, Yi; Li, Zhuo; Han, Yi
Format: Article
Language: English
Online Access: Full text
Description
Summary: High-quality environmental perception is crucial for self-driving cars. Integrating multiple sensors is the predominant research direction for enhancing the accuracy and resilience of autonomous driving systems. Millimeter-wave radar has recently gained attention from the academic community owing to its unique physical properties, which complement other sensing modalities such as vision. Unlike cameras and LIDAR, millimeter-wave radar is not affected by light or weather conditions, has high penetration capability, and can operate day and night, making it an ideal sensor for object tracking and identification. However, the longer wavelengths of millimeter-wave signals present challenges, including sparse point clouds and susceptibility to multipath effects, which limit sensing accuracy. To enhance the object recognition capability of millimeter-wave radar, we propose a GAN-based point cloud enhancement method that converts sparse point clouds into RF images with richer semantic information, ultimately improving the accuracy of tasks such as object detection and semantic segmentation. We evaluated our method on the CARRADA and nuScenes datasets, and the experimental results demonstrate that our approach improves object classification accuracy by 14.01% and semantic segmentation by 4.88% compared to current state-of-the-art methods.
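The abstract describes mapping a sparse radar point cloud to a dense RF image via a GAN, but does not spell out the input representation. As a purely illustrative sketch of one plausible preprocessing step, the code below rasterizes sparse (range, azimuth, intensity) detections onto a fixed 2D range-azimuth grid of the kind such a generator could consume. The function name, grid size, and field-of-view bounds are all assumptions, not details from the paper.

```python
import numpy as np

def rasterize_point_cloud(points, grid_shape=(64, 64),
                          range_max=50.0, azimuth_max=np.pi / 2):
    """Project sparse radar detections (range, azimuth, intensity)
    onto a 2D range-azimuth grid. Hypothetical preprocessing step;
    the paper's actual input representation may differ."""
    grid = np.zeros(grid_shape, dtype=np.float32)
    for r, az, amp in points:
        # Drop detections outside the grid's assumed field of view.
        if not (0.0 <= r < range_max and -azimuth_max <= az < azimuth_max):
            continue
        i = int(r / range_max * grid_shape[0])
        j = int((az + azimuth_max) / (2 * azimuth_max) * grid_shape[1])
        grid[i, j] = max(grid[i, j], amp)  # keep strongest return per cell
    return grid

# Example: three sparse detections land in three distinct grid cells.
pts = [(10.0, 0.0, 0.8), (25.0, -0.5, 0.6), (40.0, 1.0, 0.9)]
img = rasterize_point_cloud(pts)
print(np.count_nonzero(img))  # 3
```

The dense grid would then play the role of the conditioning input to the generator, with the discriminator judging whether the generated RF image matches real measurements; that adversarial stage is not sketched here.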
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2023.3280544