Adaptive Training for Robust Object Detection in Autonomous Driving Environments

Bibliographic Details
Published in: IEEE Transactions on Intelligent Vehicles, 2024-08, pp. 1-15
Authors: Berens, Felix; Koschinski, Yasin; Badami, Manjunath Kristappa; Geimer, Marcus; Elser, Stefan; Reischl, Markus
Format: Article
Language: English
Abstract: Leveraging data from both RADAR and LiDAR sensors significantly improves the accuracy of object detection and other autonomous driving tasks compared to single-sensor approaches. This paper introduces a novel adaptation of the low-level fusion variant of Complex-YOLO, specifically designed to cope with sensor disturbances. We develop and implement an enhanced training methodology that incorporates both functional sensor data and simulated sensor disturbances, allowing the network to maintain high performance even under data perturbations. Our empirical results demonstrate that this approach enables Complex-YOLO to adapt effectively to such disturbances, improving mean performance by 109% compared to a network trained only on undisturbed data. Key contributions include a robust training framework that integrates disturbance simulation directly into the training loop, significantly enhancing the detector's resilience in challenging environments, and an evaluation concept that quantitatively measures this improvement.
ISSN: 2379-8858, 2379-8904
DOI: 10.1109/TIV.2024.3439001
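
The abstract above describes a training framework that injects simulated sensor disturbances directly into the training loop. The record gives no implementation details, so the following is only a minimal sketch of that idea, assuming a PyTorch detector operating on a fused RADAR/LiDAR bird's-eye-view grid; the disturbance models, the function names (`simulate_disturbance`, `training_step`), and all parameter values are hypothetical.

```python
# Minimal sketch of the disturbance-aware training idea described in the
# abstract. This is NOT the authors' implementation: the disturbance models
# (sector dropout, additive Gaussian noise) and all parameter values below
# are hypothetical placeholders.
import torch


def simulate_disturbance(bev: torch.Tensor) -> torch.Tensor:
    """Apply one randomly chosen simulated disturbance to a single
    bird's-eye-view tensor of shape (C, H, W)."""
    bev = bev.clone()
    if torch.rand(1).item() < 0.5:
        # Simulated sensor blind spot: zero out a random vertical stripe,
        # standing in for a failed angular sector of the sensor.
        w = bev.shape[-1]
        start = int(torch.randint(0, w // 2, (1,)))
        width = int(torch.randint(w // 8, w // 4, (1,)))
        bev[..., start:start + width] = 0.0
    else:
        # Simulated measurement noise on the fused RADAR/LiDAR grid.
        bev = bev + 0.05 * torch.randn_like(bev)
    return bev


def training_step(model, optimizer, criterion, bev_batch, targets,
                  disturb_ratio: float = 0.5):
    """One optimization step that mixes clean and disturbed samples, so the
    detector is trained on functional data and simulated disturbances alike.
    Labels are left unchanged here, which is a simplification."""
    mixed = torch.stack([
        simulate_disturbance(x) if torch.rand(1).item() < disturb_ratio else x
        for x in bev_batch
    ])
    optimizer.zero_grad()
    loss = criterion(model(mixed), targets)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Mixing clean and perturbed samples within each batch, rather than training on disturbed data alone, reflects the abstract's point that the network should retain high performance on undisturbed inputs while adapting to perturbations.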