Learning Collision-Free Space Detection From Stereo Images: Homography Matrix Brings Better Data Augmentation

Bibliographic Details
Published in: IEEE/ASME Transactions on Mechatronics, Feb. 2022, Vol. 27 (1), pp. 225-233
Authors: Fan, Rui, Wang, Hengli, Cai, Peide, Wu, Jin, Bocus, Mohammud Junaid, Qiao, Lei, Liu, Ming
Format: Article
Language: English
Description
Abstract: Collision-free space detection is a critical component of autonomous vehicle perception. The state-of-the-art algorithms are typically based on supervised deep learning, and their performance depends on the quality and quantity of labeled training data. Training deep convolutional neural networks (DCNNs) with only a small number of training samples remains an open challenge. In this article, we therefore explore an effective training data augmentation approach that improves overall DCNN performance when additional images captured from different views are available. Because the collision-free space is generally regarded as a planar surface, its pixels in two images captured from different views can be associated by a homography matrix, so the target image can be transformed into the reference view. This provides a simple but effective way to generate training data from additional multiview images. Extensive experiments, conducted with six state-of-the-art semantic segmentation DCNNs on three datasets, validate the effectiveness of the proposed method for enhancing collision-free space detection. On the KITTI road benchmark, our approach achieves the best results among state-of-the-art stereo vision-based collision-free space detection approaches.
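To make the augmentation idea in the abstract concrete, the following is a minimal sketch in Python (NumPy and OpenCV), not the authors' implementation: for a planar collision-free space, a target-view image and its label can be warped into the reference view through a plane-induced homography. The function names, and the assumption that the camera intrinsics K_ref and K_tgt, the relative pose (R, t), and the ground-plane parameters (n, d) are known, are illustrative choices, not details taken from the paper.

import numpy as np
import cv2

def plane_induced_homography(K_ref, K_tgt, R, t, n, d):
    # Homography induced by the plane n^T X = d (expressed in target-camera
    # coordinates), mapping target-view pixels of that plane into the
    # reference view:
    #   H = K_ref (R + t n^T / d) K_tgt^{-1}
    # Here (R, t) is assumed to map points from the target frame to the
    # reference frame, i.e. X_ref = R X_tgt + t.
    H = K_ref @ (R + np.outer(t, n) / d) @ np.linalg.inv(K_tgt)
    return H / H[2, 2]  # normalize so that H[2, 2] == 1

def warp_to_reference_view(image_tgt, label_tgt, H, ref_size):
    # Warp an extra-view image and its collision-free-space label into the
    # reference view, yielding one additional (image, label) training pair.
    w, h = ref_size
    image_warp = cv2.warpPerspective(image_tgt, H, (w, h), flags=cv2.INTER_LINEAR)
    # Nearest-neighbor interpolation keeps the label values discrete.
    label_warp = cv2.warpPerspective(label_tgt, H, (w, h), flags=cv2.INTER_NEAREST)
    return image_warp, label_warp

If the pose and plane parameters are not available, the homography could instead be estimated directly from point correspondences on the road surface (e.g., with cv2.findHomography); either way, the warp only associates pixels of the planar region correctly, which is sufficient for augmenting collision-free space labels.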
ISSN: 1083-4435
EISSN: 1941-014X
DOI: 10.1109/TMECH.2021.3061077