End-to-End Self-Driving Approach Independent of Irrelevant Roadside Objects With Auto-Encoder

Bibliographic Details
Published in: IEEE Transactions on Intelligent Transportation Systems, 2022-01, Vol. 23 (1), p. 641-650
Main authors: Wang, Tinghan; Luo, Yugong; Liu, Jinxin; Chen, Rui; Li, Keqiang
Format: Article
Language: English
Description
Abstract: On a highway, the frequency of occurrence of irrelevant features, such as trees, varies considerably across scenes. A limitation of the deep convolutional neural networks used in end-to-end self-driving systems is that when incoming images contain too much information, the network struggles to extract only the subset of features required for decision making. Consequently, while existing end-to-end approaches may perform well in their training scenes, they may fail in other scenes. In this study, we developed a novel training method for an auto-encoder that equips it to ignore irrelevant features in input images while retaining relevant ones. Compared with the feature extraction methods of existing end-to-end approaches, the proposed method reduces labeling costs by requiring only image-level tags. The method was validated by training a convolutional neural network model to process the output of the encoder and produce a steering angle to control the vehicle. The resulting end-to-end self-driving approach can ignore the influence of irrelevant features even when no such features were present in the data used to train the convolutional neural network.
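The abstract describes a two-stage pipeline: an auto-encoder whose encoder learns to drop irrelevant roadside features from camera frames, followed by a small convolutional network that regresses a steering angle from the encoder's output. The PyTorch sketch below is a minimal illustration of that structure, not the authors' implementation: the layer sizes, the 64x64 input resolution, and the hypothetical "clean" reconstruction target standing in for the paper's image-level-tag supervision are all assumptions made for the example.

# Minimal sketch (not the authors' code): an encoder that compresses a camera
# frame into a latent feature map, a decoder used only while training the
# auto-encoder, and a small CNN head that regresses a steering angle from the
# latent features. All layer sizes here are illustrative assumptions.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64x64 -> 32x32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32x32 -> 16x16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(), # 16x16 -> 8x8
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, z):
        return self.net(z)

class SteeringHead(nn.Module):
    """Small CNN mapping latent features to a single steering angle."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(128, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, 1),
        )

    def forward(self, z):
        return self.net(z)

# Stage 1 (sketch): train encoder+decoder so that frames tagged as containing
# irrelevant roadside objects are reconstructed toward a "clean" target, which
# pushes the latent code to drop those features. An image-level tag is the
# only label assumed here.
encoder, decoder, head = Encoder(), Decoder(), SteeringHead()
frame = torch.rand(1, 3, 64, 64)          # dummy camera frame
clean_target = torch.rand(1, 3, 64, 64)   # hypothetical "irrelevance-free" target
recon_loss = nn.functional.mse_loss(decoder(encoder(frame)), clean_target)

# Stage 2 (sketch): freeze the encoder and train only the steering head on
# (frame, steering angle) pairs from scenes without the irrelevant objects.
with torch.no_grad():
    z = encoder(frame)
steering = head(z)                         # predicted steering angle

Because the encoder is frozen in stage 2, the steering head can only see features the encoder chose to retain, which mirrors the abstract's claim that the full pipeline ignores irrelevant objects even when none appeared in the steering-network training data.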
ISSN: 1524-9050, 1558-0016
DOI: 10.1109/TITS.2020.3018473