Deep learning–based traffic sign recognition for unmanned autonomous vehicles
Published in: Proceedings of the Institution of Mechanical Engineers, Part I: Journal of Systems and Control Engineering, 2018-05, Vol. 232 (5), pp. 497-505
Main authors: , , , , , ,
Format: Article
Language: eng
Subjects:
Online access: Full text
Abstract: As one of the key techniques for unmanned autonomous vehicles, traffic sign recognition is applied to assist autopilot systems. Colors are very important clues for identifying traffic signs; however, color-based methods suffer performance degradation under light variation. The convolutional neural network, as one of the deep learning methods, is able to hierarchically learn high-level features from raw input, and convolutional neural network–based approaches have been shown to outperform color-based ones. At present, inputs of convolutional neural networks are processed either as gray images or as three independent color channels; the learned color features are still not sufficient to represent traffic signs. Apart from color, the temporal constraint is also crucial for recognizing video-based traffic signs, and the characteristics of traffic signs in the time domain require further exploration. Quaternion numbers are able to encode multi-dimensional information and have been employed to describe color images. Inspired by this, we present a quaternion convolutional neural network–based approach that recognizes traffic signs by fusing spatial and temporal features in a single framework. Experimental results illustrate that the proposed method yields correct recognition results and obtains better performance than state-of-the-art work.
ISSN: 0959-6518 (print), 2041-3041 (online)
DOI: 10.1177/0959651818758865
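
To make the idea in the abstract concrete, here is a minimal sketch of the two building blocks a quaternion convolutional neural network rests on: encoding an RGB pixel as a pure quaternion (0, R, G, B) so the three color channels stay coupled, and a convolution in which each multiply is a Hamilton product. This is an illustrative NumPy sketch under our own assumptions, not the architecture from the paper; the function names (rgb_to_quaternion, hamilton_product, quaternion_conv2d) are hypothetical.

```python
# Illustrative sketch only; not the paper's implementation.
import numpy as np

def rgb_to_quaternion(img):
    """Encode an H x W x 3 RGB image as an H x W x 4 array of pure
    quaternions (0, R, G, B), keeping the color channels coupled."""
    h, w, _ = img.shape
    q = np.zeros((h, w, 4), dtype=np.float32)
    q[..., 1:] = img  # real part stays 0; i, j, k carry R, G, B
    return q

def hamilton_product(p, q):
    """Hamilton product of two quaternion arrays of shape (..., 4)."""
    r1, x1, y1, z1 = p[..., 0], p[..., 1], p[..., 2], p[..., 3]
    r2, x2, y2, z2 = q[..., 0], q[..., 1], q[..., 2], q[..., 3]
    return np.stack([
        r1*r2 - x1*x2 - y1*y2 - z1*z2,
        r1*x2 + x1*r2 + y1*z2 - z1*y2,
        r1*y2 - x1*z2 + y1*r2 + z1*x2,
        r1*z2 + x1*y2 - y1*x2 + z1*r2,
    ], axis=-1)

def quaternion_conv2d(qimg, qkernel):
    """Valid 2-D convolution (CNN-style cross-correlation) where every
    multiply is a Hamilton product. qimg: (H, W, 4); qkernel: (kh, kw, 4)."""
    h, w, _ = qimg.shape
    kh, kw, _ = qkernel.shape
    out = np.zeros((h - kh + 1, w - kw + 1, 4), dtype=np.float32)
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            window = qimg[i:i+kh, j:j+kw]  # (kh, kw, 4)
            out[i, j] = hamilton_product(window, qkernel).sum(axis=(0, 1))
    return out

if __name__ == "__main__":
    frame = np.random.rand(32, 32, 3).astype(np.float32)  # stand-in RGB frame
    q = rgb_to_quaternion(frame)
    kernel = np.random.randn(3, 3, 4).astype(np.float32)
    print(quaternion_conv2d(q, kernel).shape)  # (30, 30, 4)
```

The sketch covers a single frame only; in the video setting the abstract describes, spatial and temporal features would additionally be fused across consecutive frames within the same quaternion framework.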