Stereo Vision by Combination of Machine-Learning Techniques for Pedestrian Detection at Intersections Utilizing Surround-View Cameras

Bibliographic Details
Published in: Journal of Robotics and Mechatronics, 2020-06, Vol. 32 (3), p. 494-502
Main Authors: Akita, Tokihiko; Yamauchi, Yuji; Fujiyoshi, Hironobu
Format: Article
Language: English
Subjects:
Online Access: Full text
Description
Abstract: The frequency of pedestrian traffic accidents continues to increase in Japan, and driver assistance systems are therefore expected to reduce the number of accidents. However, it is difficult for current environmental recognition sensors to detect crossing pedestrians when turning at intersections, owing to their limited field of view and their cost. We propose a pedestrian detection system that utilizes surround-view fisheye cameras with a wide field of view; the system can be realized at low cost if the vehicle is already equipped with such cameras. Pedestrian positions must be detected accurately, because more precise prediction of future collision points is required at intersections, and stereo vision is suitable for this purpose. However, realizing stereo vision with fisheye cameras raises several concerns: lens distortion, asynchronous image capture, and fluctuating camera postures. As a countermeasure, we propose a novel method that combines several machine-learning techniques. The D-Brief descriptor, histogram of oriented gradients, and normalized cross-correlation are combined by a support-vector machine for stereo matching, and a random forest is adopted to discriminate pedestrians from noise in the reconstructed 3D point cloud. We evaluated the method on images of pedestrians crossing at actual intersections and achieved a tracking rate of 96.0%. It was verified that the algorithm can accurately detect a pedestrian, with an average position error of 0.17 m.
ISSN: 0915-3942; 1883-8049
DOI: 10.20965/jrm.2020.p0494
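
Below is a minimal, illustrative sketch (not the authors' implementation) of the two learning stages described in the abstract: a support-vector machine that fuses patch-matching cues for stereo correspondence, and a random forest that separates pedestrian clusters from noise in the reconstructed point cloud. The D-Brief descriptor is omitted here because no off-the-shelf implementation is assumed, the HOG feature is replaced by a simplified single orientation histogram, and all function names, feature choices, and training data are hypothetical placeholders; NumPy and scikit-learn are assumed.

```python
# Sketch of an SVM-based fusion of stereo-matching cues and a random-forest
# point-cloud filter, loosely following the combination described in the abstract.
# All names and features are illustrative assumptions, not the paper's code.

import numpy as np
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier


def ncc(a, b):
    """Normalized cross-correlation between two equally sized grayscale patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b) + 1e-8
    return float((a * b).sum() / denom)


def hog_like(patch, bins=8):
    """Simplified HOG-style cue: one gradient-orientation histogram over the patch
    (a stand-in for the HOG features used in the paper)."""
    gy, gx = np.gradient(patch.astype(np.float64))
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx) % np.pi
    hist, _ = np.histogram(ang, bins=bins, range=(0.0, np.pi), weights=mag)
    return hist / (hist.sum() + 1e-8)


def pair_features(left_patch, right_patch):
    """Stack the matching cues for one candidate left/right patch pair."""
    h_l, h_r = hog_like(left_patch), hog_like(right_patch)
    return np.concatenate([
        [ncc(left_patch, right_patch)],  # photometric similarity
        np.abs(h_l - h_r),               # gradient-orientation difference
    ])


def train_match_classifier(pairs, labels):
    """SVM that decides whether a patch pair is a correct stereo correspondence.
    `pairs` is a list of (left_patch, right_patch); `labels` are 1/0 (hypothetical data)."""
    X = np.array([pair_features(l, r) for l, r in pairs])
    return SVC(kernel="rbf", probability=True).fit(X, labels)


def train_cluster_filter(cluster_features, cluster_labels):
    """Random forest that separates pedestrian clusters from noise in the 3D point cloud.
    `cluster_features` could be, e.g., [height, width, point count, aspect ratio] per cluster."""
    return RandomForestClassifier(n_estimators=100).fit(cluster_features, cluster_labels)
```

In such a pipeline, the SVM score could be used to rank candidate correspondences between the fisheye image pair, and the random forest could then reject non-pedestrian clusters in the triangulated point cloud before position estimation and tracking.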