Lane marking extraction with combination strategy and comparative evaluation on synthetic and camera images

Bibliographic Details
Main Authors: Pollard, Evangeline; Gruyer, Dominique; Tarel, Jean-Philippe; Ieng, Sio-Song; Cord, Aurelien
Format: Conference Paper
Language: English
Description
Abstract: Lane detection and tracking are crucial stages for a great number of Advanced Driving Assistance Systems (ADAS), for instance for road lane following or robust ego localization. In these applications, the most important module is probably the lane marking primitive extraction algorithm. For several decades, many approaches have been proposed to achieve this task. Unfortunately, it is still difficult to guarantee robust results from these extraction algorithms under bad weather conditions, with degraded lane markings, or due to intrinsic limitations of cameras. In this paper we propose an approach to improve the quality of lane marking extraction. By extraction, we mean the classification of the image pixels into two classes: marking and non-marking. The extraction is generally the first step of a marking detection system, so its efficiency has a strong impact on the performance of the whole system. The proposed algorithm is based on the combination of two different extraction algorithms. To validate this work, tests and evaluations are provided that demonstrate the efficiency of the approach. The evaluation is performed on camera images and then on synthetic images. The results with camera and synthetic images are compared and discussed.
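The abstract frames extraction as a per-pixel binary classification (marking vs. non-marking) and the contribution as combining two extractors. The paper's actual extractors and combination rule are not given here, so the following is only a minimal sketch of the general idea, with two hypothetical extractors (a brightness threshold and a crude dark-bright-dark contrast test) fused by intersection:

```python
import numpy as np

def threshold_extractor(img, thresh=200):
    """Hypothetical extractor 1: classify bright pixels as marking candidates."""
    return img >= thresh

def local_contrast_extractor(img, half_width=2, min_contrast=50):
    """Hypothetical extractor 2: keep pixels noticeably brighter than both
    horizontal neighbours at distance `half_width` (a crude dark-bright-dark
    test, since markings are bright stripes on darker asphalt)."""
    padded = np.pad(img.astype(int), ((0, 0), (half_width, half_width)), mode="edge")
    left = padded[:, : img.shape[1]]            # pixel shifted right by half_width
    right = padded[:, 2 * half_width:]          # pixel shifted left by half_width
    center = img.astype(int)
    return (center - left >= min_contrast) & (center - right >= min_contrast)

def combine_extractions(mask_a, mask_b):
    """Combine two marking/non-marking classifications by intersection:
    keep only pixels both extractors agree on. This trades recall for
    precision; the combination rule used in the paper is not specified
    in the abstract and may differ."""
    return mask_a & mask_b

# Toy grayscale road image: dark asphalt, one bright vertical marking stripe,
# plus one bright outlier (e.g. glare) that only the threshold extractor accepts.
img = np.full((5, 9), 60, dtype=np.uint8)
img[:, 4] = 230      # marking stripe
img[0, 0] = 240      # glare pixel: bright, but no dark-bright-dark profile

a = threshold_extractor(img)
b = local_contrast_extractor(img)
combined = combine_extractions(a, b)
```

In this toy case the intersection keeps the whole stripe while rejecting the glare pixel that a single thresholding extractor would misclassify, which illustrates why fusing complementary extractors can improve robustness.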
ISSN: 2153-0009; 2153-0017
DOI: 10.1109/ITSC.2011.6083036