Robust lane line segmentation based on group feature enhancement


Full description

Bibliographic details
Published in: Engineering Applications of Artificial Intelligence, 2023-01, Vol. 117, p. 105568, Article 105568
Main authors: Gao, Xin; Bai, Hanlin; Xiong, Yijin; Bao, Zefeng; Zhang, Guoying
Format: Article
Language: English
Online access: Full text
Description
Abstract: Training a robust deep model for lane line segmentation is challenging due to the complex and changeable scenarios in autonomous driving, such as poor lighting conditions, ambiguous lane lines, and inclement weather. Without further extraction of effective deep features, such models often fail when faced with a large number of unstructured lane lines. In this article, we propose a grouped feature enhancement (GFE) method, ENet-GFE, which uses the similarity between feature channels to further extract effective features while reducing noise. We group the feature channels, evaluate the importance of each sub-feature along the channel and spatial dimensions, and learn within each group based on the relevance and similarity of its content, exploiting the semantic similarity between lane line features and feature channels to obtain more effective output features. GFE can be easily incorporated into a feedforward convolutional neural network (CNN) without adding much computational overhead. We validated the proposed method on the widely used CULane lane line segmentation dataset and on the recent autonomous driving dataset OpenMPD. In experiments, ENet-GFE achieved state-of-the-art performance, and it is currently among the most lightweight models.
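
The abstract describes the grouping mechanism only at a high level. Purely as an illustration (not the authors' implementation), the following PyTorch-style sketch shows one common way a group-wise enhancement block of this kind can be realized, assuming channels are split into groups and re-weighted by the similarity between each spatial position and its group's global descriptor. The class name GroupFeatureEnhance and the groups parameter are hypothetical.

import torch
import torch.nn as nn

class GroupFeatureEnhance(nn.Module):
    # Illustrative group-wise feature enhancement block (hypothetical sketch).
    # Channels are split into groups; within each group, the similarity between
    # each spatial position and the group's global descriptor re-weights the
    # sub-features, emphasizing positions that agree with the group semantics.
    def __init__(self, groups=8):
        super().__init__()
        self.groups = groups
        # Per-group affine parameters for the normalized similarity map
        self.weight = nn.Parameter(torch.zeros(1, groups, 1, 1))
        self.bias = nn.Parameter(torch.ones(1, groups, 1, 1))

    def forward(self, x):                          # x: (B, C, H, W), C divisible by groups
        b, c, h, w = x.size()
        x = x.view(b * self.groups, -1, h, w)      # split channels into groups
        g = x.mean(dim=(2, 3), keepdim=True)       # global descriptor of each group
        sim = (x * g).sum(dim=1, keepdim=True)     # position-descriptor similarity, (B*G, 1, H, W)
        sim = sim.view(b * self.groups, -1)
        sim = (sim - sim.mean(dim=1, keepdim=True)) / (sim.std(dim=1, keepdim=True) + 1e-5)
        sim = sim.view(b, self.groups, h, w) * self.weight + self.bias
        sim = sim.view(b * self.groups, 1, h, w)
        x = x * torch.sigmoid(sim)                 # re-weight sub-features
        return x.view(b, c, h, w)

# Shape check: the block is drop-in, output shape equals input shape.
feat = torch.randn(2, 64, 36, 100)
out = GroupFeatureEnhance(groups=8)(feat)
assert out.shape == feat.shape

Because such a block preserves the input shape and adds only a few parameters per group, it could in principle be inserted between the convolutional stages of an ENet-style encoder; the actual ENet-GFE design should be taken from the paper itself.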
ISSN: 0952-1976; 1873-6769
DOI: 10.1016/j.engappai.2022.105568