STADet: Streaming Timing-Aware Video Lane Detection


Bibliographic Details
Published in: IEEE Transactions on Circuits and Systems for Video Technology, 2024-09, Vol. 34 (9), p. 8644-8656
Main authors: He, Kaijie, Xie, Jun, Dai, Xinguang, Chang, Kenglun, Chen, Feng, Wang, Zhepeng
Format: Article
Language: English
Subjects:
Online access: Order full text
Description
Abstract: Lane detection is a fundamental task in autonomous driving, requiring real-time detection of lanes in streaming video while driving. We address the lack of temporal-flow understanding in existing video lane detectors, propose a streaming video lane detection training framework, and focus on building a series of inter-frame temporal information conduction structures. Specifically, we propose the Deformable Spatio-Temporal Attention (DSTA) module, which accurately captures instantaneous feature changes and position shifts between frames and incorporates key information under different spatio-temporal conditions. To maintain long-term memory at very low computational cost, we design instance caches that suggest likely lanes for the current frame and, drawing on historical memory, resist short-term lane disappearance. We also experimented with adding background category prediction, which simply filters out low-confidence false lane predictions while conveying a more holistic and uniform relationship between lanes and background to the model. These methods allow our model to take a significant lead on the video lane detection dataset VIL-100, reaching an accuracy of 94.9 at a speed of 39 FPS.
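The instance-cache idea in the abstract (suggesting lanes for the current frame from historical memory, and tolerating brief disappearances) can be illustrated with a minimal sketch. This is not the paper's implementation; the class name, `max_age` parameter, and data layout are illustrative assumptions only:

```python
# Hypothetical sketch of an instance cache for streaming lane detection.
# Names and parameters are illustrative, not from the STADet paper.

class LaneInstanceCache:
    """Keeps recently seen lane instances so the detector can propose
    them for the current frame and survive brief occlusions."""

    def __init__(self, max_age=5):
        self.max_age = max_age   # frames a lane may go unseen before eviction
        self.cache = {}          # lane_id -> (feature, frames_since_seen)

    def update(self, detections):
        """detections: dict mapping lane_id -> feature for the current frame."""
        # Refresh lanes detected this frame; their age resets to zero.
        for lane_id, feat in detections.items():
            self.cache[lane_id] = (feat, 0)
        # Age the lanes that were not detected; evict stale ones.
        for lane_id in list(self.cache):
            if lane_id not in detections:
                feat, age = self.cache[lane_id]
                if age + 1 > self.max_age:
                    del self.cache[lane_id]        # long gone: forget it
                else:
                    self.cache[lane_id] = (feat, age + 1)

    def proposals(self):
        """Lane ids to suggest for the next frame, including briefly
        missing ones (resisting short-term disappearance)."""
        return list(self.cache)
```

Because a briefly occluded lane stays in the cache for up to `max_age` frames, the detector can keep proposing it instead of dropping and re-acquiring it, at the cost of only a small per-frame dictionary update.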
ISSN: 1051-8215, 1558-2205
DOI: 10.1109/TCSVT.2024.3389731