ULODNet: A Unified Lane and Obstacle Detection Network Towards Drivable Area Understanding in Autonomous Navigation


Bibliographic Details
Published in: Journal of Intelligent & Robotic Systems, 2022-05, Vol. 105 (1), Article 4
Authors: Zhang, Zhanpeng; Qin, Jiahu; Wang, Shuai; Kang, Yu; Liu, Qingchen
Format: Article
Language: English
Subjects:
Online access: Full text
Description
Abstract: Drivable area understanding is an essential problem in autonomous robot navigation. Mobile robots and other autonomous vehicles need to perceive their surrounding environment, including obstacles, lanes, and free space, to ensure safety. Many recent works have achieved strong results by building on breakthroughs in deep learning. However, these methods address the subproblems separately, which in some cases leads to redundant use of computational resources. We therefore present a unified lane and obstacle detection network, ULODNet, which detects lanes and obstacles jointly and further delineates the drivable areas for mobile robots and other autonomous vehicles. To better coordinate the training of ULODNet, we also create a new dataset, the CULane-ULOD Dataset, based on the widely used CULane Dataset; the new dataset contains both lane labels and obstacle labels, which the original dataset lacks. Finally, to construct an integrated autonomous driving scheme, an area intersection paradigm is introduced that generates driving commands by computing the proportion of obstacle area within the drivable regions. Well-designed comparison experiments verify the efficiency and effectiveness of the new algorithm.
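The area intersection paradigm mentioned in the abstract (computing the proportion of obstacle area within the drivable region and mapping it to a driving command) can be illustrated with a minimal sketch. This is not the authors' implementation: the function names, the binary-mask representation, and the 0.2 stop threshold are all illustrative assumptions.

```python
import numpy as np

def obstacle_area_proportion(drivable_mask: np.ndarray,
                             obstacle_mask: np.ndarray) -> float:
    """Fraction of the drivable region occupied by obstacles.

    Both inputs are boolean H x W masks, e.g. as might be produced by
    the lane/freespace and obstacle outputs of a detection network.
    """
    drivable_pixels = drivable_mask.sum()
    if drivable_pixels == 0:
        return 1.0  # no drivable area visible: treat as fully blocked
    overlap = np.logical_and(drivable_mask, obstacle_mask).sum()
    return overlap / drivable_pixels

def driving_command(proportion: float, threshold: float = 0.2) -> str:
    """Map the obstacle proportion to a coarse command (threshold is illustrative)."""
    return "stop" if proportion >= threshold else "go"

# Toy 4x4 example: half of the drivable area is blocked by an obstacle.
drivable = np.array([[1, 1, 0, 0]] * 4, dtype=bool)
obstacle = np.array([[1, 0, 0, 0]] * 4, dtype=bool)
p = obstacle_area_proportion(drivable, obstacle)  # 0.5
print(driving_command(p))  # prints "stop"
```

A real system would likely use finer-grained commands (e.g. slow down, steer) and per-lane regions, but the core computation is this intersection-over-drivable-area ratio.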
ISSN: 0921-0296, 1573-0409
DOI: 10.1007/s10846-022-01606-3