Graph-based Topology Reasoning for Driving Scenes
Saved in:
Main authors:
Format: Article
Language: English
Keywords:
Online access: Order full text
Summary: Understanding the road genome is essential to realizing autonomous driving. This challenging problem comprises two aspects: the connectivity among lanes, and the assignment of traffic elements to lanes, for which no comprehensive topology reasoning method yet exists. On one hand, previous map learning techniques struggle to derive lane connectivity with segmentation or laneline paradigms, while prior lane topology-oriented approaches focus on centerline detection and neglect interaction modeling. On the other hand, the traffic-element-to-lane assignment problem has been confined to the image domain, leaving the construction of correspondences across the two views an unexplored challenge. To address these issues, we present TopoNet, the first end-to-end framework capable of abstracting traffic knowledge beyond conventional perception tasks. To capture the driving scene topology, we introduce three key designs: (1) an embedding module that incorporates semantic knowledge from 2D elements into a unified feature space; (2) a curated scene graph neural network that models relationships and enables feature interaction inside the network; (3) a scene knowledge graph that, instead of transmitting messages arbitrarily, differentiates prior knowledge for the various types of the road genome. We evaluate TopoNet on the challenging scene understanding benchmark OpenLane-V2, where our approach outperforms all previous works by a large margin on all perceptual and topological metrics. The code is released at https://github.com/OpenDriveLab/TopoNet
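
To make the scene-graph idea in the abstract concrete, the sketch below shows a generic relation-typed message-passing layer in plain PyTorch: edges carry a relation id (e.g. lane-to-lane connectivity vs. traffic-element-to-lane assignment) and each relation uses its own transform, loosely mirroring the notion of differentiating prior knowledge per edge type. This is a minimal illustration only; the class, argument names, and sum aggregation are assumptions for brevity and are not taken from the TopoNet codebase.

```python
# Illustrative sketch only: relation-typed message passing in plain PyTorch.
# All names here are hypothetical, not from the TopoNet repository.
import torch
import torch.nn as nn


class RelationalGraphLayer(nn.Module):
    """One message-passing step over a graph whose edges carry relation types."""

    def __init__(self, dim: int, num_relations: int):
        super().__init__()
        # One linear transform per relation type, so lane-lane edges and
        # traffic-element-lane edges are treated with different weights.
        self.rel_transforms = nn.ModuleList(
            [nn.Linear(dim, dim) for _ in range(num_relations)]
        )
        self.self_transform = nn.Linear(dim, dim)
        self.activation = nn.ReLU()

    def forward(self, node_feats, edge_index, edge_type):
        # node_feats: (N, dim) features of lane / traffic-element nodes
        # edge_index: (2, E) source and target node indices
        # edge_type:  (E,)  relation id of each edge
        out = self.self_transform(node_feats)
        src, dst = edge_index
        for rel_id, transform in enumerate(self.rel_transforms):
            mask = edge_type == rel_id
            if mask.any():
                messages = transform(node_feats[src[mask]])
                # Sum-aggregate incoming messages at their target nodes.
                out = out.index_add(0, dst[mask], messages)
        return self.activation(out)


if __name__ == "__main__":
    # Toy graph: 4 lane nodes + 2 traffic-element nodes, 128-dim features.
    layer = RelationalGraphLayer(dim=128, num_relations=2)
    feats = torch.randn(6, 128)
    edge_index = torch.tensor([[0, 1, 4, 5], [1, 2, 0, 3]])  # src, dst
    edge_type = torch.tensor([0, 0, 1, 1])  # 0: lane-lane, 1: element-lane
    print(layer(feats, edge_index, edge_type).shape)  # torch.Size([6, 128])
```

The per-relation weights are the simplest way to encode edge-type priors; the actual framework described in the paper builds on richer, curated scene knowledge.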
DOI: 10.48550/arxiv.2304.05277