Motion Inspired Unsupervised Perception and Prediction in Autonomous Driving
| Main authors: | |
|---|---|
| Format: | Article |
| Language: | English |
| Subjects: | |
| Online access: | Order full text |
| Abstract: | Learning-based perception and prediction modules in modern autonomous driving systems typically rely on expensive human annotation and are designed to perceive only a handful of predefined object categories. This closed-set paradigm is insufficient for the safety-critical autonomous driving task, where the autonomous vehicle needs to process arbitrarily many types of traffic participants and their motion behaviors in a highly dynamic world. To address this difficulty, this paper pioneers a novel and challenging direction, i.e., training perception and prediction models to understand open-set moving objects, with no human supervision. Our proposed framework uses self-learned flow to trigger an automated meta labeling pipeline to achieve automatic supervision. 3D detection experiments on the Waymo Open Dataset show that our method significantly outperforms classical unsupervised approaches and is even competitive to the counterpart with supervised scene flow. We further show that our approach generates highly promising results in open-set 3D detection and trajectory prediction, confirming its potential in closing the safety gap of fully supervised systems. |
| DOI: | 10.48550/arxiv.2210.08061 |
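
The abstract describes supervision that is generated automatically: self-learned scene flow triggers a meta labeling pipeline that produces pseudo-labels for moving objects. The snippet below is a minimal, hypothetical sketch of such a flow-triggered labeling step, assuming per-point scene flow estimates are already available; the function and parameter names (`generate_pseudo_labels`, `flow_threshold`, `eps`) are illustrative assumptions and are not taken from the paper.

```python
# Hypothetical sketch of flow-triggered meta labeling: points with significant
# self-learned motion are clustered and wrapped in boxes that can serve as
# automatic supervision for a 3D detector. Names and thresholds are assumptions.
import numpy as np
from sklearn.cluster import DBSCAN


def generate_pseudo_labels(points, flow, flow_threshold=0.5, eps=1.0, min_samples=10):
    """Turn per-point scene flow into pseudo 3D bounding boxes.

    points : (N, 3) LiDAR points in the ego frame.
    flow   : (N, 3) self-learned scene flow vectors (meters per sweep).
    Returns a list of axis-aligned boxes (x_min, y_min, z_min, x_max, y_max, z_max).
    """
    # 1. Keep only points whose estimated motion exceeds a threshold,
    #    i.e., points that likely belong to moving objects.
    moving_mask = np.linalg.norm(flow, axis=1) > flow_threshold
    moving_points = points[moving_mask]
    if len(moving_points) == 0:
        return []

    # 2. Group nearby moving points into object candidates.
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(moving_points)

    # 3. Fit a simple axis-aligned box around each cluster; these boxes act as
    #    automatically generated pseudo-labels (no human annotation involved).
    boxes = []
    for cluster_id in set(labels):
        if cluster_id == -1:  # DBSCAN marks noise points with -1
            continue
        cluster = moving_points[labels == cluster_id]
        boxes.append(np.concatenate([cluster.min(axis=0), cluster.max(axis=0)]))
    return boxes


if __name__ == "__main__":
    # Tiny synthetic demo: one compact moving object plus static background.
    rng = np.random.default_rng(0)
    obj = rng.normal(loc=[5.0, 5.0, 0.0], scale=0.5, size=(100, 3))
    background = rng.uniform(-20.0, 20.0, size=(1000, 3))
    pts = np.vstack([obj, background])
    flw = np.zeros_like(pts)
    flw[:100] = [1.0, 0.0, 0.0]  # the object moves ~1 m between sweeps
    print(f"{len(generate_pseudo_labels(pts, flw))} pseudo box(es)")
```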