Learning Cascaded Detection Tasks with Weakly-Supervised Domain Adaptation
Format: Article
Language: English
Abstract: In order to handle the challenges of autonomous driving, deep learning has
proven to be crucial in tackling increasingly complex tasks, such as 3D
detection or instance segmentation. State-of-the-art approaches for image-based
detection tasks tackle this complexity by operating in a cascaded fashion: they
first extract a 2D bounding box based on which additional attributes, e.g.
instance masks, are inferred. While these methods perform well, a key challenge
remains the lack of accurate and cheap annotations for the growing variety of
tasks. Synthetic data presents a promising solution but, despite the effort in
domain adaptation research, the gap between synthetic and real data remains an
open problem. In this work, we propose a weakly supervised domain adaptation
setting which exploits the structure of cascaded detection tasks. In
particular, we learn to infer the attributes solely from the source domain
while leveraging 2D bounding boxes as weak labels in both domains to explain
the domain shift. We further encourage domain-invariant features through
class-wise feature alignment using ground-truth class information, which is not
available in the unsupervised setting. As our experiments demonstrate, the
approach is competitive with fully supervised settings while outperforming
unsupervised adaptation approaches by a large margin.
DOI: 10.48550/arxiv.2107.04523
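
The class-wise feature alignment mentioned in the abstract can be pictured with a small sketch. The following PyTorch snippet is a minimal, hypothetical illustration, not the authors' implementation: it assumes per-region features whose class labels come from ground truth in the source domain and from the 2D bounding-box weak labels in the target domain, and it aligns class-conditional feature means with a simple L2 objective (the paper may use a different alignment criterion). All function and variable names are placeholders.

```python
# Illustrative sketch only (assumed names, not from the paper).
import torch


def classwise_alignment_loss(src_feats, src_labels, tgt_feats, tgt_labels, num_classes):
    """Align per-class mean features between source and target domains.

    src_feats, tgt_feats: (N, D) per-region feature tensors
    src_labels, tgt_labels: (N,) integer class labels
        (ground truth on the source side, derived from 2D-box weak labels on the target side)
    """
    loss = src_feats.new_zeros(())
    matched = 0
    for c in range(num_classes):
        src_c = src_feats[src_labels == c]
        tgt_c = tgt_feats[tgt_labels == c]
        # Only align classes that appear in both domains within the batch.
        if src_c.numel() == 0 or tgt_c.numel() == 0:
            continue
        # Squared L2 distance between class-conditional feature means,
        # a simple stand-in for more elaborate alignment objectives.
        loss = loss + (src_c.mean(dim=0) - tgt_c.mean(dim=0)).pow(2).sum()
        matched += 1
    return loss / max(matched, 1)


# Hypothetical usage inside a training step:
src_feats = torch.randn(8, 256)
tgt_feats = torch.randn(6, 256)
src_labels = torch.randint(0, 3, (8,))   # ground-truth classes (source)
tgt_labels = torch.randint(0, 3, (6,))   # classes from 2D-box weak labels (target)
loss = classwise_alignment_loss(src_feats, src_labels, tgt_feats, tgt_labels, num_classes=3)
```

In training, such a term would typically be added to the detection and attribute losses that are supervised on the source domain only, consistent with the abstract's statement that attributes are learned solely from source-domain labels.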