GSDC Transformer: An Efficient and Effective Cue Fusion for Monocular Multi-Frame Depth Estimation
Main Authors:
Format: Article
Language: English
Subjects:
Online Access: Order full text
Abstract: Depth estimation provides an alternative approach for perceiving 3D
information in autonomous driving. Monocular depth estimation, whether with
single-frame or multi-frame inputs, has achieved significant success by
learning various types of cues and specializing in either static or dynamic
scenes. Recently, the fusion of these cues has become an attractive topic, aiming to
enable the combined cues to perform well in both types of scenes. However,
adaptive cue fusion relies on attention mechanisms, where the quadratic
complexity limits the granularity of cue representation. Additionally, explicit
cue fusion depends on precise segmentation, which imposes a heavy burden on
mask prediction. To address these issues, we propose the GSDC Transformer, an
efficient and effective component for cue fusion in monocular multi-frame depth
estimation. We utilize deformable attention to learn cue relationships at a
fine scale, while sparse attention reduces computational requirements when
granularity increases. To compensate for the precision drop in dynamic scenes,
we represent scene attributes in the form of super tokens without relying on
precise shapes. Within each super token attributed to dynamic scenes, we gather
its relevant cues and learn local dense relationships to enhance cue fusion.
Our method achieves state-of-the-art performance on the KITTI dataset with
efficient fusion speed.
DOI: 10.48550/arxiv.2309.17059
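As a rough illustration of the fusion step the abstract describes, the sketch below shows a minimal deformable cross-attention between two cue feature maps: each query position in one cue branch samples a few learned offset locations in the other branch and combines them with predicted weights, so the cost stays linear in the number of positions rather than quadratic. This assumes PyTorch; the class and variable names (DeformableCueFusion, offset_head, etc.) are hypothetical, and it is not the authors' GSDC Transformer implementation. The sparse-attention and super-token components for dynamic regions are omitted for brevity.

```python
# Minimal sketch of deformable cross-attention for cue fusion (not the authors' code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class DeformableCueFusion(nn.Module):
    def __init__(self, dim: int, num_points: int = 4):
        super().__init__()
        self.num_points = num_points
        # Each query (a pixel of one cue map) predicts where to sample the other
        # cue map and how to weight the sampled values.
        self.offset_head = nn.Linear(dim, 2 * num_points)  # (dx, dy) per sampling point
        self.weight_head = nn.Linear(dim, num_points)      # attention weight per point
        self.value_proj = nn.Conv2d(dim, dim, kernel_size=1)
        self.out_proj = nn.Linear(dim, dim)

    def forward(self, query_cue: torch.Tensor, key_cue: torch.Tensor) -> torch.Tensor:
        # query_cue, key_cue: (B, C, H, W) feature maps from the two cue branches.
        B, C, H, W = query_cue.shape
        q = query_cue.flatten(2).transpose(1, 2)            # (B, H*W, C)

        # Reference grid in normalized [-1, 1] coordinates for grid_sample.
        ys, xs = torch.meshgrid(
            torch.linspace(-1, 1, H, device=q.device),
            torch.linspace(-1, 1, W, device=q.device),
            indexing="ij",
        )
        ref = torch.stack([xs, ys], dim=-1).view(1, H * W, 1, 2)

        # Keep predicted offsets small so sampling stays near the query location.
        offsets = self.offset_head(q).view(B, H * W, self.num_points, 2)
        offsets = offsets.tanh() * (2.0 / max(H, W))
        sample_grid = ref + offsets                         # (B, H*W, P, 2)

        value = self.value_proj(key_cue)                    # (B, C, H, W)
        sampled = F.grid_sample(
            value, sample_grid, mode="bilinear", align_corners=True
        )                                                   # (B, C, H*W, P)

        attn = self.weight_head(q).softmax(dim=-1)          # (B, H*W, P)
        fused = (sampled * attn.unsqueeze(1)).sum(dim=-1)   # (B, C, H*W)
        fused = self.out_proj(fused.transpose(1, 2))        # (B, H*W, C)
        return fused.transpose(1, 2).view(B, C, H, W)

# Usage: fuse a single-frame cue map with a multi-frame (cost-volume) cue map.
if __name__ == "__main__":
    fusion = DeformableCueFusion(dim=64)
    single_frame_cue = torch.randn(2, 64, 24, 80)
    multi_frame_cue = torch.randn(2, 64, 24, 80)
    out = fusion(single_frame_cue, multi_frame_cue)
    print(out.shape)  # torch.Size([2, 64, 24, 80])
```

In the paper's setting, the super-token mechanism would further gather the cues inside each region flagged as dynamic and apply dense attention locally there; the sketch above only covers the sparse, offset-based sampling idea.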