Joint-Confidence-guided Multi-Task Learning for 3D Reconstruction and Understanding from Monocular Camera

Bibliographic Details
Published in: IEEE Transactions on Image Processing, 2023-01, Vol. PP, pp. 1-1
Authors: Wang, Yufan; Zhao, Qunfei; Gan, Yangzhou; Xia, Zeyang
Format: Article
Language: English
Abstract: 3D reconstruction and understanding from a monocular camera is a key problem in computer vision. Recent learning-based approaches, especially multi-task learning, have significantly improved the performance of the related tasks. However, a few works still have limitations in drawing loss-spatial-aware information. In this paper, we propose a novel Joint-Confidence-guided Network (JCNet) that simultaneously predicts depth, semantic labels, surface normals, and a joint confidence map for the corresponding loss functions. In detail, we design a Joint Confidence Fusion and Refinement (JCFR) module to fuse multi-task features in a unified independent space, which also absorbs the geometric-semantic structure feature of the joint confidence map. The confidence-guided uncertainty generated by the joint confidence map supervises the multi-task predictions across the spatial and channel dimensions. To alleviate the imbalance of training attention among different loss functions and spatial regions, a Stochastic Trust Mechanism (STM) is designed to stochastically modify the elements of the joint confidence map during training. Finally, a calibrating operation alternately optimizes the joint confidence branch and the other parts of JCNet to avoid overfitting. The proposed method achieves state-of-the-art performance in both geometric-semantic prediction and uncertainty estimation on NYU-Depth V2 and Cityscapes.
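This record does not include the paper's equations, but the abstract's core mechanism (per-pixel losses attenuated by a predicted joint confidence map, with STM randomly perturbing confidence elements during training) can be sketched. The PyTorch snippet below is a minimal, hypothetical reading that substitutes a standard aleatoric-uncertainty-weighted loss for the paper's unspecified formulation; the names stochastic_trust and confidence_weighted_loss and the reset probability p are illustrative assumptions, not the authors' implementation.

```python
import torch

def stochastic_trust(conf: torch.Tensor, p: float = 0.1) -> torch.Tensor:
    # Hypothetical STM: reset a random fraction p of confidence elements
    # to full trust (1.0), so that no loss term or spatial region stays
    # permanently down-weighted during training.
    reset = torch.rand_like(conf) < p
    return torch.where(reset, torch.ones_like(conf), conf)

def confidence_weighted_loss(pred: torch.Tensor,
                             target: torch.Tensor,
                             conf: torch.Tensor,
                             training: bool = True) -> torch.Tensor:
    # conf lies in (0, 1], e.g. a sigmoid output of the joint confidence
    # branch. The per-pixel L1 residual is scaled by conf, and -log(conf)
    # penalizes blanket low confidence (a common aleatoric-uncertainty
    # form, assumed here rather than taken from the paper).
    if training:
        conf = stochastic_trust(conf)
    return (conf * (pred - target).abs() - torch.log(conf)).mean()
```

Under this reading, regions the network distrusts contribute less to each task loss, while the log term prevents it from evading supervision everywhere; alternating optimization of the confidence branch and the rest of the network, as the abstract's calibrating operation describes, would then keep the two from co-adapting.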
ISSN: 1057-7149
eISSN: 1941-0042
DOI: 10.1109/TIP.2023.3240834