Self-supervised depth super-resolution with contrastive multiview pre-training
Published in: Neural Networks, 2023-11, Vol. 168, p. 223-236
Main authors: , , , , ,
Format: Article
Language: English
Subjects:
Online access: Full text
Summary:

Many low-level vision tasks, including guided depth super-resolution (GDSR), struggle with the issue of insufficient paired training data. Self-supervised learning is a promising solution, but it remains challenging to upsample depth maps without the explicit supervision of high-resolution target images. To alleviate this problem, we propose a self-supervised depth super-resolution method with contrastive multiview pre-training. Unlike existing contrastive learning methods for classification or segmentation tasks, our strategy can be applied to regression tasks even when trained on a small-scale dataset, and it reduces information redundancy by extracting unique features from the guide. Furthermore, we propose a novel mutual modulation scheme that effectively computes the local spatial correlation between cross-modal features. Exhaustive experiments demonstrate that our method attains superior performance with respect to state-of-the-art GDSR methods and exhibits good generalization to other modalities.
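The pre-training described above combines an autoencoder-style reconstruction objective with a contrastive objective across two views: the low-resolution depth map and its guide image. The abstract gives no implementation details, so the following PyTorch snippet is only a minimal sketch of that kind of objective; the encoder/decoder architectures, the InfoNCE formulation over pooled features, and the loss weighting are illustrative assumptions, not the paper's actual design.

```python
# Minimal sketch of a contrastive + reconstruction pre-training objective
# for paired depth / guide inputs. All names and shapes are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ContrastivePretrainSketch(nn.Module):
    def __init__(self, feat_dim=64):
        super().__init__()
        # lightweight conv encoders for the two modalities (assumed architecture)
        self.depth_enc = nn.Sequential(nn.Conv2d(1, feat_dim, 3, padding=1), nn.ReLU(),
                                       nn.Conv2d(feat_dim, feat_dim, 3, padding=1))
        self.guide_enc = nn.Sequential(nn.Conv2d(3, feat_dim, 3, padding=1), nn.ReLU(),
                                       nn.Conv2d(feat_dim, feat_dim, 3, padding=1))
        # decoder reconstructs the low-resolution depth input (autoencoder branch)
        self.decoder = nn.Sequential(nn.Conv2d(feat_dim, feat_dim, 3, padding=1), nn.ReLU(),
                                     nn.Conv2d(feat_dim, 1, 3, padding=1))

    def forward(self, depth_lr, guide):
        f_d = self.depth_enc(depth_lr)   # (B, C, H, W) depth features
        f_g = self.guide_enc(guide)      # (B, C, H, W) guide features
        recon = self.decoder(f_d)        # reconstruction of the depth input
        return f_d, f_g, recon

def info_nce(f_d, f_g, temperature=0.1):
    """InfoNCE over globally pooled features: paired depth/guide samples in
    the batch are positives, every other pairing acts as a negative."""
    z_d = F.normalize(f_d.mean(dim=(2, 3)), dim=1)   # (B, C)
    z_g = F.normalize(f_g.mean(dim=(2, 3)), dim=1)   # (B, C)
    logits = z_d @ z_g.t() / temperature             # (B, B) similarity matrix
    targets = torch.arange(z_d.size(0), device=z_d.device)
    return F.cross_entropy(logits, targets)

# One illustrative training step; batch size and loss weight are placeholders.
model = ContrastivePretrainSketch()
depth_lr, guide = torch.rand(4, 1, 64, 64), torch.rand(4, 3, 64, 64)
f_d, f_g, recon = model(depth_lr, guide)
loss = info_nce(f_d, f_g) + 1.0 * F.mse_loss(recon, depth_lr)
loss.backward()
```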
Highlights:
- We propose a self-supervised method for depth upsampling with contrastive multiview pre-training.
- In the pre-training, we develop an autoencoder-based framework with contrastive and reconstruction losses.
- We propose a novel mutual modulation scheme (sketched below).
- Experimental results demonstrate the superiority of the proposed method against SOTA methods.
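The mutual modulation scheme in the third highlight is described only as computing the local spatial correlation between cross-modal features. As a rough illustration of what such a module could look like, the sketch below re-weights each modality's features by a windowed cosine-similarity map with the other modality; the window size, pooling-based local matching, and sigmoid gating are assumptions for illustration and are not taken from the paper.

```python
# Hedged sketch of a mutual modulation block driven by local cross-modal
# correlation. Hyperparameters and the exact update rule are assumptions.
import torch
import torch.nn.functional as F

def local_correlation(f_a, f_b, window=3):
    """Cosine similarity between f_a at each pixel and a local window-average
    of f_b centered at that pixel. Returns a (B, 1, H, W) correlation map."""
    pad = window // 2
    f_b_local = F.avg_pool2d(f_b, window, stride=1, padding=pad)
    sim = F.cosine_similarity(f_a, f_b_local, dim=1, eps=1e-6)
    return sim.unsqueeze(1)

def mutual_modulation(f_depth, f_guide, window=3):
    """Modulate each modality's features by their local correlation with the other."""
    corr_dg = local_correlation(f_depth, f_guide, window)  # depth vs. guide
    corr_gd = local_correlation(f_guide, f_depth, window)  # guide vs. depth
    f_depth_mod = f_depth * torch.sigmoid(corr_dg)
    f_guide_mod = f_guide * torch.sigmoid(corr_gd)
    return f_depth_mod, f_guide_mod

# Example usage with dummy cross-modal feature maps.
f_depth = torch.rand(2, 64, 32, 32)
f_guide = torch.rand(2, 64, 32, 32)
f_depth_mod, f_guide_mod = mutual_modulation(f_depth, f_guide)
print(f_depth_mod.shape, f_guide_mod.shape)  # torch.Size([2, 64, 32, 32]) each
```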
ISSN: 0893-6080, 1879-2782
DOI: 10.1016/j.neunet.2023.09.023