Learning Piecewise Planar Representation for RGB Guided Depth Super-Resolution
Saved in:
Published in: IEEE Transactions on Computational Imaging, 2024, Vol. 10, pp. 1266-1279
Main Authors: , , ,
Format: Article
Language: eng
Subjects:
Online Access: Order full text
Summary: RGB guided depth super-resolution (GDSR) aims to reconstruct high-resolution (HR) depth images from low-resolution ones using HR RGB images as guidance, overcoming the resolution limitation of depth cameras. The main challenge in this task is how to effectively explore the HR information from RGB images while avoiding texture being over-transferred. To address this challenge, we propose a novel method for GDSR based on the piecewise planar representation in the 3D space, which naturally focuses on the geometry information of scenes without concerning the internal textures. Specifically, we design a plane-aware interaction module to effectively bridge the RGB and depth modalities and perform information interaction by taking piecewise planes as the intermediary. We also devise a plane-guided fusion module to further remove modality-inconsistent information. To mitigate the distribution gap between synthetic and real-world data, we propose a self-training adaptation strategy for the real-world deployment of our method. Comprehensive experimental results on multiple representative datasets demonstrate the superiority of our method over existing state-of-the-art GDSR methods.
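The summary's piecewise planar representation builds on a parameterization common in planar-depth works: a 3D plane induces, at each pixel, an inverse depth that is linear in the normalized pixel coordinates, so all pixels on one plane form a smooth depth patch with no internal texture. The sketch below illustrates that parameterization only; it is not the paper's model, and the function name, array layout, and camera intrinsics (`fx`, `fy`, `cx`, `cy`) are assumptions for illustration.

```python
import numpy as np

def plane_to_depth(coeffs, fx=500.0, fy=500.0, cx=64.0, cy=64.0):
    """Convert per-pixel plane coefficients (a, b, c) to a depth map.

    Standard plane-induced depth parameterization (not the paper's exact
    formulation): a plane satisfying n . X = 1 in camera space gives, at
    pixel (u, v), the inverse depth
        1/d = a * (u - cx) / fx + b * (v - cy) / fy + c,
    so depth varies smoothly within each planar segment.
    coeffs: (H, W, 3) array holding (a, b, c) at every pixel.
    """
    H, W, _ = coeffs.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))  # pixel grids, shape (H, W)
    a, b, c = coeffs[..., 0], coeffs[..., 1], coeffs[..., 2]
    inv_depth = a * (u - cx) / fx + b * (v - cy) / fy + c
    # Clip to avoid division by zero for degenerate (near-infinite-depth) planes.
    return 1.0 / np.clip(inv_depth, 1e-6, None)

# Example: a single fronto-parallel plane with c = 1/d = 0.5 yields a
# constant depth map of 2.0 m across all pixels.
coeffs = np.zeros((8, 8, 3))
coeffs[..., 2] = 0.5
depth = plane_to_depth(coeffs)
```

Because the representation carries only plane geometry, RGB texture inside a segment cannot leak into the reconstructed depth, which is the intuition behind using planes as the cross-modal intermediary.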
ISSN: 2573-0436, 2333-9403
DOI: 10.1109/TCI.2024.3439990