D3RoMa: Disparity Diffusion-based Depth Sensing for Material-Agnostic Robotic Manipulation
Format: Article
Language: English
Online access: Order full text
Abstract: Depth sensing is an important problem for 3D vision-based robotics. Yet, real-world active stereo or ToF depth cameras often produce noisy and incomplete depth maps that bottleneck robot performance. In this work, we propose D3RoMa, a learning-based depth estimation framework for stereo image pairs that predicts clean and accurate depth in diverse indoor scenes, even in the most challenging scenarios with translucent or specular surfaces where classical depth sensing completely fails. Key to our method is that we unify depth estimation and restoration into an image-to-image translation problem by predicting the disparity map with a denoising diffusion probabilistic model. At inference time, we further incorporate a left-right consistency constraint as classifier guidance for the diffusion process. Our framework combines recent learning-based approaches with geometric constraints from traditional stereo vision. For model training, we create a large scene-level synthetic dataset with diverse transparent and specular objects to complement existing tabletop datasets. The trained model can be applied directly to real-world in-the-wild scenes and achieves state-of-the-art performance on multiple public depth estimation benchmarks. Further experiments in real environments show that accurate depth prediction significantly improves robotic manipulation in various scenarios.
DOI: 10.48550/arxiv.2409.14365
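
The abstract's central algorithmic idea, predicting the disparity map with a denoising diffusion probabilistic model and steering the reverse process with a left-right consistency gradient used as classifier guidance, can be illustrated with a minimal sketch. The code below is not the authors' implementation: the noise-prediction network `eps_model`, the schedule tensors `alphas_cumprod` and `betas`, the guidance `scale`, and the warping helper are hypothetical placeholders for one simplified DDPM reverse step under standard stereo assumptions.

```python
# Minimal sketch (not the D3RoMa codebase): one classifier-guided DDPM reverse
# step on a disparity map, with a hypothetical left-right photometric
# consistency loss supplying the guidance gradient.
import torch
import torch.nn.functional as F

def warp_right_to_left(right, disparity):
    """Reconstruct the left view by sampling the right image at (x - d, y)."""
    b, _, h, w = right.shape
    ys, xs = torch.meshgrid(
        torch.linspace(-1, 1, h, device=right.device),
        torch.linspace(-1, 1, w, device=right.device),
        indexing="ij",
    )
    # Shift x-coordinates by the disparity, converted to normalized grid units.
    xs = xs.unsqueeze(0) - 2.0 * disparity.squeeze(1) / (w - 1)
    grid = torch.stack([xs, ys.unsqueeze(0).expand_as(xs)], dim=-1)
    return F.grid_sample(right, grid, align_corners=True)

def guided_ddpm_step(x_t, t, left, right, eps_model, alphas_cumprod, betas, scale=1.0):
    """One reverse-diffusion step on the disparity x_t (B,1,H,W), guided by
    left-right consistency. `alphas_cumprod` and `betas` are 1-D tensors."""
    a_bar, beta = alphas_cumprod[t], betas[t]
    eps = eps_model(x_t, t, left, right)                        # predicted noise
    # Unguided DDPM posterior mean: 1/sqrt(alpha_t) * (x_t - beta_t/sqrt(1-a_bar) * eps)
    mean = (x_t - beta / (1 - a_bar).sqrt() * eps) / (1 - beta).sqrt()
    # Guidance: gradient of the photometric consistency loss w.r.t. x_t,
    # evaluated on the denoised disparity estimate d0.
    with torch.enable_grad():
        x = x_t.detach().requires_grad_(True)
        d0 = (x - (1 - a_bar).sqrt() * eps_model(x, t, left, right)) / a_bar.sqrt()
        loss = F.l1_loss(warp_right_to_left(right, d0), left)
        grad = torch.autograd.grad(loss, x)[0]
    mean = mean - scale * beta * grad                           # steer toward consistency
    noise = torch.randn_like(x_t) if t > 0 else torch.zeros_like(x_t)
    return mean + beta.sqrt() * noise
```

In this sketch the guidance term plays the role of the classifier in classifier-guided diffusion: instead of a learned classifier's log-likelihood gradient, the sampler descends the gradient of a stereo reprojection error, which is one plausible reading of the "left-right consistency constraint as classifier guidance" described in the abstract.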