OcclusionFusion: Occlusion-aware Motion Estimation for Real-time Dynamic 3D Reconstruction
Format: Article
Language: English
Abstract: RGBD-based real-time dynamic 3D reconstruction suffers from inaccurate inter-frame motion estimation, as errors may accumulate during online tracking. This problem is even more severe for single-view systems due to strong occlusions. Based on these observations, we propose OcclusionFusion, a novel method to compute occlusion-aware 3D motion to guide the reconstruction. In our technique, the motion of visible regions is first estimated and combined with temporal information to infer the motion of occluded regions through an LSTM-involved graph neural network. Furthermore, our method computes the confidence of the estimated motion by modeling the network output with a probabilistic model, which suppresses untrustworthy motions and enables robust tracking. Experimental results on public datasets and our own recorded data show that our technique outperforms existing single-view real-time methods by a large margin. With the reduction of motion errors, the proposed technique can handle long and challenging motion sequences. Please see the project page for sequence results: https://wenbin-lin.github.io/OcclusionFusion.
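The abstract mentions that motion confidence is obtained by modeling the network output probabilistically and is then used to down-weight untrustworthy motions during tracking. As a minimal illustrative sketch (not the authors' code; the function names, the log-variance parameterization, and the exact confidence mapping are all assumptions), a per-node predicted log-variance can be mapped to a confidence weight that scales that node's tracking residual:

```python
import math

def motion_confidence(log_var: float) -> float:
    """Map a predicted log-variance to a confidence in (0, 1].

    Illustrative assumption: higher predicted variance means the network
    is less certain about that node's 3D motion, so confidence decays.
    """
    return math.exp(-0.5 * math.exp(log_var))

def weighted_motion_residual(pred, obs, log_var: float) -> float:
    """Confidence-weighted squared residual between a predicted and an
    observed per-node 3D motion; low-confidence nodes contribute less
    to the tracking energy."""
    w = motion_confidence(log_var)
    return w * sum((p - o) ** 2 for p, o in zip(pred, obs))
```

In a tracking energy, such weights let well-observed (visible) nodes dominate while occluded nodes with uncertain inferred motion are softly discounted rather than discarded.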
DOI: 10.48550/arxiv.2203.07977